<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Mixinet BlogOps</title>
    <link>https://blogops.mixinet.net/</link>
    <description>Recent content on Mixinet BlogOps</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <copyright>© 2022-2025 Sergio Talens-Oliag.
</copyright>
    <lastBuildDate>Thu, 23 Apr 2026 19:40:00 +0200</lastBuildDate><atom:link href="https://blogops.mixinet.net/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Developing a Git Worktree Helper with Copilot</title>
      <link>https://blogops.mixinet.net/posts/developing_a_git_worktree_helper_with_copilot/</link>
      <pubDate>Thu, 23 Apr 2026 19:40:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/developing_a_git_worktree_helper_with_copilot/</guid>
      <description>&lt;section id=&#34;preamble&#34; aria-label=&#34;Preamble&#34;&gt;&lt;p&gt;Over the past few weeks I’ve been developing and using a personal command-line
tool called &lt;code&gt;gwt&lt;/code&gt; (&lt;em&gt;Git Worktree&lt;/em&gt;) to manage Git repositories using worktrees.
This article explains what the tool does, how it evolved, and how I used
&lt;a href=&#34;https://github.com/features/copilot/cli/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;GitHub Copilot CLI&lt;/a&gt; to develop it (in
fact, part of the point of building the script was to test Copilot itself).&lt;/p&gt;&lt;/section&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_the_problem_managing_multiple_branches&#34;&gt;The Problem: Managing Multiple Branches&lt;/h2&gt;&lt;p&gt;I was working on a project with multiple active branches, including orphans; the
regular branches are for fixes or features, while the orphans are used to keep
copies of remote documents or store processed versions of those documents.&lt;/p&gt;
&lt;p&gt;The project also uses a special orphan branch that contains the scripts and the
CI/CD configuration to store and process the external documents (it is on a
separate branch to avoid mixing its operation with the main project code).&lt;/p&gt;
&lt;p&gt;The plan is to trigger a pipeline against the special branch from remote projects
to create or update the doc branch for it in our git repository, retrieving
artifacts from the remote projects to get the files and put them on an orphan
branch (initially I added new commits after each update, but I changed the
system to use force pushes and keep only one commit, as the history is not
really needed).&lt;/p&gt;
&lt;p&gt;The original documents have to be changed, so, after ingesting them, we run a
script that modifies them and adds or updates another branch with the processed
version; the contents of that branch are used by the &lt;code&gt;main&lt;/code&gt; branch build process
(there we use &lt;code&gt;git fetch&lt;/code&gt; and &lt;code&gt;git archive&lt;/code&gt; to retrieve its contents).&lt;/p&gt;
&lt;p&gt;When working on the scripts to manage the orphan branches I discovered the
&lt;a href=&#34;https://git-scm.com/docs/git-worktree&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;worktree&lt;/a&gt; feature of &lt;code&gt;git&lt;/code&gt;, a
functionality that allows me to keep multiple branches checked out in parallel
using a single &lt;code&gt;.git&lt;/code&gt; folder, removing the need to use &lt;code&gt;git switch&lt;/code&gt; and &lt;code&gt;git
stash&lt;/code&gt; when changing between branches (until now I’ve been a heavy user of those
commands).&lt;/p&gt;
&lt;p&gt;Reading about it I found that a lot of people use worktrees with the help of a
wrapper script to simplify the management. After looking at one or two posts
and the related scripts I decided to create my own using a specific directory
structure to simplify things.&lt;/p&gt;
&lt;p&gt;That’s how I started to work on the &lt;code&gt;gwt&lt;/code&gt; script; as I also wanted to test
&lt;code&gt;copilot&lt;/code&gt; I decided to build it using its help (I have a pro license at work and
wanted to play with the CLI version instead of the one integrated into an editor, as I
didn’t want to learn a lot of new keyboard shortcuts).&lt;/p&gt;&lt;/section&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_the_gwt_philosophy_opinionated_and_transparent&#34;&gt;The gwt Philosophy: Opinionated and Transparent&lt;/h2&gt;&lt;p&gt;&lt;code&gt;gwt&lt;/code&gt; enforces a simple, filesystem-visible model:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Exactly one bare repository&lt;/strong&gt; named &lt;code&gt;bare.git&lt;/code&gt; (treated as an implementation
detail)&lt;/li&gt;&lt;li&gt;&lt;strong&gt;One worktree directory per branch&lt;/strong&gt; where the directory name matches the
branch name&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Single responsibility&lt;/strong&gt;: &lt;code&gt;gwt&lt;/code&gt; doesn’t try to be a general &lt;code&gt;git&lt;/code&gt; wrapper; it
only handles operations that map cleanly to this layout&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<section id="preamble" aria-label="Preamble"><p>Over the past few weeks I’ve been developing and using a personal command-line
tool called <code>gwt</code> (<em>Git Worktree</em>) to manage Git repositories using worktrees.
This article explains what the tool does, how it evolved, and how I used
<a href="https://github.com/features/copilot/cli/" target="_blank" rel="noopener">GitHub Copilot CLI</a> to develop it (in
fact, part of the point of building the script was to test Copilot itself).</p></section>
<section class="doc-section level-1"><h2 id="_the_problem_managing_multiple_branches">The Problem: Managing Multiple Branches</h2><p>I was working on a project with multiple active branches, including orphans; the
regular branches are for fixes or features, while the orphans are used to keep
copies of remote documents or store processed versions of those documents.</p>
<p>The project also uses a special orphan branch that contains the scripts and the
CI/CD configuration to store and process the external documents (it is on a
separate branch to avoid mixing its operation with the main project code).</p>
<p>The plan is to trigger a pipeline against the special branch from remote projects
to create or update the doc branch for it in our git repository, retrieving
artifacts from the remote projects to get the files and put them on an orphan
branch (initially I added new commits after each update, but I changed the
system to use force pushes and keep only one commit, as the history is not
really needed).</p>
<p>The original documents have to be changed, so, after ingesting them, we run a
script that modifies them and adds or updates another branch with the processed
version; the contents of that branch are used by the <code>main</code> branch build process
(there we use <code>git fetch</code> and <code>git archive</code> to retrieve its contents).</p>
<p>When working on the scripts to manage the orphan branches I discovered the
<a href="https://git-scm.com/docs/git-worktree" target="_blank" rel="noopener">worktree</a> feature of <code>git</code>, a
functionality that allows me to keep multiple branches checked out in parallel
using a single <code>.git</code> folder, removing the need to use <code>git switch</code> and <code>git
stash</code> when changing between branches (until now I’ve been a heavy user of those
commands).</p>
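<p>As an illustration (a throwaway local repository, names made up; <code>git init -b</code> needs git 2.28 or newer), this is roughly what the feature gives you with plain <code>git</code> commands: one bare clone plus one checked-out directory per branch.</p>

```shell
# Illustration with plain git (not gwt): one bare clone plus one
# checked-out worktree directory per branch, in a throwaway repository.
set -e
tmp="$(mktemp -d)" && cd "$tmp"

# A source repository with a main branch and a feature branch.
git init -q -b main src
git -C src -c user.name=t -c user.email=t@t commit -q --allow-empty -m init
git -C src branch feature/api

# The bare clone, then one worktree per branch next to it.
git clone -q --bare src repo/bare.git
git -C repo/bare.git worktree add ../main main
git -C repo/bare.git worktree add ../feature/api feature/api

ls repo   # bare.git  feature  main
```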
<p>Reading about it I found that a lot of people use worktrees with the help of a
wrapper script to simplify the management. After looking at one or two posts
and the related scripts I decided to create my own using a specific directory
structure to simplify things.</p>
<p>That’s how I started to work on the <code>gwt</code> script; as I also wanted to test
<code>copilot</code> I decided to build it using its help (I have a pro license at work and
wanted to play with the CLI version instead of the one integrated into an editor, as I
didn’t want to learn a lot of new keyboard shortcuts).</p></section>
<section class="doc-section level-1"><h2 id="_the_gwt_philosophy_opinionated_and_transparent">The gwt Philosophy: Opinionated and Transparent</h2><p><code>gwt</code> enforces a simple, filesystem-visible model:</p>
<div class="ulist"><ul><li><strong>Exactly one bare repository</strong> named <code>bare.git</code> (treated as an implementation
detail)</li><li><strong>One worktree directory per branch</strong> where the directory name matches the
branch name</li><li><strong>Single responsibility</strong>: <code>gwt</code> doesn’t try to be a general <code>git</code> wrapper; it
only handles operations that map cleanly to this layout</li></ul></div>
<p>The repository structure looks like this:</p>
<div class="listing-block"><pre class="highlight"><code>my-repo/
+-- bare.git/           # the Git repository (internal)
+-- main/               # worktree for branch &#34;main&#34;
+-- feature/api/        # worktree for branch &#34;feature/api&#34;
+-- fix/docs/           # worktree for branch &#34;fix/docs&#34;
+-- orphan-history/     # worktree for the &#34;orphan-history&#34; branch</code></pre></div>
<p>The tool follows five core design principles:</p>
<div class="olist arabic"><ol class="arabic"><li><strong>Explicit over clever</strong>: Git commands are not hidden or reinterpreted</li><li><strong>Transparent execution</strong>: Every operation is printed before it happens</li><li><strong>Safe, preview-first operations</strong>: Destructive commands default to preview,
confirmation, then apply</li><li><strong>Shell-agnostic core</strong>: The script never changes the caller’s working
directory (shell wrappers handle that)</li><li><strong>Opinionated but minimal</strong>: Only commands that fit the layout model are
included</li></ol></div></section>
<section class="doc-section level-1"><h2 id="_core_commands">Core Commands</h2><p>The script provides these essential commands:</p>
<div class="ulist"><ul><li><code>gwt init &lt;url&gt;</code> — Clone a repository and set up the <code>gwt</code> layout</li><li><code>gwt convert &lt;dir&gt;</code> — Convert an existing Git checkout to the <code>gwt</code> layout</li><li><code>gwt add [--orphan] &lt;branch&gt; [&lt;base&gt;]</code> — Create a new worktree (optionally
orphaned)</li><li><code>gwt remove &lt;branch&gt;</code> — Remove a worktree and unregister it (asks the user to
remove the local branch too, useful when removing already merged branches)</li><li><code>gwt rename &lt;old&gt; &lt;new&gt;</code> — Rename a branch AND its worktree directory</li><li><code>gwt list</code> — List all worktrees</li><li><code>gwt default [&lt;branch&gt;]</code> — Get or set the default branch</li><li><code>gwt current</code> — Print the current worktree or branch name</li></ul></div>
<p>Except for <code>init</code> and <code>convert</code>, all commands must be run inside a
directory structure that follows the <code>gwt</code> layout; the tool looks for the
<code>bare.git</code> folder to find the root of the structure.</p>
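<p>The discovery step can be sketched like this (a minimal sketch with a made-up function name, not the actual <code>gwt</code> code):</p>

```shell
# Walk up from a directory until one containing bare.git is found;
# print that directory (hypothetical helper, not the real gwt code).
find_root() {
  local dir="${1:-$PWD}"
  while [ "$dir" != "/" ] && [ "$dir" != "." ]; do
    if [ -d "$dir/bare.git" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
    dir="$(dirname "$dir")"
  done
  return 1
}
```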
<p>As I don’t want to hide which commands are really used by the wrapper, all <code>git</code>
and filesystem operations pass through a single <code>run</code> shell function that prints
each command before executing it. This gives complete visibility into what the
tool is doing.</p>
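<p>Conceptually the helper is tiny; a sketch of the idea (not the actual implementation):</p>

```shell
# Print each command with a "+" prefix, then execute it, so the user
# always sees the real git/filesystem operations being run.
run() {
  printf '+ %s\n' "$*"
  "$@"
}

run echo hello   # prints "+ echo hello" and then "hello"
```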
<p>Also, destructive operations (<code>remove</code>, <code>rename</code>) default to preview mode:</p>
<div class="listing-block"><pre class="highlight"><code class="language-shell" data-lang="shell">$ gwt remove feature-old --dry-run

+ git -C bare.git branch -d feature-old
+ git -C bare.git worktree remove feature-old/

Apply these changes? [y/N]:</code></pre></div>
<p>The user sees exactly what will happen, can verify it’s correct, and only then
confirm execution.</p></section>
<section class="doc-section level-1"><h2 id="_incremental_development_with_copilot">Incremental Development with Copilot</h2><p>The <code>gwt</code> script has grown from the 597 lines of its original version (<code>git-wt</code>) to
1,111 lines at the time of writing the first draft of this post.</p>
<p>This growth happened through incremental, test-driven development, with each
feature being refined based on real usage patterns.</p>
<p>What follows is a little history of the script evolution written with the help
of <code>git log</code>.</p>
<section class="doc-section level-2"><h3 id="_initial_version">Initial version</h3><p>First I wrote a design document and asked <code>copilot</code> to create the initial
version of the <code>git-wt</code> script with the original core commands.</p>
<p>I started to use the tool with a remote repository (in some cases I made copies
of the branches to avoid losing work) and fixed bugs (trivial ones with <code>neovim</code>,
larger ones by asking <code>copilot</code> to fix the issues for me, so I had less typing to
do).</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>As I used <code>copilot</code> I noticed that when you make manual changes it is important
to tell the tool about them; otherwise it gets confused and sometimes tries to
undo them.</p></aside></section>
<section class="doc-section level-2"><h3 id="_first_command_update">First command update</h3><p>One of the first commands I had to enhance was <code>rename</code>:</p>
<div class="ulist"><ul><li>as I normally use branches with <code>/</code> on their name and my tool checks out the
<em>worktrees</em> using the branch name as the path inside the <code>gwt</code> root folder
(i.e. a <code>fix/rename</code> branch creates the <code>fix</code> directory and checks the branch
inside the <code>fix/rename</code> folder) the <code>rename</code> command had to clean up the empty
parent directories</li><li>when renaming a worktree we move the folders and fix the references using the
<code>worktree repair</code> command to make things work locally, but the rename also
affects the remote branch reference; to avoid surprises the command unsets the
upstream reference so the branch can be pushed again under the new name (of
course, the user is responsible for managing the old remote branch, as
<code>gwt</code> can’t guess what it should do with it).</li></ul></div></section>
<section class="doc-section level-2"><h3 id="_integration_with_the_shell">Integration with the shell</h3><p>As I use <code>zsh</code> with the <a href="https://github.com/romkatv/powerlevel10k" target="_blank" rel="noopener">Powerlevel10k
theme</a> I asked <code>copilot</code> to help me add visual elements to the prompt when
working with <code>gwt</code> folders, something I would never have tried without help, as
it would have required a lot of digging on my part to figure out how to do it.</p>
<p>The initial version of the code was in an independent file that I sourced from
my <code>.zshrc</code> file. It prints <code><i class="fa fa-tags"></i></code> on the right side of the prompt
when we are inside a <code>gwt</code> folder (if the folder is a worktree, the existing
git integration text appears right before it, so we keep the previous behavior
and also see that it is a <code>gwt</code>-friendly repo), and if we are on the root
folder or the <code>bare.git</code> folder we see <code><i class="fa fa-tags"></i> gwt</code> or <code><i class="fa fa-tags"></i> bare</code>
instead (I added the text because there are no git prompts on those folders).</p>
<p>I also asked <code>copilot</code> to create <code>zsh</code> autocompletion functions (I only use
<code>zsh</code>, so I didn’t add autocompletion for other shells). The good thing here is
that I wouldn’t have done that manually, as it would have required some reading
to get it right, but the output of <code>copilot</code> worked and I can update things
using it or manually if I need to.</p>
<p>One thing I was missing from the script was the possibility of changing the
working directory easily, so I wrote a <code>gwt</code> wrapper function for <code>zsh</code> that
intercepts commands that require shell cooperation (changing the working
directory) and delegates everything else to the core script.</p>
<p>Currently the function supports the following enhanced commands:</p>
<div class="ulist"><ul><li><code>cd [&lt;branch&gt;]</code>: change into a worktree or the default one if missing</li><li><code>convert &lt;dir&gt;</code>: convert a checkout, then cd into the initial worktree</li><li><code>add [--orphan] &lt;branch&gt; [&lt;base&gt;]</code>: create a worktree, then cd into it on success</li><li><code>rename &lt;old&gt; &lt;new&gt;</code>: rename a worktree, then cd into it if we were inside it</li></ul></div>
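<p>A stripped-down sketch of the wrapper idea (here <code>GWT_BIN</code> and the <code>path</code> subcommand are assumptions made for the illustration, not the real interface):</p>

```shell
# Hypothetical sketch of the shell wrapper: commands that must change
# the caller's working directory are handled in the function, everything
# else is delegated to the core script (GWT_BIN stands in for its path).
gwt() {
  case "$1" in
    cd)
      local dir
      dir="$("${GWT_BIN:-gwt}" path "${2:-}")" || return 1
      cd "$dir" || return 1
      ;;
    *)
      "${GWT_BIN:-gwt}" "$@"
      ;;
  esac
}
```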
<p>Note that the <code>cd</code> command will not work on other shells or if the user does not
load my wrapper, but the rest will still work without the working directory
changes.</p></section>
<section class="doc-section level-2"><h3 id="_renaming_the_command">Renaming the command</h3><p>As I felt that <code>git-wt</code> was a long name I renamed the tool to <code>gwt</code>. I could
have done it by hand, but using <code>copilot</code> I didn’t have to review all the files
myself, and it did it right (note that I have it configured to always ask me
before making changes, as it sometimes tries to do something I don’t want and I
like to check its changes first … as I keep the files in git repos, I stage
them when I like the state, and if the cli output is not clear I let it apply
the change and check the effects with <code>git diff</code> so I can validate or revert
what was done).</p></section>
<section class="doc-section level-2"><h3 id="_the_convert_command">The <code>convert</code> command</h3><p>After playing with one repo I added the <code>convert</code> subcommand for migrating
existing checkouts. It seemed a simple task at first, but it took multiple
iterations to get right, as I found multiple issues while testing (in fact I
made copies of the existing checkouts to be able to re-test each update, as
some of the iterations broke them).</p>
<p>The version of the function when this post was first edited had the following
comment explaining what it does:</p>
<div class="listing-block"><pre class="highlight"><code class="language-shell" data-lang="shell"># ---------------------------------------------------------------------------
# convert - convert an existing checkout into the gwt layout
# ---------------------------------------------------------------------------
#
# Must be run from the parent directory of &lt;dir&gt;.
#
# Steps:
#   1. Read branch from the checkout&#39;s HEAD
#   2. Rename &lt;dir&gt; to &lt;dir&gt;.wt.tmp (sibling, same filesystem)
#   3. Create &lt;dir&gt;/ as the new gwt root
#   4. Move &lt;dir&gt;.wt.tmp/.git to &lt;dir&gt;/bare.git; set core.bare = true
#   5. Fix fetch refspec (bare clone default maps refs directly, no remotes/)
#   6. Add a --no-checkout worktree so git wires up the metadata and
#      creates &lt;dir&gt;/&lt;branch&gt;/.git (the only file in that dir)
#   7. Move that .git file into the real working tree (&lt;dir&gt;.wt.tmp)
#   8. Remove the now-empty placeholder directory
#   9. Move the real working tree into place as &lt;dir&gt;/&lt;branch&gt;
#  10. Reset the index to HEAD so git status is clean
#      (--no-checkout leaves the index empty)
#  11. Create &lt;dir&gt;/.git -&gt; bare.git symlink so plain git commands work
#      from the root without --git-dir
#
# The .git file ends up at the same absolute path git recorded in step 5,
# so no worktree repair is needed. Working tree files are never modified.</code></pre></div>
<p>The <code>.git</code> link was added when I noticed that I could run commands that don’t
need the checked-out files from the root of the <code>gwt</code> structure, which is handy
sometimes (i.e. a <code>git fetch</code>, or a <code>git log</code>, which shows the log of the branch
marked as <code>default</code>).</p>
<p>After playing with commands that used the <code>bare.git</code> folder I updated the <code>init</code>
and <code>convert</code> commands to keep the origin refs, ensuring that the remote
tracking works correctly.</p></section>
<section class="doc-section level-2"><h3 id="_improving_the_add_command">Improving the <code>add</code> command</h3><p>While playing with the tool on more repos I noticed that I also had to enhance
the <code>add</code> command to better handle worktree creation, depending on my needs.</p>
<p>Right now the tool supports the following use cases:</p>
<div class="ulist"><ul><li>if the <code>branch</code> exists locally or on origin, it just checks it out.</li><li>if the <code>branch</code> does not exist, we create it using the given base branch or,
if no base is given, the current <em>worktree</em> (if we are in the root folder or
 <code>bare.git</code> the command fails).</li><li>as I needed it for my project, I added a <code>--orphan</code> option to be able to
create orphan branches directly.</li></ul></div></section>
<section class="doc-section level-2"><h3 id="_moving_to_a_single_file">Moving to a single file</h3><p>Eventually I decided to make the tool self-contained; I removed the design
document (I moved its content to comments at the top of the script and the
details to comments on each function definition) and added a pair of commands
that print the code to source for the <code>p10k</code> and <code>zsh</code> integration (autocompletion &amp;
functions), leaving everything in a single file.</p>
<p>Now my <code>.zshrc</code> file adds the following to source both things:</p>
<div class="listing-block"><pre class="highlight"><code class="language-shell" data-lang="shell"># After loading the p10k configuration
if type gwt &gt;/dev/null 2&gt;&amp;1; then
  source &lt;(gwt p10k)
fi
[...]
# After loading autocompletion
if type gwt &gt;/dev/null 2&gt;&amp;1; then
  source &lt;(gwt zsh)
fi</code></pre></div></section>
<section class="doc-section level-2"><h3 id="_versioning">Versioning</h3><p>As I modified the script I found it interesting to use CalVer-based versioning
(the version variable has the format <code>YYYY.mm.dd-r#</code>), so I added a subcommand
to show its value or bump it, using the current date and computing the right
revision number.</p></section>
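<p>The bump logic can be sketched like this (hypothetical helper name, not the actual code): if the stored version already starts with today’s date the revision number is incremented, otherwise it is reset to <code>r1</code>:</p>

```shell
# Hypothetical sketch of the CalVer bump: YYYY.mm.dd-r<n>, where <n> is
# incremented on same-day bumps and reset to 1 on a new day.
bump_version() {
  current="$1"
  today="$(date +%Y.%m.%d)"
  case "$current" in
    "$today"-r*)
      printf '%s-r%d\n' "$today" "$(( ${current##*-r} + 1 ))" ;;
    *)
      printf '%s-r1\n' "$today" ;;
  esac
}
```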
<section class="doc-section level-2"><h3 id="_about_the_use_of_copilot">About the use of <code>copilot</code></h3><p>Although I’ve never been a fan of AI tools I have to admit that the <code>copilot</code>
CLI has been very useful for building the tool:</p>
<div class="ulist"><ul><li><strong>Rapid prototyping</strong>: Each commit represented a small feature or fix that I
could implement, test immediately in my actual workflow, and iterate on based
on the result</li><li><strong>Edge case handling</strong>: Rather than trying to anticipate every scenario
upfront, I could ask Copilot how to handle edge cases as they appeared in real
usage</li><li><strong>Script refinement</strong>: Questions like &#34;how do I clean up empty directories
after a rename&#34; or &#34;how do I detect if I’m inside a specific worktree&#34; were
quickly answered with working code</li><li><strong>Shell integration</strong>: The Zsh wrapper and completion system grew from simple
prototypes to sophisticated features, with each iteration informed by how I
actually used the tool</li></ul></div>
<p>For example, the <code>convert</code> command started as a simple rename operation, but
evolved to also create a <code>.git</code> symlink and intelligently handle various
migration scenarios—all because I used it repeatedly and refined the
implementation each time.</p></section></section>
<section class="doc-section level-1"><h2 id="_self_contained_and_opinionated">Self-Contained and Opinionated</h2><p><code>gwt</code> is deliberately opinionated:</p>
<div class="ulist"><ul><li><strong>Zsh &amp; Powerlevel10k Integration</strong>: The tool includes built-in Zsh shell
integration, accessed via <code>source &lt;(gwt zsh)</code> and supports adding a prompt
segment when using <code>p10k</code>, as described earlier.</li><li><strong>Directory Structure</strong>: The <code>bare.git</code> directory name is non-negotiable. This
is how <code>gwt</code> discovers the repository root from any subdirectory, and how the
tool knows whether a directory is a gwt repository. The simplicity of this
marker means the discovery mechanism is foolproof and requires no
configuration.</li><li><strong>No Configuration Files</strong>: <code>gwt</code> deliberately has no configuration. There are
no <code>.gwtrc</code> files or config directories. This makes it portable; the tool
works the same way everywhere, and repositories can be shared across systems
without synchronizing configuration.</li></ul></div></section>
<section class="doc-section level-1"><h2 id="_from_script_to_system">From Script to System</h2><p>What started as a small helper script for managing worktrees has become a
complete system:</p>
<div class="olist arabic"><ol class="arabic"><li><strong>Core script</strong> (<code>gwt</code>): 1,111 lines of pure shell, no external dependencies</li><li><strong>Shell integration</strong>: Zsh functions and completions</li><li><strong>Prompt integration</strong>: Powerlevel10k segment</li><li><strong>Documentation</strong>: Built-in help and design philosophy documentation</li></ol></div>
<p>The script is self-contained: everything needed for the tool to work is in a
single file.</p>
<p>This makes it trivial to update (just replace the script) and to audit
(no hidden dependencies).</p></section>
<section class="doc-section level-1"><h2 id="_development_with_ai_support">Development with AI support</h2><p>Developing <code>gwt</code> with <code>copilot</code> taught me some things:</p>
<div class="ulist"><ul><li><strong>Incremental refinement works well for small tools</strong>: Each iteration informed
the next, resulting in a tool that handles real use cases elegantly</li><li><strong>Transparency is a feature</strong>: Making operations visible builds confidence and
is easier to debug</li><li><strong>Opinionated tools can be powerful</strong>: By constraining the problem space (one
bare repo, one worktree per branch), the solution becomes simpler and more
robust</li><li><strong>Shell integration matters</strong>: The same core commands are easier to use when
they can automatically change directories and provide completions</li><li><strong>Real-world testing is essential</strong>: I wouldn’t have discovered the need for
automatic directory cleanup or context-aware <code>cd</code> behavior without actually
using the tool daily</li></ul></div></section>
<section class="doc-section level-1"><h2 id="_what_was_next">What was next?</h2><p>The tool is stable and handles my daily workflow well, so my guess was that I
would keep using it and fix issues if or when I found them, without adding
features unless I found a use case that justified them (i.e. I never added
support for some of the <code>worktree</code> subcommands, as it is easier to use the
<code>git</code> versions directly if I ever need them).</p></section>
<section class="doc-section level-1"><h2 id="_what_really_happened">What really happened</h2><p>While editing this post I discovered that I needed to add another command and
to fix a bug (see below).</p>
<p>With those changes and the inclusion of a license and copyright notice (just in
case I distribute it at some point) the script is now 1,217 lines long instead
of the 1,111 it had when I started writing this entry.</p>
<section class="doc-section level-2"><h3 id="_submodule_support">Submodule Support</h3><p>When I converted this blog repository to the <code>gwt</code> format and tried to preview
the post using <code>docker compose</code>, it failed because the worktree I was on didn’t
have the Git submodule initialized.</p>
<p>My blog theme is included in the repository as a submodule, and when I used
<code>gwt</code> to check out different branches in worktrees, the submodule was not
initialized in the new worktrees.</p>
<p>This led me to add a new internal function and a <code>gwt submodule</code> command to
handle submodule initialization; the internal function is called from <code>convert</code>
and <code>add</code> (when converting a repo or adding a worktree) and the public command
is useful to update the submodules on existing branches.</p></section>
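<p>The internal helper boils down to something like this (a sketch with a made-up name; the real function may differ):</p>

```shell
# Initialize the submodules of a worktree, but only when the checkout
# actually defines any (hypothetical helper, not the real gwt code).
init_submodules() {
  wt="$1"
  if [ -f "$wt/.gitmodules" ]; then
    git -C "$wt" submodule update --init --recursive
  fi
}
```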
<section class="doc-section level-2"><h3 id="_path_handling_with_branch_names_containing_slashes">Path Handling with Branch Names Containing Slashes</h3><p>The second discovery was a bug in how the tool handled branch names containing
slashes (e.g., <code>feature/new-api</code>, <code>docs/user-guide</code>), the worktree directories
are created with the branch name as the path, so a branch like <code>feature/new-api</code>
would create two nested folders (<code>feature</code> and <code>new-api</code> inside it).</p>
<p>However, there was a mismatch in how the <code>zsh</code> wrapper function resolved
worktree paths (initially it used shell parameter expansion, i.e.
<code>rel=&#34;${cwd#&#34;$REPO_ROOT&#34;/}&#34;</code>), versus how the core script calculated them,
causing the <code>cd</code> command to fail or navigate to the wrong location when branch
names contained slashes.</p>
<p>The fix involved ensuring consistent path resolution throughout the script and
wrapper (now it uses a function that processes the <code>git worktree list</code> output),
so that <code>gwt cd feature/new-api</code> correctly navigates to the worktree directory
regardless of path depth.</p></section></section>
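<p>The resolution idea can be sketched like this (a hypothetical helper that parses the porcelain output read on stdin, not the real implementation):</p>

```shell
# Hypothetical sketch: resolve the worktree path for a branch by parsing
# `git worktree list --porcelain` output read from stdin.
worktree_path_for() {
  branch="$1"; path=""
  while IFS= read -r line; do
    case "$line" in
      "worktree "*) path="${line#worktree }" ;;
      "branch refs/heads/$branch") printf '%s\n' "$path"; return 0 ;;
    esac
  done
  return 1
}

# Usage: git worktree list --porcelain | worktree_path_for feature/new-api
```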
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p><code>gwt</code> is a tool that solves a real problem: managing multiple Git branches
simultaneously without context-switching overhead.</p>
<p>I’m sure I’m going to keep using it for my projects, as it simplifies some
workflows; I’ll still use <code>switch</code> and <code>stash</code> in some cases, but I like
having multiple worktrees in parallel.</p>
<p>In fact I converted this blog repository checkout to the <code>gwt</code> format to work on
a separate branch, as it felt like the right approach even if I’m the only one
using the repo now, and it helped me improve the tool, as explained before.</p>
<p>Also, it was a good example of how to use AI tools like <code>copilot</code> to develop a
simple tool and keep it evolving while using it.</p>
<p>In any case, although I find <code>copilot</code> useful and it has saved me time, I don’t
trust it to work without supervision; it worked well overall, but it got stuck
at times and didn’t do things the way I wanted on multiple occasions.</p>
<p>I also have an additional problem now … I’ve been reading about it, but I
still don’t really know which models to use or how the premium requests are
computed (I’ve only been playing with it since last month, and I ran out of
requests on the last day of the month on purpose, just to see what happened …
it stops working … ;).</p>
<p>On my work machine I’ve been using a specific user account with a <em>GitHub
Copilot Business</em> subscription, where I only used the <code>Anthropic Claude Sonnet 4.6</code>
model; with my personal account I configured the <code>Anthropic Claude Haiku 4.5</code>
model, but I’ve only used it to create the initial draft of this post (I ended
up rewriting most of it manually anyway) and to review the final version (I’m
not a native speaker and it was useful for finding typos and improving the style
in some parts).</p>
<p>I guess I’ll try other models with <code>copilot</code> in the future and check other
command line tools like <a href="https://aider.chat/" target="_blank" rel="noopener">aider</a> or
<a href="https://claude.com/product/claude-code" target="_blank" rel="noopener">claude-code</a>, but probably only using
free accounts unless I get a paid account at work, as I have with <em>GitHub
Copilot</em>.</p>
<p>To be fair, what I would love to be able to do is use local models (<code>aider</code> can
do it), but the machines I have are not powerful enough. I tried running a simple
test and it felt really slow, but when I have the time or the need I’ll try
again, just in case.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Running dind with sysbox</title>
      <link>https://blogops.mixinet.net/posts/forgejo/running_dind_with_sysbox/</link>
      <pubDate>Tue, 13 May 2025 19:45:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/forgejo/running_dind_with_sysbox/</guid>
      <description>&lt;p&gt;When I configured &lt;code&gt;forgejo-actions&lt;/code&gt; I used a &lt;code&gt;docker-compose.yaml&lt;/code&gt; file to execute the &lt;code&gt;runner&lt;/code&gt; and a &lt;code&gt;dind&lt;/code&gt; container
configured to run in privileged mode to be able to build images with it; as mentioned in my
&lt;a href=&#34;https://blogops.mixinet.net/posts/forgejo/configuring_forgejo_actions/#_starting_it_with_docker_compose&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;post about my
setup&lt;/a&gt;, the use of privileged mode is not a big issue for my use case, but it reduces the overall security of the
installation.&lt;/p&gt;
&lt;p&gt;On a work chat the other day someone mentioned that the &lt;strong&gt;GitLab&lt;/strong&gt; documentation about
&lt;a href=&#34;https://docs.gitlab.com/ci/docker/using_kaniko/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;using kaniko&lt;/a&gt; says it is no longer maintained (see the &lt;code&gt;kaniko&lt;/code&gt; issue
&lt;a href=&#34;https://github.com/GoogleContainerTools/kaniko/issues/3348&#34;&gt;#3348&lt;/a&gt;) so we should look into alternatives for kubernetes
clusters.&lt;/p&gt;
&lt;p&gt;I never liked &lt;code&gt;kaniko&lt;/code&gt; much, but it works without privileged mode and does not need a daemon, which are good reasons
to use it; still, if it is deprecated it makes sense to look into alternatives, and today I looked into some of them to use
with my &lt;code&gt;forgejo-actions&lt;/code&gt; setup.&lt;/p&gt;
&lt;p&gt;I was going to try &lt;a href=&#34;https://buildah.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;buildah&lt;/a&gt; and &lt;a href=&#34;https://podman.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;podman&lt;/a&gt;, but it seems they require some adjustments
on the systems running them:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;When I tried to use &lt;code&gt;buildah&lt;/code&gt; inside a &lt;code&gt;docker&lt;/code&gt; container on &lt;strong&gt;Ubuntu&lt;/strong&gt; I ran into the problems described in the &lt;code&gt;buildah&lt;/code&gt;
issue &lt;a href=&#34;https://github.com/containers/buildah/issues/1901&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;#1901&lt;/a&gt;, so I moved on.&lt;/li&gt;&lt;li&gt;Reading the &lt;code&gt;podman&lt;/code&gt; documentation I saw that I need to export the &lt;code&gt;fuse&lt;/code&gt; device to run it inside a container and, as
I found another option, I skipped it too.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>When I configured <code>forgejo-actions</code> I used a <code>docker-compose.yaml</code> file to execute the <code>runner</code> and a <code>dind</code> container
configured to run in privileged mode to be able to build images with it; as mentioned in my
<a href="https://blogops.mixinet.net/posts/forgejo/configuring_forgejo_actions/#_starting_it_with_docker_compose" target="_blank" rel="noopener">post about my
setup</a>, the use of privileged mode is not a big issue for my use case, but it reduces the overall security of the
installation.</p>
<p>On a work chat the other day someone mentioned that the <strong>GitLab</strong> documentation about
<a href="https://docs.gitlab.com/ci/docker/using_kaniko/" target="_blank" rel="noopener">using kaniko</a> says it is no longer maintained (see the <code>kaniko</code> issue
<a href="https://github.com/GoogleContainerTools/kaniko/issues/3348">#3348</a>) so we should look into alternatives for kubernetes
clusters.</p>
<p>I never liked <code>kaniko</code> much, but it works without privileged mode and does not need a daemon, which are good reasons
to use it; still, if it is deprecated it makes sense to look into alternatives, and today I looked into some of them to use
with my <code>forgejo-actions</code> setup.</p>
<p>I was going to try <a href="https://buildah.io/" target="_blank" rel="noopener">buildah</a> and <a href="https://podman.io/" target="_blank" rel="noopener">podman</a>, but it seems they require some adjustments
on the systems running them:</p>
<div class="ulist"><ul><li>When I tried to use <code>buildah</code> inside a <code>docker</code> container on <strong>Ubuntu</strong> I ran into the problems described in the <code>buildah</code>
issue <a href="https://github.com/containers/buildah/issues/1901" target="_blank" rel="noopener">#1901</a>, so I moved on.</li><li>Reading the <code>podman</code> documentation I saw that I need to export the <code>fuse</code> device to run it inside a container and, as
I found another option, I skipped it too.</li></ul></div>
<p>As my <code>runner</code> was already configured to use <code>dind</code> I decided to look into <a href="https://github.com/nestybox/sysbox" target="_blank" rel="noopener">sysbox</a>
as a way of removing the <code>privileged</code> flag to make things more secure while keeping the same functionality.</p>
<section class="doc-section level-1"><h2 id="_installing_the_sysbox_package">Installing the <code>sysbox</code> package</h2><p>As I use <strong>Debian</strong> and <strong>Ubuntu</strong> systems I used the <code>.deb</code> packages distributed from the <code>sysbox</code> release page to install
it (in my case I used the one from the <a href="https://github.com/nestybox/sysbox/releases/tag/v0.6.7" target="_blank" rel="noopener">0.6.7</a> version).</p>
<p>On the machine running <code>forgejo</code> (a Debian 12 server) I downloaded the package, stopped the running containers
(stopping them is required to install the package, and the only ones running were the ones started by the
<code>docker-compose.yaml</code> file) and installed the <code>sysbox-ce_0.6.7.linux_amd64.deb</code> package using <code>dpkg</code>.</p></section>
<section class="doc-section level-1"><h2 id="_updating_the_docker_compose_yaml_file">Updating the <code>docker-compose.yaml</code> file</h2><p>To run the <code>dind</code> container without setting the <code>privileged</code> mode we set <code>sysbox-runc</code> as the <code>runtime</code> on the <code>dind</code>
container definition and set the privileged flag to <code>false</code> (it is the same as removing the key, as it defaults to
<code>false</code>):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="diff"><span class="gd">--- a/docker-compose.yml
</span><span class="gi">+++ b/docker-compose.yml
</span><span class="p">@@ -2,7 +2,9 @@</span> services:
   dind:
     image: docker:dind
     container_name: &#39;dind&#39;
<span class="gd">-    privileged: &#39;true&#39;
</span><span class="gi">+    # use sysbox-runc instead of using privileged mode
+    runtime: &#39;sysbox-runc&#39;
+    privileged: &#39;false&#39;
</span>     command: [&#39;dockerd&#39;, &#39;-H&#39;, &#39;unix:///dind/docker.sock&#39;, &#39;-G&#39;, &#39;$RUNNER_GID&#39;]
     restart: &#39;unless-stopped&#39;
     volumes:</code></pre></div></section>
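<p>For reference, after applying the diff the <code>dind</code> service definition ends up like this (the <code>volumes</code> contents are
omitted here, as they are unchanged and not shown in the diff):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml">services:
  dind:
    image: docker:dind
    container_name: 'dind'
    # use sysbox-runc instead of using privileged mode
    runtime: 'sysbox-runc'
    privileged: 'false'
    command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      # ... unchanged ...</code></pre></div>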
<section class="doc-section level-1"><h2 id="_testing_the_changes">Testing the changes</h2><p>After applying the changes to the <code>docker-compose.yaml</code> file we start the containers and, to test things, re-run
previously executed jobs to check that everything works as before.</p>
<p>In my case I re-executed the <code>build-image-from-tag</code> workflow
<a href="https://forgejo.mixinet.net/oci/images/actions/runs/18" target="_blank" rel="noopener">#18</a> from the <code>oci</code> project and everything worked as expected.</p></section>
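<p>Besides re-running jobs, we can also check from the host how the <code>dind</code> container was started; the following is a
hedged sketch (the <code>dind</code> container name comes from my <code>docker-compose.yaml</code> file) wrapped in a small shell function:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Print the runtime and privileged flag of the dind container; after the
# change it should report the sysbox runtime and no privileged mode.
dind_runtime_check() {
  docker inspect dind \
    --format 'runtime={{.HostConfig.Runtime}} privileged={{.HostConfig.Privileged}}'
}</code></pre></div>
<p>After the change, calling <code>dind_runtime_check</code> should print <code>runtime=sysbox-runc privileged=false</code>.</p>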
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p>For my current use case (<code>docker</code> + <code>dind</code>) it seems that <code>sysbox</code> is a good solution, but I’m not sure if I’ll be
<a href="https://github.com/nestybox/sysbox/blob/master/docs/user-guide/install-k8s.md" target="_blank" rel="noopener">installing it on kubernetes</a> anytime soon
unless I find a valid reason to do it (the last time we talked about it my co-workers said they are evaluating
<code>buildah</code> and <code>podman</code> for kubernetes, and we will probably use them to replace <code>kaniko</code> in our <code>gitlab-ci</code> pipelines;
for those tools the use of <code>sysbox</code> seems like overkill).</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Playing with vCluster</title>
      <link>https://blogops.mixinet.net/posts/k8s/playing_with_vcluster/</link>
      <pubDate>Mon, 12 May 2025 13:00:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/k8s/playing_with_vcluster/</guid>
      <description>&lt;p&gt;After my previous posts related to &lt;a href=&#34;https://argo-cd.readthedocs.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Argo CD&lt;/a&gt; (one about
&lt;a href=&#34;https://blogops.mixinet.net/posts/gitops/argocd-autopilot/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;argocd-autopilot&lt;/a&gt; and another with some
&lt;a href=&#34;https://blogops.mixinet.net/posts/gitops/argocd_usage_examples/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;usage examples&lt;/a&gt;) I started to look into
&lt;a href=&#34;https://kluctl.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Kluctl&lt;/a&gt; (I also plan to review &lt;a href=&#34;https://fluxcd.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Flux&lt;/a&gt;, but I’m more interested in the &lt;code&gt;kluctl&lt;/code&gt;
approach right now).&lt;/p&gt;
&lt;p&gt;While reading an entry on the project blog about &lt;a href=&#34;https://kluctl.io/blog/2024/03/13/cluster-api-kluctl/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Cluster API&lt;/a&gt;
somehow I ended up on the &lt;a href=&#34;https://www.vcluster.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;vCluster&lt;/a&gt; site and decided to give it a try, as it can be a valid
way of providing developers with on-demand clusters for debugging or for running CI/CD tests before deploying things on
shared clusters, or even of having multiple debugging virtual clusters on a local machine with only one of them running
at any given time.&lt;/p&gt;
&lt;p&gt;In this post I will deploy a &lt;code&gt;vcluster&lt;/code&gt; using the &lt;code&gt;k3d_argocd&lt;/code&gt; kubernetes cluster (the one we created in the posts about
&lt;code&gt;argocd&lt;/code&gt;) as the host and show how to:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;use its ingress (in our case &lt;code&gt;traefik&lt;/code&gt;) to access the API of the virtual one (this removes the need to use the
&lt;code&gt;vcluster connect&lt;/code&gt; command to access it with &lt;code&gt;kubectl&lt;/code&gt;),&lt;/li&gt;&lt;li&gt;publish the ingress objects deployed on the virtual cluster on the host ingress, and&lt;/li&gt;&lt;li&gt;use the &lt;code&gt;sealed-secrets&lt;/code&gt; of the host cluster to manage the virtual cluster secrets.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>After my previous posts related to <a href="https://argo-cd.readthedocs.io/" target="_blank" rel="noopener">Argo CD</a> (one about
<a href="https://blogops.mixinet.net/posts/gitops/argocd-autopilot/" target="_blank" rel="noopener">argocd-autopilot</a> and another with some
<a href="https://blogops.mixinet.net/posts/gitops/argocd_usage_examples/" target="_blank" rel="noopener">usage examples</a>) I started to look into
<a href="https://kluctl.io/" target="_blank" rel="noopener">Kluctl</a> (I also plan to review <a href="https://fluxcd.io/" target="_blank" rel="noopener">Flux</a>, but I’m more interested in the <code>kluctl</code>
approach right now).</p>
<p>While reading an entry on the project blog about <a href="https://kluctl.io/blog/2024/03/13/cluster-api-kluctl/" target="_blank" rel="noopener">Cluster API</a>
somehow I ended up on the <a href="https://www.vcluster.com/" target="_blank" rel="noopener">vCluster</a> site and decided to give it a try, as it can be a valid
way of providing developers with on-demand clusters for debugging or for running CI/CD tests before deploying things on
shared clusters, or even of having multiple debugging virtual clusters on a local machine with only one of them running
at any given time.</p>
<p>In this post I will deploy a <code>vcluster</code> using the <code>k3d_argocd</code> kubernetes cluster (the one we created in the posts about
<code>argocd</code>) as the host and show how to:</p>
<div class="ulist"><ul><li>use its ingress (in our case <code>traefik</code>) to access the API of the virtual one (this removes the need to use the
<code>vcluster connect</code> command to access it with <code>kubectl</code>),</li><li>publish the ingress objects deployed on the virtual cluster on the host ingress, and</li><li>use the <code>sealed-secrets</code> of the host cluster to manage the virtual cluster secrets.</li></ul></div>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>In this post we will use the <code>k3d</code> cluster we used to work with <code>argocd</code>, but this post is about playing with <code>vcluster</code>,
so I’ll be deploying things using <code>kubectl</code> directly.</p>
<p>In any case, if desired, things could be done using the GitOps approach once the <code>vcluster</code> is available.</p></aside>
<section class="doc-section level-1"><h2 id="_creating_the_virtual_cluster">Creating the virtual cluster</h2><section class="doc-section level-2"><h3 id="_installing_the_vcluster_application">Installing the <code>vcluster</code> application</h3><p>To create the virtual clusters we need the <code>vcluster</code> command, which we can install with <code>arkade</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ arkade get vcluster</span></code></pre></div></section>
<section class="doc-section level-2"><h3 id="_the_vcluster_yaml_file">The <code>vcluster.yaml</code> file</h3><p>To create the cluster we are going to use the following <code>vcluster.yaml</code> file (you can find the documentation about all
its options <a href="https://www.vcluster.com/docs/vcluster/configure/vcluster-yaml/" target="_blank" rel="noopener">here</a>):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">controlPlane</span><span class="pi">:</span>
  <span class="na">proxy</span><span class="pi">:</span>
    <span class="c1"># Extra hostnames to sign the vCluster proxy certificate for</span>
    <span class="na">extraSANs</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">my-vcluster-api.lo.mixinet.net</span>
<span class="na">exportKubeConfig</span><span class="pi">:</span>
  <span class="na">context</span><span class="pi">:</span> <span class="s">my-vcluster_k3d-argocd</span>
  <span class="na">server</span><span class="pi">:</span> <span class="s">https://my-vcluster-api.lo.mixinet.net:8443</span>
  <span class="na">secret</span><span class="pi">:</span>
    <span class="na">name</span><span class="pi">:</span> <span class="s">my-vcluster-kubeconfig</span>
<span class="na">sync</span><span class="pi">:</span>
  <span class="na">toHost</span><span class="pi">:</span>
    <span class="na">ingresses</span><span class="pi">:</span>
      <span class="na">enabled</span><span class="pi">:</span> <span class="kc">true</span>
    <span class="na">serviceAccounts</span><span class="pi">:</span>
      <span class="na">enabled</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">fromHost</span><span class="pi">:</span>
    <span class="na">ingressClasses</span><span class="pi">:</span>
      <span class="na">enabled</span><span class="pi">:</span> <span class="kc">true</span>
    <span class="na">nodes</span><span class="pi">:</span>
      <span class="na">enabled</span><span class="pi">:</span> <span class="kc">true</span>
      <span class="na">clearImageStatus</span><span class="pi">:</span> <span class="kc">true</span>
    <span class="na">secrets</span><span class="pi">:</span>
      <span class="na">enabled</span><span class="pi">:</span> <span class="kc">true</span>
      <span class="na">mappings</span><span class="pi">:</span>
        <span class="na">byName</span><span class="pi">:</span>
          <span class="c1"># sync all Secrets from the &#39;my-vcluster-default&#39; namespace to the</span>
          <span class="c1"># virtual &#34;default&#34; namespace.</span>
          <span class="s2">&#34;</span><span class="s">my-vcluster-default/*&#34;</span><span class="err">:</span> <span class="s2">&#34;</span><span class="s">default/*&#34;</span>
          <span class="c1"># We could add other namespace mappings if needed, i.e.:</span>
          <span class="c1"># &#34;my-vcluster-kube-system/*&#34;: &#34;kube-system/*&#34;</span></code></pre></div>
<p>On the <code>controlPlane</code> section we’ve added the <code>proxy.extraSANs</code> entry to add an extra hostname to the cluster
certificates, making sure the certificate is valid when we access the API through an ingress.</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>It is possible to create the <code>ingress</code> under the <code>controlPlane</code> key as described in the
<a href="https://www.vcluster.com/docs/vcluster/configure/vcluster-yaml/control-plane/deployment/ingress" target="_blank" rel="noopener">documentation</a>, but the
generated object uses annotations meant for the <code>nginx-ingress</code> controller to serve the TLS certificates of the original
service, and those don’t work for us (we are using <code>traefik</code> as our ingress controller).</p>
<p>To solve the issue we will create an
<a href="https://doc.traefik.io/traefik/reference/routing-configuration/kubernetes/crd/tcp/ingressroutetcp/" target="_blank" rel="noopener">IngressRouteTCP</a>
object manually after deploying the virtual cluster.</p></aside>
<p>The <code>exportKubeConfig</code> section creates a <code>kubeconfig</code> secret in the virtual cluster’s <code>namespace</code> on the host using the
provided hostname; the secret can be used by GitOps tools, or we can dump it to a file to connect from our machine.</p>
<p>On the <code>sync</code> section we enable the synchronization of <code>Ingress</code> objects and <code>ServiceAccounts</code> from the virtual to the
host cluster:</p>
<div class="ulist"><ul><li>We copy the ingress definitions so they use the ingress server that runs on the host, making them reachable from the
outside world.</li><li>The service account synchronization is not really needed, but we enable it because it would be useful if we ever test
this configuration on EKS using IAM roles for the service accounts.</li></ul></div>
<p>In the opposite direction (from the host to the virtual cluster) we synchronize:</p>
<div class="ulist"><ul><li>The <code>IngressClass</code> objects, to be able to use the host ingress server(s).</li><li>The <code>Nodes</code> (we are not using the info right now, but it could be interesting if we want to have the real information
of the nodes running pods of the virtual cluster).</li><li>The <code>Secrets</code> from the <code>my-vcluster-default</code> host <code>namespace</code> to the <code>default</code> of the virtual cluster; that
synchronization allows us to deploy <code>SealedSecrets</code> on the host that generate secrets that are copied
automatically to the virtual one. Initially we only copy secrets for one <code>namespace</code> but if the virtual cluster needs
others we can add <code>namespaces</code> on the host and their mappings to the virtual one on the <code>vcluster.yaml</code> file.</li></ul></div></section>
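<p>To illustrate the secret mapping above, here is a hypothetical <code>Secret</code> created on the host cluster (in practice it
would be generated by the <code>sealed-secrets</code> controller from a <code>SealedSecret</code> object; the names and values are made up
for the example):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"># Created on the HOST cluster, in the namespace used by the byName mapping
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials
  namespace: my-vcluster-default
type: Opaque
stringData:
  username: demo-user
  password: demo-password</code></pre></div>
<p>With the <code>byName</code> mapping of the <code>vcluster.yaml</code> file, this object becomes visible inside the virtual cluster as the
<code>example-credentials</code> secret of the <code>default</code> namespace.</p>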
<section class="doc-section level-2"><h3 id="_creating_the_virtual_cluster_2">Creating the virtual cluster</h3><p>To create the virtual cluster we run the following command:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">vcluster create my-vcluster <span class="nt">--namespace</span> my-vcluster <span class="nt">--upgrade</span> <span class="nt">--connect</span><span class="o">=</span><span class="nb">false</span> <span class="se">\</span>
  <span class="nt">--values</span> vcluster.yaml</code></pre></div>
<p>It creates the virtual cluster in the <code>my-vcluster</code> namespace using the <code>vcluster.yaml</code> file shown before, without
connecting to the cluster from our local machine (if we don’t pass that option the command adds an entry to our
kubeconfig and launches a proxy to connect to the virtual cluster, which we don’t plan to use).</p></section>
<section class="doc-section level-2"><h3 id="_adding_an_ingress_tcp_route_to_connect_to_the_vcluster_api">Adding an ingress TCP route to connect to the vcluster api</h3><p>As explained before, we need to create an <code>IngressRouteTCP</code> object to be able to connect to the vcluster API; we use the
following definition:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">traefik.containo.us/v1alpha1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">IngressRouteTCP</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">my-vcluster-api</span>
  <span class="na">namespace</span><span class="pi">:</span> <span class="s">my-vcluster</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">entryPoints</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">websecure</span>
  <span class="na">routes</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">match</span><span class="pi">:</span> <span class="s">HostSNI(`my-vcluster-api.lo.mixinet.net`)</span>
      <span class="na">services</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">my-vcluster</span>
          <span class="na">port</span><span class="pi">:</span> <span class="m">443</span>
  <span class="na">tls</span><span class="pi">:</span>
    <span class="na">passthrough</span><span class="pi">:</span> <span class="kc">true</span></code></pre></div>
<p>Once we apply those changes the cluster API will be available at the <a class="bare" href="https://my-vcluster-api.lo.mixinet.net:8443">https://my-vcluster-api.lo.mixinet.net:8443</a> URL
using its own self-signed certificate (we have enabled TLS passthrough), which includes the hostname we use (we adjusted
it on the <code>vcluster.yaml</code> file, as explained before).</p></section>
<section class="doc-section level-2"><h3 id="_getting_the_kubeconfig_for_the_vcluster">Getting the kubeconfig for the vcluster</h3><p>Once the vcluster is running we will have its kubeconfig available in the <code>my-vcluster-kubeconfig</code> secret of its
namespace on the host cluster.</p>
<p>To dump it to the <code>~/.kube/my-vcluster-config</code> file we can do the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl get -n my-vcluster secret/my-vcluster-kubeconfig \
</span><span class="gp">    --template=&#34;{{.data.config}}&#34; | base64 -d &gt;</span><span class="w"> </span>~/.kube/my-vcluster-config</code></pre></div>
<p>Once available we can define the <code>vkubectl</code> alias to adjust the <code>KUBECONFIG</code> variable to access it:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nb">alias </span><span class="nv">vkubectl</span><span class="o">=</span><span class="s2">&#34;KUBECONFIG=~/.kube/my-vcluster-config kubectl&#34;</span></code></pre></div>
<p>Or we can merge the configuration with the one referenced by the <code>KUBECONFIG</code> variable and use <code>kubectx</code> or a similar
tool to change the context (for our vcluster the context will be <code>my-vcluster_k3d-argocd</code>). If the <code>KUBECONFIG</code> variable
is defined and contains only the path to a single file, the merge can be done by running the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">KUBECONFIG</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$KUBECONFIG</span><span class="s2">:~/.kube/my-vcluster-config&#34;</span> kubectl config view <span class="se">\</span>
  <span class="nt">--flatten</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$KUBECONFIG</span><span class="s2">.new&#34;</span>
<span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$KUBECONFIG</span><span class="s2">.new&#34;</span> <span class="s2">&#34;</span><span class="nv">$KUBECONFIG</span><span class="s2">&#34;</span></code></pre></div>
<p>In the rest of this post we will use the <code>vkubectl</code> alias when connecting to the virtual cluster; for example, to check
that it works we can run the <code>cluster-info</code> subcommand:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use &#39;kubectl cluster-info dump&#39;.</span></code></pre></div></section></section>
<section class="doc-section level-1"><h2 id="_installing_the_dummyhttpd_application">Installing the <code>dummyhttp</code> application</h2><p>To test the virtual cluster we are going to install the <code>dummyhttp</code> application using the following
<code>kustomization.yaml</code> file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">resources</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0</span>
<span class="c1"># Add the config map</span>
<span class="na">configMapGenerator</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp-configmap</span>
    <span class="na">literals</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">CM_VAR=&#34;Vcluster Test Value&#34;</span>
    <span class="na">behavior</span><span class="pi">:</span> <span class="s">create</span>
    <span class="na">options</span><span class="pi">:</span>
      <span class="na">disableNameSuffixHash</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">patches</span><span class="pi">:</span>
  <span class="c1"># Change the ingress host name</span>
  <span class="pi">-</span> <span class="na">target</span><span class="pi">:</span>
      <span class="na">kind</span><span class="pi">:</span> <span class="s">Ingress</span>
      <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp</span>
    <span class="na">patch</span><span class="pi">:</span> <span class="pi">|-</span>
      <span class="s">- op: replace</span>
        <span class="s">path: /spec/rules/0/host</span>
        <span class="s">value: vcluster-dummyhttp.lo.mixinet.net</span>
  <span class="c1"># Add reloader annotations -- it will only work if we install reloader on the</span>
  <span class="c1"># virtual cluster, as the one on the host cluster doesn&#39;t see the vcluster</span>
  <span class="c1"># deployment objects</span>
  <span class="pi">-</span> <span class="na">target</span><span class="pi">:</span>
      <span class="na">kind</span><span class="pi">:</span> <span class="s">Deployment</span>
      <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp</span>
    <span class="na">patch</span><span class="pi">:</span> <span class="pi">|-</span>
      <span class="s">- op: add</span>
        <span class="s">path: /metadata/annotations</span>
        <span class="s">value:</span>
          <span class="s">reloader.stakater.com/auto: &#34;true&#34;</span>
          <span class="s">reloader.stakater.com/rollout-strategy: &#34;restart&#34;</span></code></pre></div>
<p>It is quite similar to the one we used in the Argo CD examples but uses a different DNS entry; to deploy it we run
<code>kustomize</code> and <code>vkubectl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">❯ kustomize build <span class="nb">.</span> | vkubectl apply <span class="nt">-f</span> -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created</code></pre></div>
<p>We can check that everything worked using <code>curl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">❯ curl <span class="nt">-s</span> https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq <span class="nt">-cM</span> <span class="nb">.</span>
<span class="o">{</span><span class="s2">&#34;c&#34;</span>: <span class="s2">&#34;Vcluster Test Value&#34;</span>,<span class="s2">&#34;s&#34;</span>: <span class="s2">&#34;&#34;</span><span class="o">}</span></code></pre></div>
<p>The objects available on the <code>vcluster</code> now are:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ vkubectl get all,configmap,ingress
NAME                             READY   STATUS    RESTARTS   AGE
pod/dummyhttp-55569589bc-9zl7t   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
</span><span class="gp">service/dummyhttp    ClusterIP   10.43.51.39    &lt;none&gt;</span><span class="w">        </span>80/TCP    24s
<span class="gp">service/kubernetes   ClusterIP   10.43.153.12   &lt;none&gt;</span><span class="w">        </span>443/TCP   14m
<span class="go">
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dummyhttp   1/1     1            1           24s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dummyhttp-55569589bc   1         1         1       24s

NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      24s
configmap/kube-root-ca.crt      1      14m

NAME                                CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    24s</span></code></pre></div>
<p>While we have the following ones in the <code>my-vcluster</code> namespace of the host cluster:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl get all,configmap,ingress -n my-vcluster
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster   1/1     Running   0          18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster    1/1     Running   0          45s
pod/my-vcluster-0                                         1/1     Running   0          19m

NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
</span><span class="gp">service/dummyhttp-x-default-x-my-vcluster      ClusterIP   10.43.51.39     &lt;none&gt;</span><span class="w">        </span>80/TCP                   45s
<span class="gp">service/kube-dns-x-kube-system-x-my-vcluster   ClusterIP   10.43.91.198    &lt;none&gt;</span><span class="w">        </span>53/UDP,53/TCP,9153/TCP   18m
<span class="gp">service/my-vcluster                            ClusterIP   10.43.153.12    &lt;none&gt;</span><span class="w">        </span>443/TCP,10250/TCP        19m
<span class="gp">service/my-vcluster-headless                   ClusterIP   None            &lt;none&gt;</span><span class="w">        </span>443/TCP                  19m
<span class="gp">service/my-vcluster-node-k3d-argocd-agent-1    ClusterIP   10.43.189.188   &lt;none&gt;</span><span class="w">        </span>10250/TCP                18m
<span class="go">
NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     19m

NAME                                                     DATA   AGE
configmap/coredns-x-kube-system-x-my-vcluster            2      18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster    1      45s
configmap/kube-root-ca.crt                               1      19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster       1      11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster   1      18m
configmap/vc-coredns-my-vcluster                         1      19m

NAME                                                        CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    45s</span></code></pre></div>
<p>As shown, we have copies of the <code>Service</code>, <code>Pod</code>, <code>ConfigMap</code> and <code>Ingress</code> objects, but there is no copy of the
<code>Deployment</code> or the <code>ReplicaSet</code>: those are managed entirely inside the virtual cluster, and only the resulting pods are synced to the host.</p>
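<p>As the listings above show, objects synced to the host are renamed following a <code>&lt;name&gt;-x-&lt;namespace&gt;-x-&lt;vcluster&gt;</code> pattern. A throwaway shell helper sketching that naming convention (this only mirrors the pattern visible in the output, it is not the real syncer code, which also truncates names that exceed length limits):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Map a virtual cluster object name to the name it gets on the host cluster
# when synced (sketch of the visible naming pattern only)
host_name() {
  printf '%s-x-%s-x-%s\n' "$1" "$2" "$3"
}
host_name "dummyhttp-55569589bc-9zl7t" "default" "my-vcluster"
# -> dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster</code></pre></div>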
<section class="doc-section level-2"><h3 id="_creating_a_sealed_secret_for_dummyhttpd">Creating a sealed secret for <code>dummyhttpd</code></h3><p>To use the host’s sealed secrets controller with the virtual cluster we will create the <code>my-vcluster-default</code> namespace
and add to it the sealed secrets that we want to have available as secrets on the <code>default</code> namespace of the virtual cluster:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl create namespace my-vcluster-default
❯ echo -n &#34;Vcluster Boo&#34; | kubectl create secret generic &#34;dummyhttp-secret&#34; \
    --namespace &#34;my-vcluster-default&#34; --dry-run=client \
</span><span class="gp">    --from-file=SECRET_VAR=/dev/stdin -o yaml &gt;</span>dummyhttp-secret.yaml
<span class="go">❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml</span></code></pre></div>
<p>After running the previous commands we have the following objects available on the host cluster:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     34s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      34s</span></code></pre></div>
<p>And we can see that the secret is also available on the virtual cluster with the content we expected:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ vkubectl get secrets
NAME               TYPE     DATA   AGE
dummyhttp-secret   Opaque   1      34s
❯ vkubectl get secret/dummyhttp-secret --template=&#34;{{.data.SECRET_VAR}}&#34; \
  | base64 -d
Vcluster Boo</span></code></pre></div>
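<p>Note that the <code>--template</code> output is simply the base64-encoded value stored in the <code>Secret</code>, so the round trip can be reproduced locally without any cluster:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Encode the value the way Kubernetes stores it in the Secret ...
enc="$(printf '%s' 'Vcluster Boo' | base64)"
echo "$enc"
# -> VmNsdXN0ZXIgQm9v
# ... and decode it as we did with the value returned by vkubectl
printf '%s' "$enc" | base64 -d
# -> Vcluster Boo</code></pre></div>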
<p>But the output of the <code>curl</code> command has not changed: although the <code>reloader</code> controller is deployed on the
host cluster, it cannot see the <code>Deployment</code> object of the virtual cluster, so the pods are never restarted:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">❯ curl <span class="nt">-s</span> https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq <span class="nt">-cM</span> <span class="nb">.</span>
<span class="o">{</span><span class="s2">&#34;c&#34;</span>: <span class="s2">&#34;Vcluster Test Value&#34;</span>,<span class="s2">&#34;s&#34;</span>: <span class="s2">&#34;&#34;</span><span class="o">}</span></code></pre></div></section></section>
<section class="doc-section level-1"><h2 id="_installing_the_reloader_application">Installing the <code>reloader</code> application</h2><p>To make <code>reloader</code> work on the virtual cluster we just need to install it as we did on the host using the following
<code>kustomization.yaml</code> file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">namespace</span><span class="pi">:</span> <span class="s">kube-system</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2</span>
<span class="na">patches</span><span class="pi">:</span>
<span class="c1"># Add flags to reload workloads when ConfigMaps or Secrets are created or deleted</span>
<span class="pi">-</span> <span class="na">target</span><span class="pi">:</span>
    <span class="na">kind</span><span class="pi">:</span> <span class="s">Deployment</span>
    <span class="na">name</span><span class="pi">:</span> <span class="s">reloader-reloader</span>
  <span class="na">patch</span><span class="pi">:</span> <span class="pi">|-</span>
    <span class="s">- op: add</span>
      <span class="s">path: /spec/template/spec/containers/0/args</span>
      <span class="s">value:</span>
        <span class="s">- &#39;--reload-on-create=true&#39;</span>
        <span class="s">- &#39;--reload-on-delete=true&#39;</span>
        <span class="s">- &#39;--reload-strategy=annotations&#39;</span></code></pre></div>
<p>We deploy it with <code>kustomize</code> and <code>vkubectl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">❯ kustomize build <span class="nb">.</span> | vkubectl apply <span class="nt">-f</span> -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created</code></pre></div>
<p>As the controller was not available when the secret was created, the pods linked to the <code>Deployment</code> are not updated, but
we can force things by removing the secret on the host system. After we do that the secret is re-created from the sealed
version and copied to the virtual cluster, where the <code>reloader</code> controller updates the pod and the <code>curl</code> command shows the
new output:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret &#34;dummyhttp-secret&#34; deleted
❯ sleep 2
❯ vkubectl get pods
NAME                         READY   STATUS        RESTARTS   AGE
dummyhttp-78bf5fb885-fmsvs   1/1     Terminating   0          6m33s
dummyhttp-c68684bbf-nx8f9    1/1     Running       0          6s
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{&#34;c&#34;:&#34;Vcluster Test Value&#34;,&#34;s&#34;:&#34;Vcluster Boo&#34;}</span></code></pre></div>
<p>If we change the secret on the host system, things now get updated pretty quickly:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ echo -n &#34;New secret&#34; | kubectl create secret generic &#34;dummyhttp-secret&#34; \
    --namespace &#34;my-vcluster-default&#34; --dry-run=client \
</span><span class="gp">    --from-file=SECRET_VAR=/dev/stdin -o yaml &gt;</span>dummyhttp-secret.yaml
<span class="go">❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{&#34;c&#34;:&#34;Vcluster Test Value&#34;,&#34;s&#34;:&#34;New secret&#34;}</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_pause_and_restore_the_vcluster">Pause and resume the vcluster</h2><p>The status of pods and statefulsets while the virtual cluster is active can be seen using <code>kubectl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl get pods,statefulsets -n my-vcluster
NAME                                                                 READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster              1/1     Running   0          127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster               1/1     Running   0          4m39s
pod/my-vcluster-0                                                    1/1     Running   0          128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster   1/1     Running   0          60m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     128m</span></code></pre></div>
<section class="doc-section level-2"><h3 id="_pausing_the_vcluster">Pausing the vcluster</h3><p>If we don’t need to use the virtual cluster we can pause it; after a short while all <code>Pods</code> are gone because
the <code>StatefulSet</code> is scaled down to 0. Note that other resources like volumes are not removed, but nothing that
has to be scheduled and consumes CPU cycles is left running, which can translate into significant savings on
clusters from cloud platforms or, on a local cluster like the one we are using, frees resources like CPU and memory
for other uses:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
❯ kubectl get pods,statefulsets -n my-vcluster
NAME                           READY   AGE
statefulset.apps/my-vcluster   0/0     130m</span></code></pre></div>
<p>Now the <code>curl</code> command fails:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443
404 page not found</span></code></pre></div>
<p>Although the ingress is still available (it returns a <code>404</code> because there is no pod behind the service):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl get ingress -n my-vcluster
NAME                                CLASS     HOSTS                               ADDRESS                            PORTS   AGE
dummyhttp-x-default-x-my-vcluster   traefik   vcluster-dummyhttp.lo.mixinet.net   172.20.0.2,172.20.0.3,172.20.0.4   80      120m</span></code></pre></div>
<p>In fact, the same problem happens when we try to connect to the <code>vcluster</code> API; the error shown by <code>kubectl</code> is related
to the TLS certificate: the <code>404</code> page is served with the wildcard certificate instead of the vcluster’s self-signed one:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
❯ curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
</span><span class="gp">❯ curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2&gt;</span>&amp;1 | <span class="nb">grep </span>subject
<span class="go">*  subject: CN=lo.mixinet.net
*  subjectAltName: host &#34;my-vcluster-api.lo.mixinet.net&#34; matched cert&#39;s &#34;*.lo.mixinet.net&#34;</span></code></pre></div></section>
<section class="doc-section level-2"><h3 id="_resuming_the_vcluster">Resuming the vcluster</h3><p>When we want to use the virtual cluster again we just need to use the <code>resume</code> command:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ vcluster resume my-vcluster
12:03:14 done Successfully resumed vcluster my-vcluster in namespace my-vcluster</span></code></pre></div>
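<p>As <code>pause</code> and <code>resume</code> are plain CLI commands they can even be scheduled to save resources outside working hours; a hypothetical <code>crontab</code> sketch (the schedule, the availability of a host kubeconfig and the <code>vcluster</code> binary in the <code>PATH</code> are all assumptions to adapt):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Resume the virtual cluster on weekday mornings, pause it in the evening
0 8  * * 1-5  vcluster resume my-vcluster -n my-vcluster
0 20 * * 1-5  vcluster pause  my-vcluster -n my-vcluster</code></pre></div>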
<p>Once all the pods are running the virtual cluster goes back to its previous state, although, of course, all its pods
have been restarted.</p></section></section>
<section class="doc-section level-1"><h2 id="_cleaning_up">Cleaning up</h2><p>The virtual cluster can be removed using the <code>delete</code> command:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted</span></code></pre></div>
<p>That removes everything we used in this post except the sealed secrets and secrets that we put on the
<code>my-vcluster-default</code> namespace, because that namespace was created by us rather than by <code>vcluster</code>.</p>
<p>If we delete the namespace all the secrets and sealed secrets on it are also removed:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl delete namespace my-vcluster-default
namespace &#34;my-vcluster-default&#34; deleted</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_conclusions">Conclusions</h2><p>I believe that the use of virtual clusters can be a good option for two use cases that I’ve encountered
in real projects in the past:</p>
<div class="ulist"><ul><li>the need for short-lived clusters for developers or teams,</li><li>the execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual
clusters that are created on demand, or paused and resumed when needed).</li></ul></div>
<p>For both cases things can be set up using the Apache-licensed product, although it might also be worth evaluating the <strong>vCluster Platform</strong>
offering.</p>
<p>In any case, when not everything runs inside Kubernetes we will also have to work out how to manage the external services
(i.e. if we use databases or message buses as SaaS instead of deploying them inside our clusters, we need a way
to create, delete, pause and resume those services).</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Argo CD Usage Examples</title>
      <link>https://blogops.mixinet.net/posts/gitops/argocd_usage_examples/</link>
      <pubDate>Mon, 05 May 2025 07:50:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/gitops/argocd_usage_examples/</guid>
      <description>&lt;p&gt;As a followup of my post about the use of &lt;a href=&#34;https://blogops.mixinet.net/posts/gitops/argocd-autopilot/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;argocd-autopilot&lt;/a&gt;
I’m going to deploy various applications to the cluster using &lt;a href=&#34;https://argo-cd.readthedocs.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Argo CD&lt;/a&gt; from the same
repository we used on the previous post.&lt;/p&gt;
&lt;p&gt;For our examples we are going to test a solution to the problem we had when we updated a &lt;code&gt;ConfigMap&lt;/code&gt; used by the
&lt;code&gt;argocd-server&lt;/code&gt; (the resource was updated but the application Pod was not because there was no change on the
&lt;code&gt;argocd-server&lt;/code&gt; deployment); our original fix was to kill the pod manually, but the manual operation is something we
want to avoid.&lt;/p&gt;
&lt;p&gt;The solution proposed for this kind of issue in the
&lt;a href=&#34;https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;helm documentation&lt;/a&gt; is to add
annotations to the &lt;code&gt;Deployments&lt;/code&gt; with values that are a hash of the &lt;code&gt;ConfigMaps&lt;/code&gt; or &lt;code&gt;Secrets&lt;/code&gt; they use; this way, if
a file is updated the annotation is also updated, and when the &lt;code&gt;Deployment&lt;/code&gt; changes are applied a rollout of the pods is
triggered.&lt;/p&gt;
&lt;p&gt;In this post we will install a couple of controllers and an application to show how we can handle &lt;code&gt;Secrets&lt;/code&gt; with
&lt;code&gt;argocd&lt;/code&gt; and solve the issue with updates on &lt;code&gt;ConfigMaps&lt;/code&gt; and &lt;code&gt;Secrets&lt;/code&gt;; to do it we will execute the following tasks:&lt;/p&gt;
&lt;div class=&#34;olist arabic&#34;&gt;&lt;ol class=&#34;arabic&#34;&gt;&lt;li&gt;Deploy the &lt;a href=&#34;https://github.com/stakater/Reloader&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Reloader&lt;/a&gt; controller to our cluster. It is a tool that watches
changes in &lt;code&gt;ConfigMaps&lt;/code&gt; and &lt;code&gt;Secrets&lt;/code&gt; and does rolling upgrades on the &lt;code&gt;Pods&lt;/code&gt; that use them from &lt;code&gt;Deployment&lt;/code&gt;,
&lt;code&gt;StatefulSet&lt;/code&gt;, &lt;code&gt;DaemonSet&lt;/code&gt; or &lt;code&gt;DeploymentConfig&lt;/code&gt; objects when they are updated (by default we have to add some
annotations to the objects to make things work).&lt;/li&gt;&lt;li&gt;Deploy a simple application that can use &lt;code&gt;ConfigMaps&lt;/code&gt; and &lt;code&gt;Secrets&lt;/code&gt; and test that the &lt;code&gt;Reloader&lt;/code&gt; controller does its
job when we add or update a &lt;code&gt;ConfigMap&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;Install the &lt;a href=&#34;https://github.com/bitnami-labs/sealed-secrets&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Sealed Secrets&lt;/a&gt; controller to manage secrets inside our
cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.&lt;/li&gt;&lt;/ol&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>As a followup of my post about the use of <a href="https://blogops.mixinet.net/posts/gitops/argocd-autopilot/" target="_blank" rel="noopener">argocd-autopilot</a>
I’m going to deploy various applications to the cluster using <a href="https://argo-cd.readthedocs.io/" target="_blank" rel="noopener">Argo CD</a> from the same
repository we used on the previous post.</p>
<p>For our examples we are going to test a solution to the problem we had when we updated a <code>ConfigMap</code> used by the
<code>argocd-server</code> (the resource was updated but the application Pod was not because there was no change on the
<code>argocd-server</code> deployment); our original fix was to kill the pod manually, but the manual operation is something we
want to avoid.</p>
<p>The solution proposed for this kind of issue in the
<a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments" target="_blank" rel="noopener">helm documentation</a> is to add
annotations to the <code>Deployments</code> with values that are a hash of the <code>ConfigMaps</code> or <code>Secrets</code> they use; this way, if
a file is updated the annotation is also updated, and when the <code>Deployment</code> changes are applied a rollout of the pods is
triggered.</p>
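<p>The trick from the helm documentation looks like this inside a chart’s <code>Deployment</code> template (a generic sketch; the template path and the <code>configmap.yaml</code> file name are placeholders for a real chart):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml">kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # Re-computed on each render; a change in the ConfigMap changes the
        # annotation value and therefore triggers a rollout of the pods
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}</code></pre></div>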
<p>In this post we will install a couple of controllers and an application to show how we can handle <code>Secrets</code> with
<code>argocd</code> and solve the issue with updates on <code>ConfigMaps</code> and <code>Secrets</code>; to do it we will execute the following tasks:</p>
<div class="olist arabic"><ol class="arabic"><li>Deploy the <a href="https://github.com/stakater/Reloader" target="_blank" rel="noopener">Reloader</a> controller to our cluster. It is a tool that watches
changes in <code>ConfigMaps</code> and <code>Secrets</code> and does rolling upgrades on the <code>Pods</code> that use them from <code>Deployment</code>,
<code>StatefulSet</code>, <code>DaemonSet</code> or <code>DeploymentConfig</code> objects when they are updated (by default we have to add some
annotations to the objects to make things work).</li><li>Deploy a simple application that can use <code>ConfigMaps</code> and <code>Secrets</code> and test that the <code>Reloader</code> controller does its
job when we add or update a <code>ConfigMap</code>.</li><li>Install the <a href="https://github.com/bitnami-labs/sealed-secrets" target="_blank" rel="noopener">Sealed Secrets</a> controller to manage secrets inside our
cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.</li></ol></div>
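<p>The annotations mentioned in the first task look like this on a workload when using the default configuration (the annotation key comes from the Reloader README; the <code>Deployment</code> shown is a placeholder):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  annotations:
    # Ask Reloader to watch every ConfigMap and Secret used by this workload
    reloader.stakater.com/auto: "true"</code></pre></div>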
<section class="doc-section level-1"><h2 id="_creating_the_test_project_for_argocd_autopilot">Creating the <code>test</code> project for <code>argocd-autopilot</code></h2><p>As we did our installation using <code>argocd-autopilot</code> we will use its structure to manage the applications.</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>For all examples we are going to assume that the <code>GIT_REPO</code> and <code>GIT_TOKEN</code> variables are defined:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nb">export </span><span class="nv">GIT_REPO</span><span class="o">=</span><span class="s2">&#34;https://forgejo.mixinet.net/blogops/argocd.git&#34;</span><span class="p">;</span>
<span class="nb">export </span><span class="nv">GIT_TOKEN</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>pass mixinet.net/argocd@forgejo/repository-write<span class="si">)</span><span class="s2">&#34;</span><span class="p">;</span></code></pre></div>
<p>And the PATH includes the <code>arkade</code> installation directory:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="k">case</span> <span class="s2">&#34;:</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:&#34;</span> <span class="k">in</span>
  <span class="k">*</span>:<span class="s2">&#34;</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span>:<span class="k">*</span><span class="p">)</span> <span class="p">;;</span>
  <span class="k">*</span><span class="p">)</span> <span class="nb">export </span><span class="nv">PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span> <span class="p">;;</span>
<span class="k">esac</span></code></pre></div></aside>
<p>The first thing to do is to create a project (we will name it <code>test</code>) as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: &#34;&#34;, installation path: &#34;/&#34;
INFO pushing new project manifest to repo
INFO project created: &#39;test&#39;</span></code></pre></div>
<p>Now that the <code>test</code> project is available we will use it on our <code>argocd-autopilot</code> invocations when creating
applications.</p></section>
<section class="doc-section level-1"><h2 id="_installing_the_reloader_controller">Installing the <code>reloader</code> controller</h2><p>To add the <code>reloader</code> application to the <code>test</code> project as a <code>kustomize</code> application and deploy it on the <code>tools</code>
namespace with <code>argocd-autopilot</code> we do the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ argocd-autopilot app create reloader \
    --app &#39;github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2&#39; \
    --project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: &#34;&#34;, installation path: &#34;/&#34;
INFO created &#39;application namespace&#39; file at &#39;/bootstrap/cluster-resources/in-cluster/tools-ns.yaml&#39;
INFO committing changes to gitops repo...
INFO installed application: reloader</span></code></pre></div>
<p>That command creates four files on the <code>argocd</code> repository:</p>
<div class="olist arabic"><ol class="arabic"><li><p>One to create the <code>tools</code> namespace:</p><figure class="listing-block"><figcaption><code>bootstrap/cluster-resources/in-cluster/tools-ns.yaml</code></figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Namespace</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">annotations</span><span class="pi">:</span>
    <span class="na">argocd.argoproj.io/sync-options</span><span class="pi">:</span> <span class="s">Prune=false</span>
  <span class="na">creationTimestamp</span><span class="pi">:</span> <span class="kc">null</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">tools</span>
<span class="na">spec</span><span class="pi">:</span> <span class="pi">{}</span>
<span class="na">status</span><span class="pi">:</span> <span class="pi">{}</span></code></pre></figure></li><li><p>Another to include the <code>reloader</code> base application from the upstream repository:</p><figure class="listing-block"><figcaption><code>apps/reloader/base/kustomization.yaml</code></figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2</span></code></pre></figure></li><li><p>The <code>kustomization.yaml</code> file for the <code>test</code> project (by default it includes the same configuration used on the
<code>base</code> definition, but we could make other changes if needed):</p><figure class="listing-block"><figcaption><code>apps/reloader/overlays/test/kustomization.yaml</code></figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">namespace</span><span class="pi">:</span> <span class="s">tools</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">../../base</span></code></pre></figure></li><li><p>The <code>config.json</code> file used to define the application on <code>argocd</code> for the <code>test</code> project (it points to the folder
that includes the previous <code>kustomization.yaml</code> file):</p><figure class="listing-block"><figcaption><code>apps/reloader/overlays/test/config.json</code></figcaption>
<pre class="rouge highlight"><code data-lang="json"><span class="p">{</span><span class="w">
  </span><span class="nl">&#34;appName&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;reloader&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;userGivenName&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;reloader&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;destNamespace&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;tools&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;destServer&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;https://kubernetes.default.svc&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcPath&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;apps/reloader/overlays/test&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcRepoURL&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;https://forgejo.mixinet.net/blogops/argocd.git&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcTargetRevision&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;labels&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">null</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;annotations&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">null</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>The <code>config.json</code> file is used to build the Argo CD <code>Application</code> from the <code>ApplicationSet</code> defined for the project (the
definition for the <code>test</code> project can be reviewed
<a href="https://forgejo.mixinet.net/blogops/argocd/src/branch/main/projects/test.yaml" target="_blank" rel="noopener">here</a>); the idea is that we use the
<code>config.json</code> or the <code>config_dir.json</code> file to adjust the application parameters used to generate the CRD, which is a lot
shorter and easier than providing the YAML version directly.</p></aside></li></ol></div>
<p>We can check that the application is working using the <code>argocd</code> command line application:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ argocd app get argocd/test-reloader -o tree
Name:               argocd/test-reloader
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          tools
URL:                https://argocd.lo.mixinet.net:8443/applications/test-reloader
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/reloader/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (2893b56)
Health Status:      Healthy

KIND/NAME                                          STATUS  HEALTH   MESSAGE
ClusterRole/reloader-reloader-role                 Synced
ClusterRoleBinding/reloader-reloader-role-binding  Synced
ServiceAccount/reloader-reloader                   Synced           serviceaccount/reloader-reloader created
Deployment/reloader-reloader                       Synced  Healthy  deployment.apps/reloader-reloader created
└─ReplicaSet/reloader-reloader-5b6dcc7b6f                  Healthy
  └─Pod/reloader-reloader-5b6dcc7b6f-vwjcx                 Healthy</span></code></pre></div>
<section class="doc-section level-2"><h3 id="_adding_flags_to_the_reloader_server">Adding flags to the <code>reloader</code> server</h3><p>The runtime configuration flags for the <code>reloader</code> server are described in the project
<a href="https://github.com/stakater/Reloader/blob/master/README.md#6-%EF%B8%8F-optional-runtime-configurations" target="_blank" rel="noopener">README.md</a>
file. In our case we want to adjust three values:</p>
<div class="ulist"><ul><li>We want to enable the option to reload a workload when a <code>ConfigMap</code> or <code>Secret</code> is created,</li><li>We want to enable the option to reload a workload when a <code>ConfigMap</code> or <code>Secret</code> is deleted,</li><li>We want to use the <code>annotations</code> strategy for reloads, as it is the recommended mode of operation when using <code>argocd</code>.</li></ul></div>
<p>To pass them we edit the <code>apps/reloader/overlays/test/kustomization.yaml</code> file to patch the pod container template; the
text added is the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">patches</span><span class="pi">:</span>
<span class="c1"># Add flags to reload workloads when ConfigMaps or Secrets are created or deleted</span>
<span class="pi">-</span> <span class="na">target</span><span class="pi">:</span>
    <span class="na">kind</span><span class="pi">:</span> <span class="s">Deployment</span>
    <span class="na">name</span><span class="pi">:</span> <span class="s">reloader-reloader</span>
  <span class="na">patch</span><span class="pi">:</span> <span class="pi">|-</span>
    <span class="s">- op: add</span>
      <span class="s">path: /spec/template/spec/containers/0/args</span>
      <span class="s">value:</span>
        <span class="s">- &#39;--reload-on-create=true&#39;</span>
        <span class="s">- &#39;--reload-on-delete=true&#39;</span>
        <span class="s">- &#39;--reload-strategy=annotations&#39;</span></code></pre></div>
<p>After committing and pushing the updated file, the system relaunches the application with the new options.</p></section></section>
<section class="doc-section level-1"><h2 id="_the_dummyhttp_application">The <code>dummyhttp</code> application</h2><p>To do a quick test we are going to deploy the <a href="https://github.com/svenstaro/dummyhttp/" target="_blank" rel="noopener">dummyhttp</a> web server using an
image generated using the following <code>Dockerfile</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="docker"><span class="c"># Image to run the dummyhttp application &lt;https://github.com/svenstaro/dummyhttp&gt;</span>

<span class="c"># This arg could be passed by the container build command (used with mirrors)</span>
<span class="k">ARG</span><span class="s"> OCI_REGISTRY_PREFIX</span>

<span class="c"># Latest tested version of alpine</span>
<span class="k">FROM</span><span class="s"> ${OCI_REGISTRY_PREFIX}alpine:3.21.3</span>

<span class="c"># Tool versions</span>
<span class="k">ARG</span><span class="s"> DUMMYHTTP_VERS=1.1.1</span>

<span class="c"># Download binary</span>
<span class="k">RUN </span><span class="nv">ARCH</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>apk <span class="nt">--print-arch</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span>  <span class="nv">VERS</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$DUMMYHTTP_VERS</span><span class="s2">&#34;</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span>  <span class="nv">URL</span><span class="o">=</span><span class="s2">&#34;https://github.com/svenstaro/dummyhttp/releases/download/v</span><span class="nv">$VERS</span><span class="s2">/dummyhttp-</span><span class="nv">$VERS</span><span class="s2">-</span><span class="nv">$ARCH</span><span class="s2">-unknown-linux-musl&#34;</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span>  wget <span class="s2">&#34;</span><span class="nv">$URL</span><span class="s2">&#34;</span> <span class="nt">-O</span> <span class="s2">&#34;/tmp/dummyhttp&#34;</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span>  <span class="nb">install</span> /tmp/dummyhttp /usr/local/bin <span class="o">&amp;&amp;</span> <span class="se">\
</span>  <span class="nb">rm</span> <span class="nt">-f</span> /tmp/dummyhttp

<span class="c"># Set the entrypoint to /usr/local/bin/dummyhttp</span>
<span class="k">ENTRYPOINT</span><span class="s"> [ &#34;/usr/local/bin/dummyhttp&#34; ]</span></code></pre></div>
<p>The <code>kustomize</code> base application is available in a <code>monorepo</code> that contains the following files:</p>
<div class="olist arabic"><ol class="arabic"><li><p>A <code>Deployment</code> definition that uses the previous image but sets <code>/bin/sh -c</code> as its <code>entrypoint</code> (<code>command</code> in
k8s <code>Pod</code> terminology) and passes as its argument a string that runs <code>eval</code> to expand the
environment variables passed to the <code>pod</code> (the definition includes two optional variables, one taken from a
<code>ConfigMap</code> and another one from a <code>Secret</code>):</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">apps/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Deployment</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app</span><span class="pi">:</span> <span class="s">dummyhttp</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">selector</span><span class="pi">:</span>
    <span class="na">matchLabels</span><span class="pi">:</span>
      <span class="na">app</span><span class="pi">:</span> <span class="s">dummyhttp</span>
  <span class="na">template</span><span class="pi">:</span>
    <span class="na">metadata</span><span class="pi">:</span>
      <span class="na">labels</span><span class="pi">:</span>
        <span class="na">app</span><span class="pi">:</span> <span class="s">dummyhttp</span>
    <span class="na">spec</span><span class="pi">:</span>
      <span class="na">containers</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp</span>
        <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/dummyhttp:1.0.0</span>
        <span class="na">command</span><span class="pi">:</span> <span class="pi">[</span> <span class="s2">&#34;</span><span class="s">/bin/sh&#34;</span><span class="pi">,</span> <span class="s2">&#34;</span><span class="s">-c&#34;</span> <span class="pi">]</span>
        <span class="na">args</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">eval</span><span class="nv"> </span><span class="s">dummyhttp</span><span class="nv"> </span><span class="s">-b</span><span class="nv"> </span><span class="s">\&#34;{\\\&#34;c\\\&#34;:</span><span class="nv"> </span><span class="s">\\\&#34;$CM_VAR\\\&#34;,</span><span class="nv"> </span><span class="s">\\\&#34;s\\\&#34;:</span><span class="nv"> </span><span class="s">\\\&#34;$SECRET_VAR\\\&#34;}\&#34;&#39;</span>
        <span class="na">ports</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="m">8080</span>
        <span class="na">env</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">CM_VAR</span>
          <span class="na">valueFrom</span><span class="pi">:</span>
            <span class="na">configMapKeyRef</span><span class="pi">:</span>
              <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp-configmap</span>
              <span class="na">key</span><span class="pi">:</span> <span class="s">CM_VAR</span>
              <span class="na">optional</span><span class="pi">:</span> <span class="kc">true</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">SECRET_VAR</span>
          <span class="na">valueFrom</span><span class="pi">:</span>
            <span class="na">secretKeyRef</span><span class="pi">:</span>
              <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp-secret</span>
              <span class="na">key</span><span class="pi">:</span> <span class="s">SECRET_VAR</span>
              <span class="na">optional</span><span class="pi">:</span> <span class="kc">true</span></code></pre></div></li><li><p>A <code>Service</code> that publishes the previous <code>Deployment</code> (the only relevant detail is that the web server listens on
port <code>8080</code> by default):</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Service</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">selector</span><span class="pi">:</span>
    <span class="na">app</span><span class="pi">:</span> <span class="s">dummyhttp</span>
  <span class="na">ports</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">http</span>
    <span class="na">port</span><span class="pi">:</span> <span class="m">80</span>
    <span class="na">targetPort</span><span class="pi">:</span> <span class="s">8080</span></code></pre></div></li><li><p>An <code>Ingress</code> definition to allow access to the application from the outside:</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">networking.k8s.io/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Ingress</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp</span>
  <span class="na">annotations</span><span class="pi">:</span>
    <span class="na">traefik.ingress.kubernetes.io/router.tls</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">true&#34;</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">host</span><span class="pi">:</span> <span class="s">dummyhttp.localhost.mixinet.net</span>
      <span class="na">http</span><span class="pi">:</span>
        <span class="na">paths</span><span class="pi">:</span>
          <span class="pi">-</span> <span class="na">path</span><span class="pi">:</span> <span class="s">/</span>
            <span class="na">pathType</span><span class="pi">:</span> <span class="s">Prefix</span>
            <span class="na">backend</span><span class="pi">:</span>
              <span class="na">service</span><span class="pi">:</span>
                <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp</span>
                <span class="na">port</span><span class="pi">:</span>
                  <span class="na">number</span><span class="pi">:</span> <span class="s">80</span></code></pre></div></li><li><p>And the <code>kustomization.yaml</code> file that includes the previous files:</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>

<span class="na">resources</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">deployment.yaml</span>
<span class="pi">-</span> <span class="s">service.yaml</span>
<span class="pi">-</span> <span class="s">ingress.yaml</span></code></pre></div></li></ol></div>
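<p>The quoting in the <code>args</code> entry of the <code>Deployment</code> above is dense; a quick way to check it is to replace <code>dummyhttp -b</code> with <code>echo</code> and run the same string locally (a sketch, assuming a POSIX shell):</p>

```shell
# Simulate the container command: the outer sh -c expands the variables and
# word-splits them; eval then re-joins and re-quotes everything so values with
# spaces survive as a single JSON argument (echo stands in for `dummyhttp -b`).
CM_VAR='Default Test Value' SECRET_VAR='' \
  sh -c 'eval echo \"{\\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\"}\"'
# → {"c": "Default Test Value", "s": ""}
```

<p>Without the <code>eval</code> step the space-containing value would be split into separate arguments instead of staying inside one quoted JSON string.</p>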
<section class="doc-section level-2"><h3 id="_deploying_the_dummyhttp_application_from_argocd">Deploying the <code>dummyhttp</code> application from <code>argocd</code></h3><p>We could create the <code>dummyhttp</code> application using the <code>argocd-autopilot</code> command as we did in the <code>reloader</code> case,
but we are going to do it manually to show how simple it is.</p>
<p>First we’ve created the <code>apps/dummyhttp/base/kustomization.yaml</code> file to include the application from the previous
repository:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">resources</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0</span></code></pre></div>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>As we are not using github.com to publish the application, we add the resource using the full <code>https</code> URI, separating
the path from the repository base address with <code>//</code> and selecting the <code>tag</code> we want to use with <code>?ref=</code>.</p></aside>
<p>As a second step we create the <code>apps/dummyhttp/overlays/test/kustomization.yaml</code> file to include the previous file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">../../base</span></code></pre></div>
<p>And finally we add the <code>apps/dummyhttp/overlays/test/config.json</code> file to configure the application as the
<code>ApplicationSet</code> defined by <code>argocd-autopilot</code> expects:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="json"><span class="p">{</span><span class="w">
  </span><span class="nl">&#34;appName&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;dummyhttp&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;userGivenName&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;dummyhttp&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;destNamespace&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;default&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;destServer&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;https://kubernetes.default.svc&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcPath&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;apps/dummyhttp/overlays/test&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcRepoURL&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;https://forgejo.mixinet.net/blogops/argocd.git&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcTargetRevision&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;labels&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">null</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;annotations&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">null</span><span class="w">
</span><span class="p">}</span></code></pre></div>
<p>Once we have the three files we commit and push the changes and <code>argocd</code> deploys the application; we can check that
things are working using <code>curl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ curl -s https://dummyhttp.lo.mixinet.net:8443/ | jq -M .
{
  &#34;c&#34;: &#34;&#34;,
  &#34;s&#34;: &#34;&#34;
}</span></code></pre></div></section>
<section class="doc-section level-2"><h3 id="_patching_the_application">Patching the application</h3><p>Now we will add patches to the <code>apps/dummyhttp/overlays/test/kustomization.yaml</code> file:</p>
<div class="ulist"><ul><li>One to add annotations for <code>reloader</code> (one to enable it and another to set the rollout strategy to <code>restart</code> to
avoid modifying the <code>deployments</code>, as that can generate issues with <code>argocd</code>).</li><li>Another to change the ingress <code>hostname</code> (not really needed, but quite reasonable for a specific <code>project</code>).</li></ul></div>
<p>The file diff is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="diff"><span class="gd">--- a/apps/dummyhttp/overlays/test/kustomization.yaml
</span><span class="gi">+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
</span><span class="p">@@ -2,3 +2,22 @@</span> apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
<span class="gi">+patches:
+# Add reloader annotations
+- target:
+    kind: Deployment
+    name: dummyhttp
+  patch: |-
+    - op: add
+      path: /metadata/annotations
+      value:
+        reloader.stakater.com/auto: &#34;true&#34;
+        reloader.stakater.com/rollout-strategy: &#34;restart&#34;
+# Change the ingress host name
+- target:
+    kind: Ingress
+    name: dummyhttp
+  patch: |-
+    - op: replace
+      path: /spec/rules/0/host
+      value: test-dummyhttp.lo.mixinet.net</span></code></pre></div>
<p>After committing and pushing the changes we can use the <code>argocd</code> cli to check the status of the application:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ argocd app get argocd/test-dummyhttp -o tree
Name:               argocd/test-dummyhttp
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.lo.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/dummyhttp/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (fbc6031)
Health Status:      Healthy

KIND/NAME                           STATUS  HEALTH   MESSAGE
Deployment/dummyhttp                Synced  Healthy  deployment.apps/dummyhttp configured
└─ReplicaSet/dummyhttp-55569589bc           Healthy
  └─Pod/dummyhttp-55569589bc-qhnfk          Healthy
Ingress/dummyhttp                   Synced  Healthy  ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp                   Synced  Healthy  service/dummyhttp unchanged
├─Endpoints/dummyhttp
└─EndpointSlice/dummyhttp-x57bl</span></code></pre></div>
<p>As we can see, the <code>Deployment</code> and <code>Ingress</code> were updated, but the <code>Service</code> is unchanged.</p>
<p>To validate that the ingress is using the new <code>hostname</code> we can use <code>curl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ curl -s https://dummyhttp.lo.mixinet.net:8443/
404 page not found
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443/
{&#34;c&#34;: &#34;&#34;, &#34;s&#34;: &#34;&#34;}</span></code></pre></div></section>
<section class="doc-section level-2"><h3 id="_adding_a_configmap">Adding a <code>ConfigMap</code></h3><p>Now that the system is adjusted to reload the application when the <code>ConfigMap</code> or <code>Secret</code> is created, deleted or
updated, we are ready to add a file and see how the system reacts.</p>
<p>We modify the <code>apps/dummyhttp/overlays/test/kustomization.yaml</code> file to create the <code>ConfigMap</code> using the
<code>configMapGenerator</code> as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="diff"><span class="gd">--- a/apps/dummyhttp/overlays/test/kustomization.yaml
</span><span class="gi">+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
</span><span class="p">@@ -2,6 +2,14 @@</span> apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
<span class="gi">+# Add the config map
+configMapGenerator:
+- name: dummyhttp-configmap
+  literals:
+  - CM_VAR=&#34;Default Test Value&#34;
+  behavior: create
+  options:
+    disableNameSuffixHash: true
</span> patches:
 # Add reloader annotations
 - target:</code></pre></div>
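<p>Because <code>disableNameSuffixHash</code> is set to <code>true</code>, the generated <code>ConfigMap</code> keeps the stable name that the <code>configMapKeyRef</code> in the <code>Deployment</code> expects (without it <code>kustomize</code> appends a content hash to the name). The generated object looks roughly like this:</p>

```yaml
# Sketch of the ConfigMap emitted by the configMapGenerator above
apiVersion: v1
kind: ConfigMap
metadata:
  name: dummyhttp-configmap
data:
  CM_VAR: Default Test Value
```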
<p>After committing and pushing the changes we can see that the <code>ConfigMap</code> is available, the pod has been deleted and
started again and the <code>curl</code> output includes the new value:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl get configmaps,pods
NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      11s
configmap/kube-root-ca.crt      1      4d7h

NAME                             READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  &#34;c&#34;: &#34;Default Test Value&#34;,
  &#34;s&#34;: &#34;&#34;
}</span></code></pre></div></section></section>
<section class="doc-section level-1"><h2 id="_using_helm_with_argocd_autopilot">Using <code>helm</code> with <code>argocd-autopilot</code></h2><p>Right now there is no direct support in <code>argocd-autopilot</code> for managing applications with <code>helm</code> (see issue
<a href="https://github.com/argoproj-labs/argocd-autopilot/issues/38" target="_blank" rel="noopener">#38</a> in the project tracker), but we want to use a chart in our
next example.</p>
<p>There are multiple ways to add the support, but the simplest one that allows us to keep using <code>argocd-autopilot</code> is to
use <code>kustomize</code> applications that call <code>helm</code> as described
<a href="https://github.com/argoproj-labs/argocd-autopilot/issues/38#issuecomment-1117961569" target="_blank" rel="noopener">here</a>.</p>
<p>The only thing needed before we can use this approach is to add the <code>kustomize.buildOptions</code> flag to the
<code>argocd-cm</code> in the <code>bootstrap/argo-cd/kustomization.yaml</code> file; its contents are now as follows:</p>
<figure class="listing-block"><figcaption>bootstrap/argo-cd/kustomization.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">configMapGenerator</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">behavior</span><span class="pi">:</span> <span class="s">merge</span>
  <span class="na">literals</span><span class="pi">:</span>
  <span class="c1"># Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)</span>
  <span class="pi">-</span> <span class="s">kustomize.buildOptions=&#34;--enable-helm&#34;</span>
  <span class="pi">-</span> <span class="pi">|</span>
    <span class="s">repository.credentials=- passwordSecret:</span>
        <span class="s">key: git_token</span>
        <span class="s">name: autopilot-secret</span>
      <span class="s">url: https://forgejo.mixinet.net/</span>
      <span class="s">usernameSecret:</span>
        <span class="s">key: git_username</span>
        <span class="s">name: autopilot-secret</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">argocd-cm</span>
  <span class="c1"># Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)</span>
<span class="pi">-</span> <span class="na">behavior</span><span class="pi">:</span> <span class="s">merge</span>
  <span class="na">literals</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">server.insecure=true&#34;</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">argocd-cmd-params-cm</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">namespace</span><span class="pi">:</span> <span class="s">argocd</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19</span>
<span class="pi">-</span> <span class="s">ingress_route.yaml</span></code></pre></figure>
<p>In the following section we will explain how the application is defined to make things work.</p></section>
<section class="doc-section level-1"><h2 id="_installing_the_sealed_secrets_controller">Installing the <code>sealed-secrets</code> controller</h2><p>To manage <code>secrets</code> in our cluster we are going to use the
<a href="https://github.com/bitnami-labs/sealed-secrets" target="_blank" rel="noopener">sealed-secrets</a> controller and to install it we are going to use its
chart.</p>
<p>As we mentioned in the previous section, the idea is to create a <code>kustomize</code> application and use it to deploy the
chart, but we are going to create the files manually, as we are not going to import the base <code>kustomization</code> files from a
remote repository.</p>
<p>As there is no clear way to <a href="https://github.com/kubernetes-sigs/kustomize/issues/4658" target="_blank" rel="noopener">override helm Chart values using
overlays</a>, we are going to use a generator to create the helm configuration from an external resource and include it
from our overlays (the idea has been taken from this <a href="https://github.com/wmiller112/kustomize-helm-values/" target="_blank" rel="noopener">repository</a>,
which was referenced from a <a href="https://github.com/kubernetes-sigs/kustomize/issues/4658#issuecomment-2603392649" target="_blank" rel="noopener">comment</a>
on the <code>kustomize</code> issue <a href="https://github.com/kubernetes-sigs/kustomize/issues/4658" target="_blank" rel="noopener">#4658</a> linked above).</p>
<section class="doc-section level-2"><h3 id="_the_sealed_secrets_application">The <code>sealed-secrets</code> application</h3><p>We have created the following files and folders manually:</p>
<div class="listing-block"><pre class="rouge highlight"><code>apps/sealed-secrets/
├── helm
│   ├── chart.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── config.json
        ├── kustomization.yaml
        └── values.yaml</code></pre></div>
<p>The <code>helm</code> folder contains the <code>generator</code> template that will be included from our <code>overlays</code>.</p>
<p>The <code>kustomization.yaml</code> includes the <code>chart.yaml</code> as a resource:</p>
<figure class="listing-block"><figcaption>apps/sealed-secrets/helm/kustomization.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="na">resources</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">chart.yaml</span></code></pre></figure>
<p>And the <code>chart.yaml</code> file defines the <code>HelmChartInflationGenerator</code>:</p>
<figure class="listing-block"><figcaption>apps/sealed-secrets/helm/chart.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">builtin</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">HelmChartInflationGenerator</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">sealed-secrets</span>
<span class="na">releaseName</span><span class="pi">:</span> <span class="s">sealed-secrets</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">sealed-secrets</span>
<span class="na">namespace</span><span class="pi">:</span> <span class="s">kube-system</span>
<span class="na">repo</span><span class="pi">:</span> <span class="s">https://bitnami-labs.github.io/sealed-secrets</span>
<span class="na">version</span><span class="pi">:</span> <span class="s">2.17.2</span>
<span class="na">includeCRDs</span><span class="pi">:</span> <span class="kc">true</span>
<span class="c1"># Add common values to all argo-cd projects inline</span>
<span class="na">valuesInline</span><span class="pi">:</span>
  <span class="na">fullnameOverride</span><span class="pi">:</span> <span class="s">sealed-secrets-controller</span>
<span class="c1"># Load a values.yaml file from the same directory that uses this generator</span>
<span class="na">valuesFile</span><span class="pi">:</span> <span class="s">values.yaml</span></code></pre></figure>
<p>For this chart the template sets the <code>namespace</code> to <code>kube-system</code> and adds the <code>fullnameOverride</code> in the
<code>valuesInline</code> key because we want to use those settings in all the <code>projects</code> (they are the values expected by the
<code>kubeseal</code> command-line application, so we adjust them to avoid having to pass additional parameters to it).</p>
<p>We adjust global values as <em>inline</em> to be able to use the <code>valuesFile</code> from our overlays; as we are using a generator,
the path is relative to the folder that contains the <code>kustomization.yaml</code> file that calls it, so in our case we need
to have a <code>values.yaml</code> file in each <code>overlay</code> folder (if we don’t want to override any values for a project we can
create an empty file, but it has to exist).</p>
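<p>As an example, reusing the application for a hypothetical <code>prod</code> project (the name is illustrative) would only require a new overlay folder; note that the <code>values.yaml</code> file is created even though it is empty:</p>

```shell
# Scaffold a hypothetical "prod" overlay that reuses the shared helm generator.
mkdir -p apps/sealed-secrets/overlays/prod

# The generator's valuesFile is resolved relative to this folder, so the file
# must exist even when we do not override any chart values.
: > apps/sealed-secrets/overlays/prod/values.yaml

cat > apps/sealed-secrets/overlays/prod/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- ../../helm
EOF
```

<p>A <code>config.json</code> file like the ones shown earlier would also be needed for the <code>argocd-autopilot</code> <code>ApplicationSet</code> to pick the new overlay up.</p>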
<p>Finally, our overlay folder contains three files: a <code>kustomization.yaml</code> file that includes the generator from the
<code>helm</code> folder, the <code>values.yaml</code> file needed by the chart and the <code>config.json</code> file used by <code>argocd-autopilot</code> to
install the application.</p>
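<p>As an illustration, the resulting layout of the overlay folder for the <code>test</code> environment looks like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console">apps/sealed-secrets/overlays/test/
├── config.json
├── kustomization.yaml
└── values.yaml</code></pre></div>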
<p>The <code>kustomization.yaml</code> file contents are:</p>
<figure class="listing-block"><figcaption>apps/sealed-secrets/overlays/test/kustomization.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">kustomize.config.k8s.io/v1beta1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Kustomization</span>
<span class="c1"># Uncomment if you want to add additional resources using kustomize</span>
<span class="c1">#resources:</span>
<span class="c1">#- ../../base</span>
<span class="na">generators</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">../../helm</span></code></pre></figure>
<p>The <code>values.yaml</code> file enables the <code>ingress</code> for the application and adjusts its <code>hostname</code>:</p>
<figure class="listing-block"><figcaption>apps/sealed-secrets/overlays/test/values.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">ingress</span><span class="pi">:</span>
  <span class="na">enabled</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">hostname</span><span class="pi">:</span> <span class="s">test-sealed-secrets.lo.mixinet.net</span></code></pre></figure>
<p>And the <code>config.json</code> file is similar to the ones used with the other applications we have installed:</p>
<figure class="listing-block"><figcaption>apps/sealed-secrets/overlays/test/config.json</figcaption>
<pre class="rouge highlight"><code data-lang="json"><span class="p">{</span><span class="w">
  </span><span class="nl">&#34;appName&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;sealed-secrets&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;userGivenName&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;sealed-secrets&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;destNamespace&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;kube-system&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;destServer&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;https://kubernetes.default.svc&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcPath&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;apps/sealed-secrets/overlays/test&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcRepoURL&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;https://forgejo.mixinet.net/blogops/argocd.git&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;srcTargetRevision&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;labels&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">null</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;annotations&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">null</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
<p>Once we commit and push the files the <code>sealed-secrets</code> application is installed in our cluster; we can check it using
<code>curl</code> to retrieve the public certificate it uses:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">❯ curl <span class="nt">-s</span> https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
<span class="nt">-----BEGIN</span> CERTIFICATE-----
<span class="o">[</span>...]
<span class="nt">-----END</span> CERTIFICATE-----</code></pre></div></section>
<section class="doc-section level-2"><h3 id="_the_dummyhttp_secret">The <code>dummyhttp-secret</code></h3><p>To create sealed secrets we need to install the <code>kubeseal</code> tool:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ arkade get kubeseal</span></code></pre></div>
<p>Now we create a local version of the <code>dummyhttp-secret</code> that contains a value in the <code>SECRET_VAR</code> key (the easiest
way to do it is to use <code>kubectl</code>):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ echo -n &#34;Boo&#34; | kubectl create secret generic dummyhttp-secret \
    --dry-run=client --from-file=SECRET_VAR=/dev/stdin -o yaml \
</span><span class="gp">    &gt;</span>/tmp/dummyhttp-secret.yaml</code></pre></div>
<p>The secret definition in yaml format is:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">data</span><span class="pi">:</span>
  <span class="na">SECRET_VAR</span><span class="pi">:</span> <span class="s">Qm9v</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Secret</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">creationTimestamp</span><span class="pi">:</span> <span class="kc">null</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp-secret</span></code></pre></div>
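<p>The <code>data</code> values of a plain <code>Secret</code> are only base64 encoded (not encrypted), which we can verify locally:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console">❯ echo -n &#34;Qm9v&#34; | base64 -d
Boo</code></pre></div>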
<p>To create a sealed version using the <code>kubeseal</code> tool we can do the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml</span></code></pre></div>
<p>That invocation needs access to the cluster to do its job; in our case it works because we modified the chart
to use the <code>kube-system</code> namespace and set the controller name to <code>sealed-secrets-controller</code>, as the tool expects.</p>
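<p>If the controller had been deployed with a different name or namespace we could point <code>kubeseal</code> at it explicitly instead of adjusting the chart, using its controller flags:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console">❯ kubeseal --controller-name sealed-secrets-controller \
    --controller-namespace kube-system \
    -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml</code></pre></div>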
<p>If we need to create the secrets without cluster credentials we can connect to the ingress address we added to retrieve the
public certificate:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem</span></code></pre></div>
<p>Or, if we don’t have access to the ingress address, we can save the certificate to a file beforehand and use it instead of the URL.</p>
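<p>A sketch of that workflow could be as follows (the certificate path is just an illustration):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console">❯ curl -s https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem \
    &gt;/tmp/sealed-secrets-cert.pem
❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert /tmp/sealed-secrets-cert.pem</code></pre></div>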
<p>The sealed version of the secret looks like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">bitnami.com/v1alpha1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">SealedSecret</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">creationTimestamp</span><span class="pi">:</span> <span class="kc">null</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp-secret</span>
  <span class="na">namespace</span><span class="pi">:</span> <span class="s">default</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">encryptedData</span><span class="pi">:</span>
    <span class="na">SECRET_VAR</span><span class="pi">:</span> <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>
  <span class="na">template</span><span class="pi">:</span>
    <span class="na">metadata</span><span class="pi">:</span>
      <span class="na">creationTimestamp</span><span class="pi">:</span> <span class="kc">null</span>
      <span class="na">name</span><span class="pi">:</span> <span class="s">dummyhttp-secret</span>
      <span class="na">namespace</span><span class="pi">:</span> <span class="s">default</span></code></pre></div>
<p>This file can be deployed to the cluster to create the secret (in our case we will add it to the <code>argocd</code> application),
but before doing that we are going to check the output of our <code>dummyhttp</code> service and get the list of <code>Secrets</code> and
<code>SealedSecrets</code> in the default namespace:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  &#34;c&#34;: &#34;Default Test Value&#34;,
  &#34;s&#34;: &#34;&#34;
}
❯ kubectl get sealedsecrets,secrets
No resources found in default namespace.</span></code></pre></div>
<p>Now we add the <code>SealedSecret</code> to the <code>dummyhttp</code> application by copying the file and adding it to the <code>kustomization.yaml</code> file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="diff"><span class="gd">--- a/apps/dummyhttp/overlays/test/kustomization.yaml
</span><span class="gi">+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
</span><span class="p">@@ -2,6 +2,7 @@</span> apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
<span class="gi">+- dummyhttp-sealed-secret.yaml
</span> # Create the config map value
 configMapGenerator:
 - name: dummyhttp-configmap</code></pre></div>
<p>Once the sealed secret is applied (we can commit and push the files and let Argo CD create it, or apply the file manually) the
<code>SealedSecret</code> appears in the cluster and the controller generates the <code>Secret</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
❯ kubectl get sealedsecrets,secrets
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     3s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      3s</span></code></pre></div>
<p>If we check the service output again we can see the new value of the secret:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  &#34;c&#34;: &#34;Default Test Value&#34;,
  &#34;s&#34;: &#34;Boo&#34;
}</span></code></pre></div></section>
<section class="doc-section level-2"><h3 id="_using_sealed_secrets_in_production_clusters">Using <code>sealed-secrets</code> in production clusters</h3><p>If you plan to use <code>sealed-secrets</code> look into its
<a href="https://github.com/bitnami-labs/sealed-secrets/blob/main/README.md" target="_blank" rel="noopener">documentation</a> to understand how it manages the
private keys and how to back them up, and keep in mind that, as the documentation
<a href="https://github.com/bitnami-labs/sealed-secrets/blob/main/README.md#re-encryption-advanced" target="_blank" rel="noopener">explains</a>, you can re-encrypt
your sealed version of the secrets, but that doesn’t change the actual secrets.</p>
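<p>As an example of such a backup (a sketch based on the project documentation; adjust the namespace if yours differs), we can export the controller key pairs using their label:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console">❯ kubectl get secret -n kube-system \
    -l sealedsecrets.bitnami.com/sealed-secrets-key \
    -o yaml &gt;sealed-secrets-keys.backup.yaml</code></pre></div>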
<p>If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the
controller also rotates the encryption keys, your new sealed version will use a newer key, so you will be doing
both things at the same time).</p></section></section>
<section class="doc-section level-1"><h2 id="_final_remarks">Final remarks</h2><p>In this post we have seen how to deploy applications using the <code>argocd-autopilot</code> model, including the use of <code>helm</code>
charts inside <code>kustomize</code> applications and how to install and use the <code>sealed-secrets</code> controller.</p>
<p>It has been interesting and I’ve learnt a lot about <code>argocd</code> in the process, but I believe that if I ever want to use it
in production I will also review the native <code>helm</code> support in <code>argocd</code> using a separate repository to manage the
applications, at least to be able to compare it to the model explained here.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>ArgoCD Autopilot</title>
      <link>https://blogops.mixinet.net/posts/gitops/argocd-autopilot/</link>
      <pubDate>Mon, 28 Apr 2025 09:50:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/gitops/argocd-autopilot/</guid>
      <description>&lt;p&gt;For a long time I’ve been wanting to try GitOps tools, but I haven’t had the chance to try them for real on the projects
I was working on.&lt;/p&gt;
&lt;p&gt;As now I have some spare time I’ve decided I’m going to play a little with &lt;a href=&#34;https://argo-cd.readthedocs.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Argo CD&lt;/a&gt;,
&lt;a href=&#34;https://fluxcd.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Flux&lt;/a&gt; and &lt;a href=&#34;https://kluctl.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Kluctl&lt;/a&gt; to test them and be able to use one of them in a real project
in the future if it looks appropriate.&lt;/p&gt;
&lt;p&gt;In this post I will use &lt;a href=&#34;https://argocd-autopilot.readthedocs.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Argo-CD Autopilot&lt;/a&gt; to install &lt;code&gt;argocd&lt;/code&gt; on a
&lt;a href=&#34;https://k3d.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;k3d&lt;/a&gt; local cluster installed using &lt;a href=&#34;https://opentofu.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;OpenTofu&lt;/a&gt; to test the autopilot approach of
managing &lt;code&gt;argocd&lt;/code&gt; and test the tool (as it manages &lt;code&gt;argocd&lt;/code&gt; using a git repository it can be used to test &lt;code&gt;argocd&lt;/code&gt; as
well).&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_installing_tools_locally_with_arkade&#34;&gt;Installing tools locally with &lt;code&gt;arkade&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;Recently I’ve been using the &lt;a href=&#34;https://github.com/alexellis/arkade&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;arkade&lt;/a&gt; tool to install &lt;code&gt;kubernetes&lt;/code&gt;-related
applications on Linux servers and containers; I usually get the applications with it and install them in the
&lt;code&gt;/usr/local/bin&lt;/code&gt; folder.&lt;/p&gt;
&lt;p&gt;For this post I’ve created a simple script that checks if the tools I’ll be using are available and installs them on the
&lt;code&gt;$HOME/.arkade/bin&lt;/code&gt; folder if missing (I’m assuming that &lt;code&gt;docker&lt;/code&gt; is already available, as it is not installable with
&lt;code&gt;arkade&lt;/code&gt;):&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;shell&#34;&gt;&lt;span class=&#34;c&#34;&gt;#!/bin/sh&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# TOOLS LIST&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;ARKADE_APPS&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;argocd argocd-autopilot k3d kubectl sops tofu&amp;#34;&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Add the arkade binary directory to the path if missing&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;case&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;:&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;PATH&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;:&amp;#34;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;in&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;*&lt;/span&gt;:&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;HOME&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;/.arkade/bin&amp;#34;&lt;/span&gt;:&lt;span class=&#34;k&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;;;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;*&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;)&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;export &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;PATH&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;PATH&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;HOME&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;/.arkade/bin&amp;#34;&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;;;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;esac&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Install or update arkade&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;if &lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;command&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-v&lt;/span&gt; arkade &lt;span class=&#34;o&#34;&gt;&amp;gt;&lt;/span&gt;/dev/null&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
  &lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;Trying to update the arkade application&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;nb&#34;&gt;sudo &lt;/span&gt;arkade update
&lt;span class=&#34;k&#34;&gt;else
  &lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;Installing the arkade application&amp;#34;&lt;/span&gt;
  curl &lt;span class=&#34;nt&#34;&gt;-sLS&lt;/span&gt; https://get.arkade.dev | &lt;span class=&#34;nb&#34;&gt;sudo &lt;/span&gt;sh
&lt;span class=&#34;k&#34;&gt;fi

&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;Installing tools with arkade&amp;#34;&lt;/span&gt;
&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;for &lt;/span&gt;app &lt;span class=&#34;k&#34;&gt;in&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;$ARKADE_APPS&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;do
  &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;app_path&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;$(&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;command&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-v&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;$app&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;||&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;true
  &lt;/span&gt;&lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$app_path&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
    &lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;The application &amp;#39;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$app&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#39; already available on &amp;#39;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$app_path&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#39;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;else
    &lt;/span&gt;arkade get &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$app&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;fi
done

&lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;cat&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&#34;no&#34;&gt;EOF&lt;/span&gt;&lt;span class=&#34;sh&#34;&gt;

Add the ~/.arkade/bin directory to your PATH if tools have been installed there

EOF&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>For a long time I’ve been wanting to try GitOps tools, but I haven’t had the chance to try them for real on the projects
I was working on.</p>
<p>As now I have some spare time I’ve decided I’m going to play a little with <a href="https://argo-cd.readthedocs.io/" target="_blank" rel="noopener">Argo CD</a>,
<a href="https://fluxcd.io/" target="_blank" rel="noopener">Flux</a> and <a href="https://kluctl.io/" target="_blank" rel="noopener">Kluctl</a> to test them and be able to use one of them in a real project
in the future if it looks appropriate.</p>
<p>In this post I will use <a href="https://argocd-autopilot.readthedocs.io/" target="_blank" rel="noopener">Argo-CD Autopilot</a> to install <code>argocd</code> on a
<a href="https://k3d.io/" target="_blank" rel="noopener">k3d</a> local cluster installed using <a href="https://opentofu.org/" target="_blank" rel="noopener">OpenTofu</a> to test the autopilot approach of
managing <code>argocd</code> and test the tool (as it manages <code>argocd</code> using a git repository it can be used to test <code>argocd</code> as
well).</p>
<section class="doc-section level-1"><h2 id="_installing_tools_locally_with_arkade">Installing tools locally with <code>arkade</code></h2><p>Recently I’ve been using the <a href="https://github.com/alexellis/arkade" target="_blank" rel="noopener">arkade</a> tool to install <code>kubernetes</code>-related
applications on Linux servers and containers; I usually get the applications with it and install them in the
<code>/usr/local/bin</code> folder.</p>
<p>For this post I’ve created a simple script that checks if the tools I’ll be using are available and installs them on the
<code>$HOME/.arkade/bin</code> folder if missing (I’m assuming that <code>docker</code> is already available, as it is not installable with
<code>arkade</code>):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="c"># TOOLS LIST</span>
<span class="nv">ARKADE_APPS</span><span class="o">=</span><span class="s2">&#34;argocd argocd-autopilot k3d kubectl sops tofu&#34;</span>

<span class="c"># Add the arkade binary directory to the path if missing</span>
<span class="k">case</span> <span class="s2">&#34;:</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:&#34;</span> <span class="k">in</span>
  <span class="k">*</span>:<span class="s2">&#34;</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span>:<span class="k">*</span><span class="p">)</span> <span class="p">;;</span>
  <span class="k">*</span><span class="p">)</span> <span class="nb">export </span><span class="nv">PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span> <span class="p">;;</span>
<span class="k">esac</span>

<span class="c"># Install or update arkade</span>
<span class="k">if </span><span class="nb">command</span> <span class="nt">-v</span> arkade <span class="o">&gt;</span>/dev/null<span class="p">;</span> <span class="k">then
  </span><span class="nb">echo</span> <span class="s2">&#34;Trying to update the arkade application&#34;</span>
  <span class="nb">sudo </span>arkade update
<span class="k">else
  </span><span class="nb">echo</span> <span class="s2">&#34;Installing the arkade application&#34;</span>
  curl <span class="nt">-sLS</span> https://get.arkade.dev | <span class="nb">sudo </span>sh
<span class="k">fi

</span><span class="nb">echo</span> <span class="s2">&#34;&#34;</span>
<span class="nb">echo</span> <span class="s2">&#34;Installing tools with arkade&#34;</span>
<span class="nb">echo</span> <span class="s2">&#34;&#34;</span>
<span class="k">for </span>app <span class="k">in</span> <span class="nv">$ARKADE_APPS</span><span class="p">;</span> <span class="k">do
  </span><span class="nv">app_path</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">command</span> <span class="nt">-v</span> <span class="nv">$app</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">true
  </span><span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$app_path</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;The application &#39;</span><span class="nv">$app</span><span class="s2">&#39; already available on &#39;</span><span class="nv">$app_path</span><span class="s2">&#39;&#34;</span>
  <span class="k">else
    </span>arkade get <span class="s2">&#34;</span><span class="nv">$app</span><span class="s2">&#34;</span>
  <span class="k">fi
done

</span><span class="nb">cat</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">

Add the ~/.arkade/bin directory to your PATH if tools have been installed there

EOF</span></code></pre></div>
<p>The rest of the scripts will add the binary directory to the PATH if missing, to make sure things work if something was
installed there.</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>Besides downloading applications, the <code>arkade</code> command can also install them using <code>helm</code> or YAML manifests,
which can be useful for quick tests, but I prefer to manage the deployments myself, so I have not used that
part.</p></aside></section>
<section class="doc-section level-1"><h2 id="_creating_a_k3d_cluster_with_opentofu">Creating a k3d cluster with <code>opentofu</code></h2><p>Although using <code>k3d</code> directly would be a good choice for creating the cluster, I’m using <code>tofu</code> to do it because
that is probably the tool we would use if we were working with cloud platforms like AWS or Google.</p>
<p>The <code>main.tf</code> file is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="terraform"><span class="k">terraform</span> <span class="p">{</span>
  <span class="nx">required_providers</span> <span class="p">{</span>
    <span class="nx">k3d</span> <span class="o">=</span> <span class="p">{</span>
      <span class="nx">source</span>  <span class="o">=</span> <span class="s2">&#34;moio/k3d&#34;</span>
      <span class="nx">version</span> <span class="o">=</span> <span class="s2">&#34;0.0.12&#34;</span>
    <span class="p">}</span>
    <span class="nx">sops</span> <span class="o">=</span> <span class="p">{</span>
      <span class="nx">source</span> <span class="o">=</span> <span class="s2">&#34;carlpett/sops&#34;</span>
      <span class="nx">version</span> <span class="o">=</span> <span class="s2">&#34;1.2.0&#34;</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>

<span class="k">data</span> <span class="s2">&#34;sops_file&#34;</span> <span class="s2">&#34;secrets&#34;</span> <span class="p">{</span>
    <span class="nx">source_file</span> <span class="o">=</span> <span class="s2">&#34;secrets.yaml&#34;</span>
<span class="p">}</span>

<span class="k">resource</span> <span class="s2">&#34;k3d_cluster&#34;</span> <span class="s2">&#34;argocd_cluster&#34;</span> <span class="p">{</span>
  <span class="nx">name</span>    <span class="o">=</span> <span class="s2">&#34;argocd&#34;</span>
  <span class="nx">servers</span> <span class="o">=</span> <span class="mi">1</span>
  <span class="nx">agents</span>  <span class="o">=</span> <span class="mi">2</span>

  <span class="nx">image</span>   <span class="o">=</span> <span class="s2">&#34;rancher/k3s:v1.31.5-k3s1&#34;</span>
  <span class="nx">network</span> <span class="o">=</span> <span class="s2">&#34;argocd&#34;</span>
  <span class="nx">token</span>   <span class="o">=</span> <span class="k">data</span><span class="err">.</span><span class="nx">sops_file</span><span class="p">.</span><span class="nx">secrets</span><span class="p">.</span><span class="k">data</span><span class="p">[</span><span class="s2">&#34;token&#34;</span><span class="p">]</span>

  <span class="nx">port</span> <span class="p">{</span>
    <span class="nx">host_port</span>      <span class="o">=</span> <span class="mi">8443</span>
    <span class="nx">container_port</span> <span class="o">=</span> <span class="mi">443</span>
    <span class="nx">node_filters</span> <span class="o">=</span> <span class="p">[</span>
      <span class="s2">&#34;loadbalancer&#34;</span><span class="p">,</span>
    <span class="p">]</span>
  <span class="p">}</span>

  <span class="nx">k3d</span> <span class="p">{</span>
    <span class="nx">disable_load_balancer</span>     <span class="o">=</span> <span class="kc">false</span>
    <span class="nx">disable_image_volume</span>      <span class="o">=</span> <span class="kc">false</span>
  <span class="p">}</span>

  <span class="nx">kubeconfig</span> <span class="p">{</span>
    <span class="nx">update_default_kubeconfig</span> <span class="o">=</span> <span class="kc">true</span>
    <span class="nx">switch_current_context</span>    <span class="o">=</span> <span class="kc">true</span>
  <span class="p">}</span>

  <span class="nx">runtime</span> <span class="p">{</span>
    <span class="nx">gpu_request</span> <span class="o">=</span> <span class="s2">&#34;all&#34;</span>
  <span class="p">}</span>
<span class="p">}</span></code></pre></div>
<p>The <code>k3d</code> configuration is quite simple: as I plan to use the default <code>traefik</code> ingress controller with TLS I publish
the <code>443</code> port on the host’s <code>8443</code> port; I’ll explain how I add a valid certificate in the next step.</p>
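<p>Once the changes are applied we can check that the kubeconfig was updated and the context switched (k3d prefixes context names with <code>k3d-</code>):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console">❯ kubectl config current-context
k3d-argocd</code></pre></div>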
<p>I’ve prepared the following script to <em>initialize</em> and <em>apply</em> the changes:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># VARIABLES</span>
<span class="c"># Default token for the argocd cluster</span>
<span class="nv">K3D_CLUSTER_TOKEN</span><span class="o">=</span><span class="s2">&#34;argocdToken&#34;</span>
<span class="c"># Relative PATH to install the k3d cluster using terraform</span>
<span class="nv">K3D_TF_RELPATH</span><span class="o">=</span><span class="s2">&#34;k3d-tf&#34;</span>
<span class="c"># Secrets yaml file</span>
<span class="nv">SECRETS_YAML</span><span class="o">=</span><span class="s2">&#34;secrets.yaml&#34;</span>
<span class="c"># Relative PATH to the workdir from the script directory</span>
<span class="nv">WORK_DIR_RELPATH</span><span class="o">=</span><span class="s2">&#34;..&#34;</span>

<span class="c"># Compute WORKDIR</span>
<span class="nv">SCRIPT</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">readlink</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$0</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">SCRIPT_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">dirname</span> <span class="s2">&#34;</span><span class="nv">$SCRIPT</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">WORK_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">readlink</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$SCRIPT_DIR</span><span class="s2">/</span><span class="nv">$WORK_DIR_RELPATH</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>

<span class="c"># Add the arkade binary directory to the PATH if missing</span>
<span class="k">case</span> <span class="s2">&#34;:</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:&#34;</span> <span class="k">in</span>
  <span class="k">*</span>:<span class="s2">&#34;</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span>:<span class="k">*</span><span class="p">)</span> <span class="p">;;</span>
  <span class="k">*</span><span class="p">)</span> <span class="nb">export </span><span class="nv">PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span> <span class="p">;;</span>
<span class="k">esac</span>

<span class="c"># Go to the k3d-tf dir</span>
<span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$WORK_DIR</span><span class="s2">/</span><span class="nv">$K3D_TF_RELPATH</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">exit </span>1

<span class="c"># Create secrets.yaml file and encode it with sops if missing</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$SECRETS_YAML</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">echo</span> <span class="s2">&#34;token: </span><span class="nv">$K3D_CLUSTER_TOKEN</span><span class="s2">&#34;</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$SECRETS_YAML</span><span class="s2">&#34;</span>
  sops encrypt <span class="nt">-i</span> <span class="s2">&#34;</span><span class="nv">$SECRETS_YAML</span><span class="s2">&#34;</span>
<span class="k">fi</span>

<span class="c"># Initialize terraform</span>
tofu init

<span class="c"># Apply the configuration</span>
tofu apply</code></pre></div>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>The <code>main.tf</code> file requires a <code>secrets.yaml</code> file encrypted with <a href="https://getsops.io/" target="_blank" rel="noopener">sops</a>, which is why the script creates
it with a generic value if it can’t be found.</p>
<p>The script also assumes that there is a <code>.sops.yaml</code> file in the user’s HOME directory prepared to encrypt secrets
(for development I use an <a href="https://age-encryption.org/" target="_blank" rel="noopener">age</a> key as described in the
<a href="https://getsops.io/docs/#encrypting-using-age" target="_blank" rel="noopener">sops documentation</a>, but you can use GPG instead).</p></aside>
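<p>As an illustration (this file is not part of the repository and the recipient value is a placeholder), a minimal <code>~/.sops.yaml</code> using an <code>age</code> key could look like this:</p>

```yaml
# ~/.sops.yaml -- rules applied by sops when encrypting new files
creation_rules:
  # Encrypt everything for a single age recipient (replace the
  # placeholder with the public key of your own age identity)
  - age: "age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```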
<section class="doc-section level-2"><h3 id="_adding_a_wildcard_certificate_to_the_k3d_ingress">Adding a wildcard certificate to the <code>k3d</code> ingress</h3><p>As an optional step, after creating the <code>k3d</code> cluster I’m going to add a default wildcard certificate for the <code>traefik</code>
ingress server, so everything can be used over HTTPS without certificate issues.</p>
<p>As I manage my own DNS domain I’ve created the <code>lo.mixinet.net</code> and <code>*.lo.mixinet.net</code> DNS entries on my public and
private DNS servers (both return <code>127.0.0.1</code> and <code>::1</code>) and I’ve created a TLS certificate for both entries using
<a href="https://letsencrypt.org/" target="_blank" rel="noopener">Let’s Encrypt</a> with <a href="https://certbot.eff.org/" target="_blank" rel="noopener">Certbot</a>.</p>
<p>The certificate is renewed automatically on one of my servers; when I need it I copy the contents of the
<code>fullchain.pem</code> and <code>privkey.pem</code> files from the <code>/etc/letsencrypt/live/lo.mixinet.net</code> directory on that server to the
local files <code>lo.mixinet.net.crt</code> and <code>lo.mixinet.net.key</code>.</p>
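<p>The copy can be done in whatever way is convenient; as a sketch (the <code>server</code> host alias is hypothetical and the remote user needs <code>sudo</code> rights to read the files), something like this works:</p>

```shell
# Copy the current certificate and key from the server that renews them
# ('server' is a placeholder for the real host alias)
ssh server "sudo cat /etc/letsencrypt/live/lo.mixinet.net/fullchain.pem" > lo.mixinet.net.crt
ssh server "sudo cat /etc/letsencrypt/live/lo.mixinet.net/privkey.pem" > lo.mixinet.net.key
```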
<p>After copying the files I run the following script to install or update the certificate and configure it as the default
for <code>traefik</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="c"># Script to update the traefik default TLS certificate</span>
<span class="nv">secret</span><span class="o">=</span><span class="s2">&#34;lo-mixinet-net-ingress-cert&#34;</span>
<span class="nv">cert</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">1</span><span class="k">:-</span><span class="nv">lo</span><span class="p">.mixinet.net.crt</span><span class="k">}</span><span class="s2">&#34;</span>
<span class="nv">key</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">2</span><span class="k">:-</span><span class="nv">lo</span><span class="p">.mixinet.net.key</span><span class="k">}</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$cert</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$key</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>kubectl <span class="nt">-n</span> kube-system create secret tls <span class="nv">$secret</span> <span class="se">\</span>
    <span class="nt">--key</span><span class="o">=</span><span class="nv">$key</span> <span class="se">\</span>
    <span class="nt">--cert</span><span class="o">=</span><span class="nv">$cert</span> <span class="se">\</span>
    <span class="nt">--dry-run</span><span class="o">=</span>client <span class="nt">--save-config</span> <span class="nt">-o</span> yaml  | kubectl apply <span class="nt">-f</span> -
  kubectl apply <span class="nt">-f</span> - <span class="o">&lt;&lt;</span> <span class="no">EOF</span><span class="sh">
apiVersion: traefik.io/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system

spec:
  defaultCertificate:
    secretName: </span><span class="nv">$secret</span><span class="sh">
</span><span class="no">EOF
</span><span class="k">else
  </span><span class="nb">cat</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
To add or update the traefik TLS certificate the following files are needed:

- cert: &#39;</span><span class="nv">$cert</span><span class="sh">&#39;
- key: &#39;</span><span class="nv">$key</span><span class="sh">&#39;

Note: you can pass the paths as arguments to this script.
</span><span class="no">EOF
</span><span class="k">fi</span></code></pre></div>
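<p>After running the script, the certificate actually served by the ingress can be inspected from the command line; a quick check with <code>openssl</code> (using the host and port from this setup) looks like this:</p>

```shell
# Show subject, issuer and validity dates of the certificate
# served by the k3d ingress on the mapped HTTPS port
echo | openssl s_client -connect foo.lo.mixinet.net:8443 \
  -servername foo.lo.mixinet.net 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```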
<p>Once it is installed, if I connect to <a class="bare" href="https://foo.lo.mixinet.net:8443/">https://foo.lo.mixinet.net:8443/</a> I get a 404, but the certificate is valid.</p></section></section>
<section class="doc-section level-1"><h2 id="_installing_argocd_with_argocd_autopilot">Installing <code>argocd</code> with <code>argocd-autopilot</code></h2><section class="doc-section level-2"><h3 id="_creating_a_repository_and_a_token_for_autopilot">Creating a repository and a token for <code>autopilot</code></h3><p>I’ll be using a project on my forgejo instance to manage <code>argocd</code>; the repository I’ve created is available at the URL
<a class="bare" href="https://forgejo.mixinet.net/blogops/argocd">https://forgejo.mixinet.net/blogops/argocd</a> and I’ve created a private user named <code>argocd</code> that only has write access to
that repository.</p>
<p>Logged in as the <code>argocd</code> user on forgejo, I’ve created a token with permission to read and write repositories, which I’ve
saved in my <a href="https://www.passwordstore.org/" target="_blank" rel="noopener">pass</a> password store under the <code>mixinet.net/argocd@forgejo/repository-write</code>
entry.</p></section>
<section class="doc-section level-2"><h3 id="_bootstrapping_the_installation">Bootstrapping the installation</h3><p>To bootstrap the installation I’ve used the following script (it uses the <code>GIT_REPO</code> and <code>GIT_TOKEN</code> values from the
previous section, providing defaults when they are not already set in the environment):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># VARIABLES</span>
<span class="c"># Relative PATH to the workdir from the script directory</span>
<span class="nv">WORK_DIR_RELPATH</span><span class="o">=</span><span class="s2">&#34;..&#34;</span>

<span class="c"># Compute WORKDIR</span>
<span class="nv">SCRIPT</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">readlink</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$0</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">SCRIPT_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">dirname</span> <span class="s2">&#34;</span><span class="nv">$SCRIPT</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">WORK_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">readlink</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$SCRIPT_DIR</span><span class="s2">/</span><span class="nv">$WORK_DIR_RELPATH</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>

<span class="c"># Add the arkade binary directory to the PATH if missing</span>
<span class="k">case</span> <span class="s2">&#34;:</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:&#34;</span> <span class="k">in</span>
  <span class="k">*</span>:<span class="s2">&#34;</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span>:<span class="k">*</span><span class="p">)</span> <span class="p">;;</span>
  <span class="k">*</span><span class="p">)</span> <span class="nb">export </span><span class="nv">PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">PATH</span><span class="k">}</span><span class="s2">:</span><span class="k">${</span><span class="nv">HOME</span><span class="k">}</span><span class="s2">/.arkade/bin&#34;</span> <span class="p">;;</span>
<span class="k">esac</span>

<span class="c"># Go to the working directory</span>
<span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$WORK_DIR</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">exit </span>1

<span class="c"># Set GIT variables</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$GIT_REPO</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">export </span><span class="nv">GIT_REPO</span><span class="o">=</span><span class="s2">&#34;https://forgejo.mixinet.net/blogops/argocd.git&#34;</span>
<span class="k">fi
if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$GIT_TOKEN</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nv">GIT_TOKEN</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>pass mixinet.net/argocd@forgejo/repository-write<span class="si">)</span><span class="s2">&#34;</span>
  <span class="nb">export </span>GIT_TOKEN
<span class="k">fi

</span>argocd-autopilot repo bootstrap <span class="nt">--provider</span> gitea</code></pre></div>
<p>The output of the execution is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">❯ bin/argocd-bootstrap.sh
INFO cloning repo: https://forgejo.mixinet.net/blogops/argocd.git
INFO empty repository, initializing a new one with specified remote
INFO using revision: <span class="s2">&#34;&#34;</span>, installation path: <span class="s2">&#34;&#34;</span>
INFO using context: <span class="s2">&#34;k3d-argocd&#34;</span>, namespace: <span class="s2">&#34;argocd&#34;</span>
INFO applying bootstrap manifests to cluster...
namespace/argocd created
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
secret/autopilot-secret created

INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
application.argoproj.io/autopilot-bootstrap created
INFO running argocd login to initialize argocd config
Context <span class="s1">&#39;autopilot&#39;</span> updated

INFO argocd initialized. password: XXXXXXX-XXXXXXXX
INFO run:

    kubectl port-forward <span class="nt">-n</span> argocd svc/argocd-server 8080:80</code></pre></div>
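<p>Before connecting it is a good idea to wait until the deployments are ready; standard <code>kubectl</code> commands (not part of the autopilot output above) can be used for that:</p>

```shell
# Wait for the argocd-server deployment to finish rolling out
kubectl -n argocd rollout status deployment argocd-server --timeout=300s
# Check that all the argocd pods are running
kubectl -n argocd get pods
```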
<p>Now we have <code>argocd</code> installed and running; it can be checked using the <code>port-forward</code> command and connecting to
<a class="bare" href="https://localhost:8080/">https://localhost:8080/</a> (the certificate will be wrong, but we are going to fix that in the next step).</p></section></section>
<section class="doc-section level-1"><h2 id="_updating_the_argocd_installation_in_git">Updating the <code>argocd</code> installation in <code>git</code></h2><p>Now that we have the application deployed we can clone the <code>argocd</code> repository and edit the deployment to disable TLS
for the argocd server (we are going to use TLS termination with <code>traefik</code>, which requires the server to run in insecure
mode; see the Argo CD <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30" target="_blank" rel="noopener">documentation</a>).</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ git clone ssh://git@forgejo.mixinet.net/blogops/argocd.git
❯ cd argocd
❯ edit bootstrap/argo-cd/kustomization.yaml
❯ git commit -a -m &#39;Disable TLS for the argocd-server&#39;
❯ git push</span></code></pre></div>
<p>The changes made to the <code>kustomization.yaml</code> file are the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="diff"><span class="gd">--- a/bootstrap/argo-cd/kustomization.yaml
</span><span class="gi">+++ b/bootstrap/argo-cd/kustomization.yaml
</span><span class="p">@@ -11,6 +11,11 @@</span> configMapGenerator:
         key: git_username
         name: autopilot-secret
   name: argocd-cm
<span class="gi">+  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
+- behavior: merge
+  literals:
+  - &#34;server.insecure=true&#34;
+  name: argocd-cmd-params-cm
</span> kind: Kustomization
 namespace: argocd
 resources:</code></pre></div>
<p>Once the changes are pushed, we sync the <code>argo-cd</code> application manually to make sure they are applied:</p>
<div class="image-block"><img src="/images/gitops/argo-cd-sync.png" alt="argo cd sync"/></div>
<p>As a test we can download the <code>argocd-cmd-params-cm</code> ConfigMap to make sure everything is OK:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">data</span><span class="pi">:</span>
  <span class="na">server.insecure</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">true&#34;</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">ConfigMap</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">annotations</span><span class="pi">:</span>
    <span class="na">kubectl.kubernetes.io/last-applied-configuration</span><span class="pi">:</span> <span class="pi">|</span>
      <span class="s">{&#34;apiVersion&#34;:&#34;v1&#34;,&#34;data&#34;:{&#34;server.insecure&#34;:&#34;true&#34;},&#34;kind&#34;:&#34;ConfigMap&#34;,&#34;metadata&#34;:{&#34;annotations&#34;:{},&#34;labels&#34;:{&#34;app.kubernetes.io/instance&#34;:&#34;argo-cd&#34;,&#34;app.kubernetes.io/name&#34;:&#34;argocd-cmd-params-cm&#34;,&#34;app.kubernetes.io/part-of&#34;:&#34;argocd&#34;},&#34;name&#34;:&#34;argocd-cmd-params-cm&#34;,&#34;namespace&#34;:&#34;argocd&#34;}}</span>
  <span class="na">creationTimestamp</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">2025-04-27T17:31:54Z&#34;</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/instance</span><span class="pi">:</span> <span class="s">argo-cd</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">argocd-cmd-params-cm</span>
    <span class="na">app.kubernetes.io/part-of</span><span class="pi">:</span> <span class="s">argocd</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">argocd-cmd-params-cm</span>
  <span class="na">namespace</span><span class="pi">:</span> <span class="s">argocd</span>
  <span class="na">resourceVersion</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">16731&#34;</span>
  <span class="na">uid</span><span class="pi">:</span> <span class="s">a460638f-1d82-47f6-982c-3017699d5f14</span></code></pre></div>
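<p>To reproduce the check, the ConfigMap above can be downloaded with a standard <code>kubectl</code> call:</p>

```shell
# Dump the argocd-cmd-params-cm ConfigMap as YAML
kubectl -n argocd get configmap argocd-cmd-params-cm -o yaml
```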
<p>As this simply changes the <code>ConfigMap</code>, we have to restart the <code>argocd-server</code> for it to be read again; to do that we
delete the server pods so they are re-created using the updated resource:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ kubectl delete pods -n argocd -l app.kubernetes.io/name=argocd-server</span></code></pre></div>
<p>After doing this the <code>port-forward</code> command is killed automatically; if we run it again, the connection to the
<code>argocd-server</code> has to be made using HTTP instead of HTTPS.</p>
<p>Instead of testing that, we are going to add an ingress definition so we can connect to the server using HTTPS and
gRPC at the address <code>argocd.lo.mixinet.net</code> with the wildcard TLS certificate we installed earlier.</p>
<p>To do it we edit the <code>bootstrap/argo-cd/kustomization.yaml</code> file to add the <code>ingress_route.yaml</code> file to the
deployment:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="diff"><span class="gd">--- a/bootstrap/argo-cd/kustomization.yaml
</span><span class="gi">+++ b/bootstrap/argo-cd/kustomization.yaml
</span><span class="p">@@ -20,3 +20,4 @@</span> kind: Kustomization
 namespace: argocd
 resources:
 - github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
<span class="gi">+- ingress_route.yaml</span></code></pre></div>
<p>The <code>ingress_route.yaml</code> file contents are the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">traefik.io/v1alpha1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">IngressRoute</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">argocd-server</span>
  <span class="na">namespace</span><span class="pi">:</span> <span class="s">argocd</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">entryPoints</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">websecure</span>
  <span class="na">routes</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">kind</span><span class="pi">:</span> <span class="s">Rule</span>
      <span class="na">match</span><span class="pi">:</span> <span class="s">Host(`argocd.lo.mixinet.net`)</span>
      <span class="na">priority</span><span class="pi">:</span> <span class="m">10</span>
      <span class="na">services</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">argocd-server</span>
          <span class="na">port</span><span class="pi">:</span> <span class="m">80</span>
    <span class="pi">-</span> <span class="na">kind</span><span class="pi">:</span> <span class="s">Rule</span>
      <span class="na">match</span><span class="pi">:</span> <span class="s">Host(`argocd.lo.mixinet.net`) &amp;&amp; Header(`Content-Type`, `application/grpc`)</span>
      <span class="na">priority</span><span class="pi">:</span> <span class="m">11</span>
      <span class="na">services</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">argocd-server</span>
          <span class="na">port</span><span class="pi">:</span> <span class="m">80</span>
          <span class="na">scheme</span><span class="pi">:</span> <span class="s">h2c</span>
  <span class="na">tls</span><span class="pi">:</span>
    <span class="na">certResolver</span><span class="pi">:</span> <span class="s">default</span></code></pre></div>
<p>After pushing the changes and waiting a little bit they are applied and we can access the server using HTTPS and
gRPC; the former can be tested from a browser and the latter using the command line interface:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">❯ argocd --grpc-web login argocd.lo.mixinet.net:8443
Username: admin
Password:
&#39;admin:login&#39; logged in successfully
Context &#39;argocd.lo.mixinet.net:8443&#39; updated
❯ argocd app list -o name
argocd/argo-cd
argocd/autopilot-bootstrap
argocd/cluster-resources-in-cluster
argocd/root</span></code></pre></div>
<p>So things are working fine …​ and that is all for this post, folks!</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Using actions to build this site</title>
      <link>https://blogops.mixinet.net/posts/forgejo/using_actions_to_build_this_site/</link>
      <pubDate>Tue, 18 Mar 2025 20:00:00 +0100</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/forgejo/using_actions_to_build_this_site/</guid>
      <description>&lt;p&gt;As promised in my previous &lt;a href=&#34;https://blogops.mixinet.net/posts/forgejo/configuring_forgejo_actions/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;post&lt;/a&gt;, in this entry I’ll explain how I’ve set up forgejo
actions on the source repository of this site to build it using a runner, instead of building it on the public server
with a webhook to trigger the operation.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_setting_up_the_system&#34;&gt;Setting up the system&lt;/h2&gt;&lt;p&gt;The first thing I’ve done is to disable the forgejo webhook call that was used to publish the site, as I don’t want to
run it anymore.&lt;/p&gt;
&lt;aside class=&#34;admonition-block note&#34; role=&#34;note&#34;&gt;&lt;h6 class=&#34;block-title label-only&#34;&gt;&lt;span class=&#34;title-label&#34;&gt;Note: &lt;/span&gt;&lt;/h6&gt;&lt;p&gt;For now I’ve just removed the &lt;em&gt;Active&lt;/em&gt; flag from the webhook, just in case I want to use it again in the future.&lt;/p&gt;
&lt;p&gt;I’ve left the system based on the &lt;code&gt;json2file&lt;/code&gt; server running, as it does nothing if no webhook is called. If we want
to use it again it would be a good idea to disable actions first to avoid conflicts when something is pushed, but it can
be executed manually if needed and nothing will break (both systems use the same directories to publish things).&lt;/p&gt;&lt;/aside&gt;
&lt;p&gt;After that I added a new workflow to the repository that does the following things:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;build the site using my &lt;a href=&#34;https://forgejo.mixinet.net/oci/images/src/branch/main/hugo-adoc/Dockerfile&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;hugo-adoc&lt;/a&gt; image.&lt;/li&gt;&lt;li&gt;push the result to a branch that contains the generated site (we do this because the server is already configured
to work with the git repository and we can use force pushes to keep only the last version of the site, removing the
need for extra code to manage package uploads and removals).&lt;/li&gt;&lt;li&gt;use &lt;code&gt;curl&lt;/code&gt; to send a notification to an instance of the &lt;a href=&#34;https://github.com/adnanh/webhook&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;webhook&lt;/a&gt; server installed on the
remote server that triggers a script that updates the site using the git branch.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>As promised in my previous <a href="https://blogops.mixinet.net/posts/forgejo/configuring_forgejo_actions/" target="_blank" rel="noopener">post</a>, in this entry I’ll explain how I’ve set up forgejo
actions on the source repository of this site to build it using a runner, instead of building it on the public server
with a webhook to trigger the operation.</p>
<section class="doc-section level-1"><h2 id="_setting_up_the_system">Setting up the system</h2><p>The first thing I’ve done is to disable the forgejo webhook call that was used to publish the site, as I don’t want to
run it anymore.</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>For now I’ve just removed the <em>Active</em> flag from the webhook, just in case I want to use it again in the future.</p>
<p>I’ve left the system based on the <code>json2file</code> server running, as it does nothing if no webhook is called. If we want
to use it again it would be a good idea to disable actions first to avoid conflicts when something is pushed, but it can
be executed manually if needed and nothing will break (both systems use the same directories to publish things).</p></aside>
<p>After that I added a new workflow to the repository that does the following things:</p>
<div class="ulist"><ul><li>build the site using my <a href="https://forgejo.mixinet.net/oci/images/src/branch/main/hugo-adoc/Dockerfile" target="_blank" rel="noopener">hugo-adoc</a> image.</li><li>push the result to a branch that contains the generated site (we do this because the server is already configured
to work with the git repository and we can use force pushes to keep only the last version of the site, removing the
need for extra code to manage package uploads and removals).</li><li>use <code>curl</code> to send a notification to an instance of the <a href="https://github.com/adnanh/webhook" target="_blank" rel="noopener">webhook</a> server installed on the
remote server that triggers a script that updates the site using the git branch.</li></ul></div>
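<p>The notification mentioned in the last step boils down to a single <code>curl</code> call from the workflow; a sketch (the URL, hook id and token header are placeholders, the real values live in the repository secrets) could be:</p>

```shell
# Hypothetical endpoint and token; adnanh/webhook serves hooks
# under the /hooks/<id> path
curl -fsS -X POST \
  -H "X-Webhook-Token: ${WEBHOOK_TOKEN}" \
  "https://server.example.com/hooks/update-site"
```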
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>As explained in the <a href="https://blogops.mixinet.net/posts/new_blog_config/#_production_setup" target="_blank" rel="noopener">production setup</a> section of my original post, I could have used the
<code>json2file</code> server to process the notification, but I’ve decided to use the <code>webhook</code> server instead because it is
simpler to set up and allows me to show a different approach to the same problem.</p>
<p>The script that updates the site is executed by a user that has permissions to clone the repository using an ssh key.</p></aside></section>
<section class="doc-section level-1"><h2 id="_setting_up_the_webhook_service">Setting up the <code>webhook</code> service</h2><p>On the server machine we have installed and configured the <code>webhook</code> service to run a script that updates the site.</p>
<p>To install the application and set up the configuration we have used the following script:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>
<span class="nv">ARCH</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>dpkg <span class="nt">--print-architecture</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_VERSION</span><span class="o">=</span><span class="s2">&#34;2.8.2&#34;</span>
<span class="nv">DOWNLOAD_URL</span><span class="o">=</span><span class="s2">&#34;https://github.com/adnanh/webhook/releases/download&#34;</span>
<span class="nv">WEBHOOK_TGZ_URL</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$DOWNLOAD_URL</span><span class="s2">/</span><span class="nv">$WEBHOOK_VERSION</span><span class="s2">/webhook-linux-</span><span class="nv">$ARCH</span><span class="s2">.tar.gz&#34;</span>
<span class="nv">WEBHOOK_SERVICE_NAME</span><span class="o">=</span><span class="s2">&#34;webhook&#34;</span>
<span class="c"># Files</span>
<span class="nv">WEBHOOK_SERVICE_FILE</span><span class="o">=</span><span class="s2">&#34;/etc/systemd/system/</span><span class="nv">$WEBHOOK_SERVICE_NAME</span><span class="s2">.service&#34;</span>
<span class="nv">WEBHOOK_SOCKET_FILE</span><span class="o">=</span><span class="s2">&#34;/etc/systemd/system/</span><span class="nv">$WEBHOOK_SERVICE_NAME</span><span class="s2">.socket&#34;</span>
<span class="nv">WEBHOOK_TML_TEMPLATE</span><span class="o">=</span><span class="s2">&#34;/srv/blogops/action/webhook.yml.envsubst&#34;</span>
<span class="nv">WEBHOOK_YML</span><span class="o">=</span><span class="s2">&#34;/etc/webhook.yml&#34;</span>

<span class="c"># Config file values</span>
<span class="nv">WEBHOOK_USER</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-u</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_GROUP</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-g</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_LISTEN_STREAM</span><span class="o">=</span><span class="s2">&#34;172.31.31.1:4444&#34;</span>

<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>

<span class="c"># Install binary from releases (on Debian only version 2.8.0 is available, but</span>
<span class="c"># I need the 2.8.2 version to support the systemd activation mode).</span>

curl <span class="nt">-fsSL</span> <span class="nt">-o</span> <span class="s2">&#34;/tmp/webhook.tgz&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_TGZ_URL</span><span class="s2">&#34;</span>
<span class="nb">tar</span> <span class="nt">-C</span> /tmp <span class="nt">-xzf</span> /tmp/webhook.tgz
<span class="nb">sudo install</span> <span class="nt">-m</span> 755 <span class="s2">&#34;/tmp/webhook-linux-</span><span class="nv">$ARCH</span><span class="s2">/webhook&#34;</span> /usr/local/bin/webhook
<span class="nb">rm</span> <span class="nt">-rf</span> <span class="s2">&#34;/tmp/webhook-linux-</span><span class="nv">$ARCH</span><span class="s2">&#34;</span> /tmp/webhook.tgz

<span class="c"># Service file</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;cat &gt;&#39;</span><span class="nv">$WEBHOOK_SERVICE_FILE</span><span class="s2">&#39;&#34;</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
[Unit]
Description=Webhook server
[Service]
Type=exec
ExecStart=webhook -nopanic -hooks </span><span class="nv">$WEBHOOK_YML</span><span class="sh">
User=</span><span class="nv">$WEBHOOK_USER</span><span class="sh">
Group=</span><span class="nv">$WEBHOOK_GROUP</span><span class="sh">
</span><span class="no">EOF

</span><span class="c"># Socket config</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;cat &gt;&#39;</span><span class="nv">$WEBHOOK_SOCKET_FILE</span><span class="s2">&#39;&#34;</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
[Unit]
Description=Webhook server socket
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to the IP and port you want to listen on
ListenStream=</span><span class="nv">$WEBHOOK_LISTEN_STREAM</span><span class="sh">
[Install]
WantedBy=multi-user.target
</span><span class="no">EOF

</span><span class="c"># Config file</span>
<span class="nv">BLOGOPS_TOKEN</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>uuid<span class="si">)</span><span class="s2">&#34;</span> <span class="se">\</span>
  envsubst &lt;<span class="s2">&#34;</span><span class="nv">$WEBHOOK_TML_TEMPLATE</span><span class="s2">&#34;</span> | <span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;cat &gt;</span><span class="nv">$WEBHOOK_YML</span><span class="s2">&#34;</span>
<span class="nb">sudo chmod </span>0640 <span class="s2">&#34;</span><span class="nv">$WEBHOOK_YML</span><span class="s2">&#34;</span>
<span class="nb">sudo chown </span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_USER</span><span class="s2">:</span><span class="nv">$WEBHOOK_GROUP</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_YML</span><span class="s2">&#34;</span>

<span class="c"># Restart and enable service</span>
<span class="nb">sudo </span>systemctl daemon-reload
<span class="nb">sudo </span>systemctl stop <span class="s2">&#34;</span><span class="nv">$WEBHOOK_SERVICE_NAME</span><span class="s2">.socket&#34;</span>
<span class="nb">sudo </span>systemctl start <span class="s2">&#34;</span><span class="nv">$WEBHOOK_SERVICE_NAME</span><span class="s2">.socket&#34;</span>
<span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_SERVICE_NAME</span><span class="s2">.socket&#34;</span>

<span class="c"># ----</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></div>
<p>As seen in the code, we’ve installed the application using a binary from the project releases instead of a package
because we needed the latest version of the application to use <code>systemd</code> with socket activation.</p>
<p>The configuration file template is the following one:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="pi">-</span> <span class="na">id</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">update-blogops&#34;</span>
  <span class="na">execute-command</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">/srv/blogops/action/bin/update-blogops.sh&#34;</span>
  <span class="na">command-working-directory</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">/srv/blogops&#34;</span>
  <span class="na">trigger-rule</span><span class="pi">:</span>
    <span class="na">match</span><span class="pi">:</span>
      <span class="na">type</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">value&#34;</span>
      <span class="na">value</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">$BLOGOPS_TOKEN&#34;</span>
      <span class="na">parameter</span><span class="pi">:</span>
        <span class="na">source</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">header&#34;</span>
        <span class="na">name</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">X-Blogops-Token&#34;</span></code></pre></div>
<p>The version installed as <code>/etc/webhook.yml</code> has <code>BLOGOPS_TOKEN</code> replaced by a random value that has to be exported as a secret on
the forgejo project (see later).</p>
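<p>To sanity check that the value stored as a secret matches the one deployed on the server, the token can be extracted
from the rendered file. A minimal sketch, run here against a sample file with a made-up token (on the real server you
would read <code>/etc/webhook.yml</code> instead, which requires the right permissions):</p>

```shell
# Write a sample rendered hook file; the token value here is made up, the
# real one is generated with uuid when the install script runs envsubst
cat > /tmp/webhook-sample.yml <<'EOF'
- id: "update-blogops"
  trigger-rule:
    match:
      type: "value"
      value: "11111111-2222-3333-4444-555555555555"
      parameter:
        source: "header"
        name: "X-Blogops-Token"
EOF
# Extract the value that has to be exported as the BLOGOPS_TOKEN secret
TOKEN="$(sed -n 's/^ *value: "\(.*\)"$/\1/p' /tmp/webhook-sample.yml)"
echo "$TOKEN"
```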
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>As we are going to connect to this server over a VPN we don’t need to enable TLS on the <code>webhook</code> server, but if
we wanted to we would only need to change the <code>/etc/systemd/system/webhook.service</code> file to include the <code>-secure</code>, <code>-cert</code>
and <code>-key</code> options with the right paths to the certificate and key files on the <code>ExecStart</code> line:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="ini"><span class="nn">[Service]</span>
<span class="py">ExecStart</span><span class="p">=</span><span class="s">webhook -nopanic -hooks /etc/webhook.yml </span><span class="se">\
</span>  <span class="s">-secure -cert PATH_TO_WEBHOOK_CRT -key PATH_TO_WEBHOOK_KEY</span></code></pre></div></aside>
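<p>If TLS is enabled this way the certificate and key files must exist before the service starts; for a quick test a
self-signed pair can be generated with <code>openssl</code> (the paths and the subject below are just examples, not part of the
original setup):</p>

```shell
# Generate a throwaway self-signed certificate and key to test the TLS
# options of the webhook server; example paths, replace with your own
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=webhook.example" \
  -keyout /tmp/webhook.key -out /tmp/webhook.crt
```

<p>Keep in mind that clients talking to a server with a self-signed certificate will need to skip certificate
verification (for <code>curl</code>, the <code>-k</code> flag).</p>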
<p>Once the service is running, each time the action is executed the <code>webhook</code> daemon gets a notification and runs
the following <code>update-blogops.sh</code> script to publish the updated version of the site:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>

<span class="c"># Values</span>
<span class="nv">REPO_URL</span><span class="o">=</span><span class="s2">&#34;ssh://git@forgejo.mixinet.net/mixinet/blogops.git&#34;</span>
<span class="nv">REPO_BRANCH</span><span class="o">=</span><span class="s2">&#34;html&#34;</span>
<span class="nv">REPO_DIR</span><span class="o">=</span><span class="s2">&#34;public&#34;</span>

<span class="nv">MAIL_PREFIX</span><span class="o">=</span><span class="s2">&#34;[BLOGOPS-UPDATE-ACTION] &#34;</span>
<span class="c"># Address that gets all messages, leave it empty if not wanted</span>
<span class="nv">MAIL_TO_ADDR</span><span class="o">=</span><span class="s2">&#34;blogops@mixinet.net&#34;</span>

<span class="c"># Directories</span>
<span class="nv">BASE_DIR</span><span class="o">=</span><span class="s2">&#34;/srv/blogops&#34;</span>

<span class="nv">PUBLIC_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/</span><span class="nv">$REPO_DIR</span><span class="s2">&#34;</span>
<span class="nv">NGINX_BASE_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/nginx&#34;</span>
<span class="nv">PUBLIC_HTML_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$NGINX_BASE_DIR</span><span class="s2">/public_html&#34;</span>

<span class="nv">ACTION_BASE_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/action&#34;</span>
<span class="nv">ACTION_LOG_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$ACTION_BASE_DIR</span><span class="s2">/log&#34;</span>

<span class="c"># Files</span>
<span class="nv">OUTPUT_BASENAME</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">date</span> +%Y%m%d-%H%M%S.%N<span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">ACTION_LOGFILE_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$ACTION_LOG_DIR</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.log&#34;</span>

<span class="c"># ---------</span>
<span class="c"># Functions</span>
<span class="c"># ---------</span>

action_log<span class="o">()</span> <span class="o">{</span>
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">date</span> <span class="nt">-R</span><span class="si">)</span><span class="s2"> </span><span class="nv">$*</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span>
<span class="o">}</span>

action_check_directories<span class="o">()</span> <span class="o">{</span>
  <span class="k">for </span>_d <span class="k">in</span> <span class="s2">&#34;</span><span class="nv">$ACTION_BASE_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$ACTION_LOG_DIR</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">do</span>
    <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span>
  <span class="k">done</span>
<span class="o">}</span>

action_clean_directories<span class="o">()</span> <span class="o">{</span>
  <span class="c"># Try to remove empty dirs</span>
  <span class="k">for </span>_d <span class="k">in</span> <span class="s2">&#34;</span><span class="nv">$ACTION_LOG_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$ACTION_BASE_DIR</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">do
    if</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
      </span><span class="nb">rmdir</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span> 2&gt;/dev/null <span class="o">||</span> <span class="nb">true
    </span><span class="k">fi
  done</span>
<span class="o">}</span>

mail_success<span class="o">()</span> <span class="o">{</span>
  <span class="nv">to_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$MAIL_TO_ADDR</span><span class="s2">&#34;</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">subject</span><span class="o">=</span><span class="s2">&#34;OK - updated blogops site&#34;</span>
    mail <span class="nt">-s</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">MAIL_PREFIX</span><span class="k">}${</span><span class="nv">subject</span><span class="k">}</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> &lt;<span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span>
  <span class="k">fi</span>
<span class="o">}</span>

mail_failure<span class="o">()</span> <span class="o">{</span>
  <span class="nv">to_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$MAIL_TO_ADDR</span><span class="s2">&#34;</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">subject</span><span class="o">=</span><span class="s2">&#34;KO - failed to update blogops site&#34;</span>
    mail <span class="nt">-s</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">MAIL_PREFIX</span><span class="k">}${</span><span class="nv">subject</span><span class="k">}</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> &lt;<span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span>
  <span class="k">fi
  </span><span class="nb">exit </span>1
<span class="o">}</span>

<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>

<span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;0&#34;</span>

<span class="c"># Check directories</span>
action_check_directories

<span class="c"># Go to the base directory</span>
<span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">&#34;</span>

<span class="c"># Remove the old build dir if present</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">rm</span> <span class="nt">-rf</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span>
<span class="k">fi</span>

<span class="c"># Update the repository checkout</span>
action_log <span class="s2">&#34;Updating the repository checkout&#34;</span>
git fetch <span class="nt">--all</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>action_log <span class="s2">&#34;Failed to update the repository checkout&#34;</span>
  mail_failure
<span class="k">fi</span>

<span class="c"># Get it from the repo branch &amp; extract it</span>
action_log <span class="s2">&#34;Downloading and extracting last site version using &#39;git archive&#39;&#34;</span>
git archive <span class="nt">--remote</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$REPO_URL</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$REPO_BRANCH</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$REPO_DIR</span><span class="s2">&#34;</span> <span class="se">\</span>
  | <span class="nb">tar </span>xf - <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>

<span class="c"># Fail if public dir was missing</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>action_log <span class="s2">&#34;Failed to download or extract site&#34;</span>
  mail_failure
<span class="k">fi</span>

<span class="c"># Remove old public_html copies</span>
action_log <span class="s1">&#39;Removing old site versions, if present&#39;</span>
find <span class="s2">&#34;</span><span class="nv">$NGINX_BASE_DIR</span><span class="s2">&#34;</span> <span class="nt">-mindepth</span> 1 <span class="nt">-maxdepth</span> 1 <span class="nt">-name</span> <span class="s1">&#39;public_html-*&#39;</span> <span class="nt">-type</span> d <span class="se">\</span>
  <span class="nt">-exec</span> <span class="nb">rm</span> <span class="nt">-rf</span> <span class="o">{}</span> <span class="se">\;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>action_log <span class="s2">&#34;Removal of old site versions failed&#34;</span>
  mail_failure
<span class="k">fi</span>
<span class="c"># Switch site directory</span>
<span class="nv">TS</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">date</span> +%Y%m%d-%H%M%S<span class="si">)</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>action_log <span class="s2">&#34;Moving &#39;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#39; to &#39;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">-</span><span class="nv">$TS</span><span class="s2">&#39;&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">-</span><span class="nv">$TS</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span>
    <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">fi
if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>action_log <span class="s2">&#34;Moving &#39;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#39; to &#39;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#39;&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$ACTION_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span>
    <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">fi
if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>action_log <span class="s2">&#34;Site switch failed&#34;</span>
  mail_failure
<span class="k">else
  </span>action_log <span class="s2">&#34;Site updated successfully&#34;</span>
  mail_success
<span class="k">fi</span>

<span class="c"># ----</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></div></section>
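<p>The final part of the script is worth highlighting: the live <code>public_html</code> directory is renamed with a timestamp
suffix and the freshly extracted <code>public</code> directory is moved into its place, so the web server never serves a
half-extracted tree. The idea can be tried locally; this sketch uses illustrative paths under <code>/tmp</code>, not the real
<code>/srv/blogops</code> layout:</p>

```shell
# Dry run of the directory switch performed by update-blogops.sh
BASE="/tmp/switch-demo"
rm -rf "$BASE"
mkdir -p "$BASE/public" "$BASE/nginx/public_html"
echo "new version" > "$BASE/public/index.html"
echo "old version" > "$BASE/nginx/public_html/index.html"
TS="$(date +%Y%m%d-%H%M%S)"
# Move the live directory out of the way, keeping it as a timestamped copy
mv "$BASE/nginx/public_html" "$BASE/nginx/public_html-$TS"
# Promote the new build to be the live directory
mv "$BASE/public" "$BASE/nginx/public_html"
cat "$BASE/nginx/public_html/index.html"   # prints "new version"
```

<p>The timestamped copies are the ones removed by the <code>find</code> command at the beginning of the script the next time it
runs.</p>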
<section class="doc-section level-1"><h2 id="_the_hugo_adoc_workflow">The <code>hugo-adoc</code> workflow</h2><p>The workflow is defined in the <code>.forgejo/workflows/hugo-adoc.yml</code> file and looks like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">name</span><span class="pi">:</span> <span class="s">hugo-adoc</span>

<span class="c1"># Run this job on push events to the main branch</span>
<span class="na">on</span><span class="pi">:</span>
  <span class="na">push</span><span class="pi">:</span>
    <span class="na">branches</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">main&#39;</span>

<span class="na">jobs</span><span class="pi">:</span>
  <span class="na">build-and-push</span><span class="pi">:</span>
    <span class="na">if</span><span class="pi">:</span> <span class="s">${{ vars.BLOGOPS_WEBHOOK_URL != &#39;&#39; &amp;&amp; secrets.BLOGOPS_TOKEN != &#39;&#39; }}</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">docker</span>
    <span class="na">container</span><span class="pi">:</span>
      <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/hugo-adoc:latest</span>
    <span class="c1"># Allow the job to write to the repository (not really needed on forgejo)</span>
    <span class="na">permissions</span><span class="pi">:</span>
      <span class="na">contents</span><span class="pi">:</span> <span class="s">write</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout the repo</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">submodules</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">true&#39;</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Build the site</span>
        <span class="na">shell</span><span class="pi">:</span> <span class="s">sh</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">rm -rf public</span>
          <span class="s">hugo</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Push compiled site to html branch</span>
        <span class="na">shell</span><span class="pi">:</span> <span class="s">sh</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s"># Set the git user</span>
          <span class="s">git config --global user.email &#34;blogops@mixinet.net&#34;</span>
          <span class="s">git config --global user.name &#34;BlogOps&#34;</span>
          <span class="s"># Create a new orphan branch called html (it was not pulled by the</span>
          <span class="s"># checkout step)</span>
          <span class="s">git switch --orphan html</span>
          <span class="s"># Add the public directory to the branch</span>
          <span class="s">git add public</span>
          <span class="s"># Commit the changes</span>
          <span class="s">git commit --quiet -m &#34;Updated site @ $(date -R)&#34; public</span>
          <span class="s"># Push the changes to the html branch</span>
          <span class="s">git push origin html --force</span>
          <span class="s"># Switch back to the main branch</span>
          <span class="s">git switch main</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Call the blogops update webhook endpoint</span>
        <span class="na">shell</span><span class="pi">:</span> <span class="s">sh</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">HEADER=&#34;X-Blogops-Token: ${{ secrets.BLOGOPS_TOKEN }}&#34;</span>
          <span class="s">curl --fail -k -H &#34;$HEADER&#34; ${{ vars.BLOGOPS_WEBHOOK_URL }}</span></code></pre></div>
<p>The only relevant thing is that we have to add the <code>BLOGOPS_TOKEN</code> secret to the project (its value is the one
included in the <code>/etc/webhook.yml</code> file created when installing the <code>webhook</code> service) and the <code>BLOGOPS_WEBHOOK_URL</code>
project variable (its value is the URL of the <code>webhook</code> server, in my case
<code><a class="bare" href="http://172.31.31.1:4444/hooks/update-blogops">http://172.31.31.1:4444/hooks/update-blogops</a></code>); note that the job includes the <code>-k</code> flag on the <code>curl</code> command just in
case I end up using TLS on the <code>webhook</code> server in the future, as discussed previously.</p></section>
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p>Now that I have forgejo actions on my server I no longer need to build the site on the public server as I did initially,
which is a good thing given that the server is a small OVH VPS that only runs a couple of containers and a web server
directly on the host.</p>
<p>I’m still using a notification system to make the server run a script to update the site because that way the forgejo
server does not need shell access to the remote machine, only access to the <code>webhook</code> server, which, IMHO, is a more
secure setup.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Configuring forgejo actions</title>
      <link>https://blogops.mixinet.net/posts/forgejo/configuring_forgejo_actions/</link>
      <pubDate>Mon, 17 Mar 2025 20:00:00 +0100</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/forgejo/configuring_forgejo_actions/</guid>
      <description>&lt;p&gt;Last week I decided I wanted to try out &lt;a href=&#34;https://forgejo.org/docs/latest/admin/actions/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;forgejo actions&lt;/a&gt; to build this blog instead of using
&lt;em&gt;webhooks&lt;/em&gt;, so I looked at the documentation and started playing with it until I had it working as I wanted.&lt;/p&gt;
&lt;p&gt;This post describes how I’ve installed and configured a &lt;a href=&#34;https://forgejo.org/docs/latest/admin/runner-installation/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;forgejo runner&lt;/a&gt;, how I’ve added an
&lt;a href=&#34;https://forgejo.mixinet.net/oci/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;oci&lt;/a&gt; organization to my instance to build, publish and mirror container images and added a couple of
additional organizations (&lt;a href=&#34;https://forgejo.mixinet.net/actions/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;actions&lt;/a&gt; and &lt;a href=&#34;https://forgejo.mixinet.net/docker/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;docker&lt;/a&gt; for now) to mirror interesting
actions.&lt;/p&gt;
&lt;p&gt;The changes made to build the site using actions will be documented on a separate post, as I’ll be using this entry to
test the new setup on the blog project.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_installing_the_runner&#34;&gt;Installing the runner&lt;/h2&gt;&lt;p&gt;The first thing I did was install a runner on my server; I decided to use the
&lt;a href=&#34;https://forgejo.org/docs/latest/admin/runner-installation/#oci-image-installation&#34;&gt;OCI image installation method&lt;/a&gt;, as it seemed to be the easiest and fastest
one.&lt;/p&gt;
&lt;p&gt;The commands I’ve used to set up the runner are the following:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;shell&#34;&gt;&lt;span class=&#34;nv&#34;&gt;$ &lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;cd&lt;/span&gt; /srv
&lt;span class=&#34;nv&#34;&gt;$ &lt;/span&gt;git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
&lt;span class=&#34;nv&#34;&gt;$ &lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;cd &lt;/span&gt;forgejo-runner
&lt;span class=&#34;nv&#34;&gt;$ &lt;/span&gt;sh ./bin/setup-runner.sh&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>Last week I decided I wanted to try out <a href="https://forgejo.org/docs/latest/admin/actions/" target="_blank" rel="noopener">forgejo actions</a> to build this blog instead of using
<em>webhooks</em>, so I looked at the documentation and started playing with it until I had it working as I wanted.</p>
<p>This post describes how I’ve installed and configured a <a href="https://forgejo.org/docs/latest/admin/runner-installation/" target="_blank" rel="noopener">forgejo runner</a>, how I’ve added an
<a href="https://forgejo.mixinet.net/oci/" target="_blank" rel="noopener">oci</a> organization to my instance to build, publish and mirror container images and added a couple of
additional organizations (<a href="https://forgejo.mixinet.net/actions/" target="_blank" rel="noopener">actions</a> and <a href="https://forgejo.mixinet.net/docker/" target="_blank" rel="noopener">docker</a> for now) to mirror interesting
actions.</p>
<p>The changes made to build the site using actions will be documented in a separate post, as I’ll be using this entry to
test the new setup on the blog project.</p>
<section class="doc-section level-1"><h2 id="_installing_the_runner">Installing the runner</h2><p>The first thing I did was install a runner on my server. I decided to use the
<a href="https://forgejo.org/docs/latest/admin/runner-installation/#oci-image-installation">OCI image installation method</a>, as it seemed to be the easiest and fastest
option.</p>
<p>The commands I’ve used to set up the runner are the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">$ </span><span class="nb">cd</span> /srv
<span class="nv">$ </span>git clone https://forgejo.mixinet.net/blogops/forgejo-runner.git
<span class="nv">$ </span><span class="nb">cd </span>forgejo-runner
<span class="nv">$ </span>sh ./bin/setup-runner.sh</code></pre></div>
<p>The <code>setup-runner.sh</code> script does multiple things:</p>
<div class="ulist"><ul><li>create a <code>forgejo-runner</code> user and group</li><li>create the necessary directories for the runner</li><li>create a <code>.runner</code> file with a predefined secret and the docker label</li></ul></div>
<p>The <code>setup-runner.sh</code> code is available <a href="https://forgejo.mixinet.net/blogops/forgejo-runner//src/branch/main/bin/setup-runner.sh" target="_blank" rel="noopener">here</a>.</p>
<p>After running the script, the runner has to be registered with the forgejo server; this can be done using the following
command:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">$ </span>forgejo forgejo-cli actions register <span class="nt">--name</span> <span class="s2">&#34;</span><span class="nv">$RUNNER_NAME</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="nt">--secret</span> <span class="s2">&#34;</span><span class="nv">$FORGEJO_SECRET</span><span class="s2">&#34;</span></code></pre></div>
<p>The <code>RUNNER_NAME</code> variable is defined in the <code>setup-runner.sh</code> script and the <code>FORGEJO_SECRET</code> must match the value used
in the <code>.runner</code> file.</p>
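<p>The shared secret is a 40-character hexadecimal string; as a sketch (assuming the script does something equivalent to this), a valid value can be generated with <code>openssl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Generate a 40-character hexadecimal secret suitable for the
# runner registration (20 random bytes, hex encoded)
FORGEJO_SECRET="$(openssl rand -hex 20)"
echo "$FORGEJO_SECRET"</code></pre></div>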
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>The secret was pre-created by the <code>setup-runner.sh</code> script using <code>openssl</code>, but the runner can also be created using the
forgejo server web interface; in that case the <code>setup-runner.sh</code> script would have to be modified to use the secret
provided by the web interface.</p></aside></section>
<section class="doc-section level-1"><h2 id="_starting_it_with_docker_compose">Starting it with <code>docker-compose</code></h2><p>To launch the runner I’m going to use a <code>docker-compose.yml</code> file that starts two containers: a docker-in-docker
(<code>dind</code>) service to run the containers used by the workflow jobs and another one that runs the <code>forgejo-runner</code> itself.</p>
<p>The initial version used a TCP port to communicate with the <code>dockerd</code> server from the runner, but when I tried to build
images from a workflow I noticed that the containers launched by the <code>runner</code> would not be able to execute
another <code>dockerd</code> inside the <code>dind</code> one and, even if they could, it would be computationally expensive.</p>
<p>To avoid the issue I modified the <code>dind</code> service to use a unix socket on a shared volume; the <code>runner</code> service uses that
socket to communicate with the daemon, and the volume is re-shared with the job containers so they can also use the
<code>dockerd</code> server to build images.</p>
<section class="admonition-block warning" role="doc-notice"><h6 class="block-title label-only"><span class="title-label">Warning: </span></h6><p>Sharing the docker server that runs the jobs with the jobs themselves has security implications, but this instance is
for a home server where I am the only user, so I am not worried about it, and this way I can save some resources (in
fact, I could use the host docker server directly instead of using a <code>dind</code> service, but in case I want to run
other containers on the host I prefer to keep the one used for the runner isolated from it).</p>
<p>For those concerned about sharing the same server, an alternative would be to launch a second <code>dockerd</code> only for the jobs
(i.e. <code>actions-dind</code>) using the same approach (the volume with its socket would have to be shared with the <code>runner</code>
service so it can be re-shared, but the <code>runner</code> itself does not need to use it).</p></section>
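<p>As a sketch of that alternative (the service name and paths here are hypothetical), the extra service in the <code>docker-compose.yaml</code> file could look like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml">  # Second dockerd used only by the job containers; its socket lives
  # on a dedicated volume that the runner re-shares with them
  actions-dind:
    image: docker:dind
    container_name: 'actions-dind'
    privileged: 'true'
    command: ['dockerd', '-H', 'unix:///actions-dind/docker.sock', '-G', '$RUNNER_GID']
    restart: 'unless-stopped'
    volumes:
      - ./actions-dind:/actions-dind</code></pre></div>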
<p>The final <code>docker-compose.yaml</code> file is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">services</span><span class="pi">:</span>
  <span class="na">dind</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">docker:dind</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">dind&#39;</span>
    <span class="na">privileged</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">true&#39;</span>
    <span class="na">command</span><span class="pi">:</span> <span class="pi">[</span><span class="s1">&#39;</span><span class="s">dockerd&#39;</span><span class="pi">,</span> <span class="s1">&#39;</span><span class="s">-H&#39;</span><span class="pi">,</span> <span class="s1">&#39;</span><span class="s">unix:///dind/docker.sock&#39;</span><span class="pi">,</span> <span class="s1">&#39;</span><span class="s">-G&#39;</span><span class="pi">,</span> <span class="s1">&#39;</span><span class="s">$RUNNER_GID&#39;</span><span class="pi">]</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">unless-stopped&#39;</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">./dind:/dind</span>
  <span class="na">runner</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">data.forgejo.org/forgejo/runner:6.2.2&#39;</span>
    <span class="na">links</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">dind</span>
    <span class="na">depends_on</span><span class="pi">:</span>
      <span class="na">dind</span><span class="pi">:</span>
        <span class="na">condition</span><span class="pi">:</span> <span class="s">service_started</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">runner&#39;</span>
    <span class="na">environment</span><span class="pi">:</span>
      <span class="na">DOCKER_HOST</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">unix:///dind/docker.sock&#39;</span>
    <span class="na">user</span><span class="pi">:</span> <span class="s">$RUNNER_UID:$RUNNER_GID</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">./config.yaml:/config.yaml</span>
      <span class="pi">-</span> <span class="s">./data:/data</span>
      <span class="pi">-</span> <span class="s">./dind:/dind</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">unless-stopped&#39;</span>
    <span class="na">command</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">/bin/sh</span><span class="nv"> </span><span class="s">-c</span><span class="nv"> </span><span class="s">&#34;sleep</span><span class="nv"> </span><span class="s">5;</span><span class="nv"> </span><span class="s">forgejo-runner</span><span class="nv"> </span><span class="s">daemon</span><span class="nv"> </span><span class="s">-c</span><span class="nv"> </span><span class="s">/config.yaml&#34;&#39;</span></code></pre></div>
<p>There are multiple things to comment about this file:</p>
<div class="olist arabic"><ol class="arabic"><li>The <code>dockerd</code> server is started with the <code>-H unix:///dind/docker.sock</code> flag to use a unix socket to communicate
with the daemon instead of a TCP port (as said, it is faster and allows us to share the socket with the
containers started by the runner).</li><li>We run the <code>dockerd</code> daemon with the <code>RUNNER_GID</code> group so the runner can communicate with it (the socket
is created with that group, which is the same one used by the runner).</li><li>The runner container mounts three volumes: the <code>data</code> directory, the <code>dind</code> folder where docker creates the unix
socket, and a <code>config.yaml</code> file we use to change the default runner configuration.</li></ol></div>
<p>The <code>config.yaml</code> file was originally generated using the <code>forgejo-runner</code> tool itself:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">$ </span>docker run <span class="nt">--rm</span> data.forgejo.org/forgejo/runner:6.2.2 <span class="se">\</span>
    forgejo-runner generate-config <span class="o">&gt;</span> config.yaml</code></pre></div>
<p>The changes to it are minimal: the runner <code>capacity</code> has been increased to <code>2</code> (which allows it to run two jobs at the
same time) and the <code>/dind/docker.sock</code> value has been added to the <code>valid_volumes</code> key to allow the containers launched
by the runner to mount it when needed; the diff against the default version is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="diff"><span class="p">@@ -13,7 +13,8 @@</span>
   # Where to store the registration result.
   file: .runner
   # Execute how many tasks concurrently at the same time.
<span class="gd">-  capacity: 1
</span><span class="gi">+  # STO: Allow 2 concurrent tasks
+  capacity: 2
</span>   # Extra environment variables to run jobs.
   envs:
     A_TEST_ENV_NAME_1: a_test_env_value_1
<span class="p">@@ -87,7 +88,9 @@</span>
   # If you want to allow any volume, please use the following configuration:
   # valid_volumes:
   #   - &#39;**&#39;
<span class="gd">-  valid_volumes: []
</span><span class="gi">+  # STO: Allow to mount the /dind/docker.sock on the containers
+  valid_volumes:
+    - /dind/docker.sock
</span>   # overrides the docker client host with the specified one.
   # If &#34;-&#34; or &#34;&#34;, an available docker host will automatically be found.
   # If &#34;automount&#34;, an available docker host will automatically be found and ...</code></pre></div>
<p>To start the runner we export the <code>RUNNER_UID</code> and <code>RUNNER_GID</code> variables and call <code>docker compose up</code> to start the
containers in the background:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">$ RUNNER_UID</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-u</span> forgejo-runner<span class="si">)</span><span class="s2">&#34;</span> <span class="nv">RUNNER_GID</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-g</span> forgejo-runner<span class="si">)</span><span class="s2">&#34;</span> <span class="se">\</span>
    docker compose up <span class="nt">-d</span></code></pre></div>
<p>If the server was configured correctly we are now able to start using actions with this runner.</p></section>
<section class="doc-section level-1"><h2 id="_preparing_the_system_to_run_things_locally">Preparing the system to run things locally</h2><p>To avoid unnecessary network traffic we are going to create multiple organizations in our forgejo instance to maintain
our own actions and container images and to mirror remote ones.</p>
<p>The rationale behind the use of mirrors is that they greatly reduce the need to connect to remote servers to download
actions and images, which is good for both performance and security.</p>
<p>In fact, we are going to build our own images for some things, installing the tools we want so we don’t need to do it
over and over again in the workflow jobs.</p>
<section class="doc-section level-2"><h3 id="_mirrored_actions">Mirrored actions</h3><p>The actions we are mirroring are in the <a href="https://forgejo.mixinet.net/actions/" target="_blank" rel="noopener">actions</a> and <a href="https://forgejo.mixinet.net/docker/" target="_blank" rel="noopener">docker</a> organizations; we have
created the following ones for now (the mirrors were created using the forgejo web interface, and we have manually
disabled all the forgejo modules except the <code>code</code> one for them):</p>
<div class="ulist"><ul><li><a href="https://forgejo.mixinet.net/actions/checkout" target="_blank" rel="noopener">actions/checkout</a>: Action for checking out a repo.</li><li><a href="https://forgejo.mixinet.net/docker/login-action" target="_blank" rel="noopener">docker/login-action</a>: Action to login against a Docker registry.</li><li><a href="https://forgejo.mixinet.net/docker/setup-buildx-action" target="_blank" rel="noopener">docker/setup-buildx-action</a>: Action to set up Docker Buildx.</li><li><a href="https://forgejo.mixinet.net/docker/build-push-action" target="_blank" rel="noopener">docker/build-push-action</a>: Action to build and push Docker images with Buildx.</li></ul></div>
<p>To use our actions by default (i.e., without needing to add the server URL in the <code>uses</code> keyword) we have added the
following section to the <code>app.ini</code> file of our forgejo server:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="ini"><span class="nn">[actions]</span>
<span class="py">ENABLED</span> <span class="p">=</span> <span class="s">true</span>
<span class="py">DEFAULT_ACTIONS_URL</span> <span class="p">=</span> <span class="s">https://forgejo.mixinet.net</span></code></pre></div></section>
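<p>With that value in place, short references in the <code>uses</code> keyword resolve against our own instance; for example, a step like the following (hypothetical snippet) downloads the action from our mirror:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml">    steps:
      # Resolved as https://forgejo.mixinet.net/actions/checkout
      # because of the DEFAULT_ACTIONS_URL value
      - uses: actions/checkout@v4</code></pre></div>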
<section class="doc-section level-2"><h3 id="_setting_up_credentials_to_push_images">Setting up credentials to push images</h3><p>To be able to push images to the <code>oci</code> organization I’ve created a <code>token</code> with <code>package:write</code> permission for my own
user, because I’m a member of the organization and I’m authorized to publish packages on it (a different user could be
created, but as I said this is for personal use, so there is no need to complicate things for now).</p>
<p>To allow the actions to use those credentials I have added a <code>secret</code> (<code>REGISTRY_PASS</code>) and a <code>variable</code>
(<code>REGISTRY_USER</code>) to the <code>oci</code> organization.</p>
<p>I’ve also logged in with my local docker client to be able to push images to the <code>oci</code> group by hand, as that is
needed for bootstrapping the system (as I’m using local images on the workflows I need to push them to the server before
running the workflows that build the images).</p>
<section class="doc-section level-2"><h3 id="_local_and_mirrored_images">Local and mirrored images</h3><p>Our images will be stored in the packages section of a new organization called <a href="https://forgejo.mixinet.net/oci/" target="_blank" rel="noopener">oci</a>; inside it we have
created two projects that use forgejo actions to keep things in shape:</p>
<div class="ulist"><ul><li><a href="https://forgejo.mixinet.net/oci/images/" target="_blank" rel="noopener">images</a>: contains the source files used to generate our own images and the actions to build, tag and
push them to the <code>oci</code> organization group.</li><li><a href="https://forgejo.mixinet.net/oci/mirrors/" target="_blank" rel="noopener">mirrors</a>: contains a configuration file for the <a href="https://regclient.org/usage/regsync/" target="_blank" rel="noopener">regsync</a> tool to mirror containers and an
action to run it.</li></ul></div>
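<p>As an illustration (the entries below are hypothetical, not the actual file), the <a href="https://regclient.org/usage/regsync/" target="_blank" rel="noopener">regsync</a> configuration is a YAML file that lists the credentials to use and the images to copy between registries:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml">creds:
  - registry: forgejo.mixinet.net
    user: '{{ env "REGISTRY_USER" }}'
    pass: '{{ env "REGISTRY_PASS" }}'
sync:
  # Copy the upstream image to our oci organization
  - source: docker.io/library/alpine:3.21
    target: forgejo.mixinet.net/oci/alpine:3.21
    type: image</code></pre></div>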
<p>In the next sections we are going to describe the actions and images we have created and mirrored in those projects.</p></section></section>
<section class="doc-section level-1"><h2 id="_the_ociimages_project">The <code>oci/images</code> project</h2><p>The <code>images</code> project is a monorepo that contains the source files for the images we are going to build and a couple of
actions.</p>
<p>The image sources are in subdirectories of the repository; to be considered an image, a folder has to contain a
<code>Dockerfile</code> that will be used to build the image.</p>
<p>The repository has two workflows:</p>
<div class="ulist"><ul><li><code>build-image-from-tag</code>: Workflow to build, tag and push an image to the <code>oci</code> organization.</li><li><code>multi-semantic-release</code>: Workflow to create tags for the images using the <code>multi-semantic-release</code> tool.</li></ul></div>
<p>As the workflows are already configured to use some of our images, we pushed some of them manually from a checkout of the
repository using the following commands:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">registry</span><span class="o">=</span><span class="s2">&#34;forgejo.mixinet.net/oci&#34;</span>
<span class="k">for </span>img <span class="k">in </span>alpine-mixinet node-mixinet multi-semantic-release<span class="p">;</span> <span class="k">do
  </span>docker build <span class="nt">-t</span> <span class="nv">$registry</span>/<span class="nv">$img</span>:1.0.0 <span class="nv">$img</span>
  docker tag <span class="nv">$registry</span>/<span class="nv">$img</span>:1.0.0 <span class="nv">$registry</span>/<span class="nv">$img</span>:latest
  docker push <span class="nv">$registry</span>/<span class="nv">$img</span>:1.0.0
  docker push <span class="nv">$registry</span>/<span class="nv">$img</span>:latest
<span class="k">done</span></code></pre></div>
<p>In the next subsections we will describe what the workflows do and show their source code.</p>
<section class="doc-section level-2"><h3 id="_build_image_from_tag_workflow"><code>build-image-from-tag</code> workflow</h3><p>This workflow uses a <code>docker</code> client to build an image from a tag on the repository with the format
<code>image-name-v[0-9].[0-9].[0-9]+</code>.</p>
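<p>The image name and version are later split out of the tag with plain POSIX parameter expansion; a minimal sketch of the extraction done by the workflow (the tag value is a hypothetical example):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Hypothetical tag following the image-name-vX.Y.Z convention
GITHUB_REF_NAME="alpine-mixinet-v1.2.3"
# Remove the longest suffix starting at '-v' to get the image name
img_name="${GITHUB_REF_NAME%%-v*}"
# Remove the longest prefix ending at '-v' to get the version
img_tag="${GITHUB_REF_NAME##*-v}"
echo "$img_name $img_tag"</code></pre></div>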
<p>As the <code>runner</code> is executed in a <code>container</code> (instead of using <code>lxc</code>) it seemed unreasonable to run another <code>dind</code>
container from it; that is why, after some tests, I decided to share the <code>dind</code> service socket with the
<code>runner</code> container and enabled the option to also mount it on the containers launched by the runner when needed (I only
do it on the <code>build-image-from-tag</code> action for now).</p>
<p>The action was configured to run on a manual trigger or when new tags with the right format were created, but when the
tag is created by <code>multi-semantic-release</code> the trigger does not fire for some reason, so now the workflow only runs on
manual triggers and the job itself checks that it was launched for a tag with the right format.</p>
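<p>A minimal sketch of that kind of check (simplified; the real condition in the workflow also verifies that the registry credentials are set):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Accept only refs that are tags and contain the '-v' marker
ref_is_buildable() {
  case "$1" in
    refs/tags/*-v*) return 0 ;;
    *) return 1 ;;
  esac
}
if ref_is_buildable "refs/tags/alpine-mixinet-v1.0.0"; then
  echo "would build"
fi
if ! ref_is_buildable "refs/heads/main"; then
  echo "would skip"
fi</code></pre></div>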
<p>The source code of the action is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">name</span><span class="pi">:</span> <span class="s">build-image-from-tag</span>
<span class="na">on</span><span class="pi">:</span>
  <span class="na">workflow_dispatch</span><span class="pi">:</span>
<span class="na">jobs</span><span class="pi">:</span>
  <span class="na">build</span><span class="pi">:</span>
    <span class="c1"># Don&#39;t build the image if the registry credentials are not set, the ref is not a tag or it doesn&#39;t contain &#39;-v&#39;</span>
    <span class="na">if</span><span class="pi">:</span> <span class="s">${{ vars.REGISTRY_USER != &#39;&#39; &amp;&amp; secrets.REGISTRY_PASS != &#39;&#39; &amp;&amp; startsWith(github.ref, &#39;refs/tags/&#39;) &amp;&amp; contains(github.ref, &#39;-v&#39;) }}</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">docker</span>
    <span class="na">container</span><span class="pi">:</span>
      <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/node-mixinet:latest</span>
      <span class="c1"># Mount the dind socket on the container at the default location</span>
      <span class="na">options</span><span class="pi">:</span> <span class="s">-v /dind/docker.sock:/var/run/docker.sock</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Extract image name and tag from git and get registry name from env</span>
        <span class="na">id</span><span class="pi">:</span> <span class="s">job_data</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">echo &#34;::set-output name=img_name::${GITHUB_REF_NAME%%-v*}&#34;</span>
          <span class="s">echo &#34;::set-output name=img_tag::${GITHUB_REF_NAME##*-v}&#34;</span>
          <span class="s">echo &#34;::set-output name=registry::$(</span>
            <span class="s">echo &#34;${{ github.server_url }}&#34; | sed -e &#39;s%https://%%&#39;</span>
          <span class="s">)&#34;</span>
          <span class="s">echo &#34;::set-output name=oci_registry_prefix::$(</span>
            <span class="s">echo &#34;${{ github.server_url }}/oci&#34; | sed -e &#39;s%https://%%&#39;</span>
          <span class="s">)&#34;</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout the repo</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Export build dir and Dockerfile</span>
        <span class="na">id</span><span class="pi">:</span> <span class="s">build_data</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">img=&#34;${{ steps.job_data.outputs.img_name }}&#34;</span>
          <span class="s">build_dir=&#34;$(pwd)/${img}&#34;</span>
          <span class="s">dockerfile=&#34;${build_dir}/Dockerfile&#34;</span>
          <span class="s">if [ -f &#34;$dockerfile&#34; ]; then</span>
            <span class="s">echo &#34;::set-output name=build_dir::$build_dir&#34;</span>
            <span class="s">echo &#34;::set-output name=dockerfile::$dockerfile&#34;</span>
          <span class="s">else</span>
            <span class="s">echo &#34;Couldn&#39;t find the Dockerfile for the &#39;$img&#39; image&#34;</span>
            <span class="s">exit 1</span>
          <span class="s">fi</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Login to the Container Registry</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">docker/login-action@v3</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">registry</span><span class="pi">:</span> <span class="s">${{ steps.job_data.outputs.registry }}</span>
          <span class="na">username</span><span class="pi">:</span> <span class="s">${{ vars.REGISTRY_USER }}</span>
          <span class="na">password</span><span class="pi">:</span> <span class="s">${{ secrets.REGISTRY_PASS }}</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Set up Docker Buildx</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">docker/setup-buildx-action@v3</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Build and Push</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">docker/build-push-action@v6</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">push</span><span class="pi">:</span> <span class="kc">true</span>
          <span class="na">tags</span><span class="pi">:</span> <span class="pi">|</span>
            <span class="s">${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:${{ steps.job_data.outputs.img_tag }}</span>
            <span class="s">${{ steps.job_data.outputs.oci_registry_prefix }}/${{ steps.job_data.outputs.img_name }}:latest</span>
          <span class="na">context</span><span class="pi">:</span> <span class="s">${{ steps.build_data.outputs.build_dir }}</span>
          <span class="na">file</span><span class="pi">:</span> <span class="s">${{ steps.build_data.outputs.dockerfile }}</span>
          <span class="na">build-args</span><span class="pi">:</span> <span class="pi">|</span>
            <span class="s">OCI_REGISTRY_PREFIX=${{ steps.job_data.outputs.oci_registry_prefix }}/</span></code></pre></div>
<p>Some notes about this code:</p>
<div class="olist arabic"><ol class="arabic"><li>The <code>if</code> condition of the <code>build</code> job is not perfect, but it is good enough to avoid wrong uses as long as nobody
uses manual tags with the wrong format and expects things to work (it checks that the <code>REGISTRY_USER</code> and
<code>REGISTRY_PASS</code> variables are set, that the <code>ref</code> is a tag and that it contains the <code>-v</code> string).</li><li>To be able to access the <code>dind</code> socket we mount it on the container using the <code>options</code> key of the <code>container</code> section
of the job (this only works if it is allowed by the runner configuration, as explained before).</li><li>We use the <code>job_data</code> step to get information about the image from the tag and the registry URL from the environment
variables; it is executed first because all that information is available without checking out the repository.</li><li>We use the <code>build_data</code> step to get the build dir and <code>Dockerfile</code> paths from the repository (right now we are
assuming fixed paths and checking that the <code>Dockerfile</code> exists, but in the future we could use a configuration file to
get them, if needed).</li><li>As we are using a docker daemon that is already running there is no need to use the
<a href="https://github.com/docker/setup-docker-action">docker/setup-docker-action</a> to install it.</li><li>On the build and push step we pass the <code>OCI_REGISTRY_PREFIX</code> build argument to the <code>Dockerfile</code> to be able to use it
in the <code>FROM</code> instruction (we are using it in our images).</li></ol></div></section>
<section class="doc-section level-2"><h3 id="_multi_semantic_release_workflow"><code>multi-semantic-release</code> workflow</h3><p>This workflow is used to run the <code>multi-semantic-release</code> tool on pushes to the <code>main</code> branch.</p>
<p>It is configured to create the configuration files on the fly (it prepares things to tag the folders that contain a
<code>Dockerfile</code> using a couple of template files available in the repository’s <code>.forgejo</code> directory) and to run the
<code>multi-semantic-release</code> tool to create tags and push them to the repository when new versions have to be built.</p>
<p>Initially we assumed that the tags pushed by <code>multi-semantic-release</code> would be enough to run the
<code>build-image-from-tag</code> workflow, but as that didn’t work we removed the rule to run the action on tag creation and added
code to trigger the action using an API call for the newly created tags (we get them from the output of the
<code>multi-semantic-release</code> execution).</p>
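<p>The tags are extracted from that output with a single <code>sed</code> expression; a sketch with an illustrative (not literal) log line:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Illustrative log line; the real multi-semantic-release output differs,
# but has the same "[...] [pkg] ... Published release X.Y.Z on ..." shape
log='[10:00:00] [alpine-mixinet] Info: Published release 1.1.0 on default channel'
# Turn it into the "pkg-vX.Y.Z" tag format used by the build workflow
tag="$(printf '%s\n' "$log" |
  sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p')"
echo "$tag"</code></pre></div>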
<p>The source code of the action is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">name</span><span class="pi">:</span> <span class="s">multi-semantic-release</span>
<span class="na">on</span><span class="pi">:</span>
  <span class="na">push</span><span class="pi">:</span>
    <span class="na">branches</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">main&#39;</span>
<span class="na">jobs</span><span class="pi">:</span>
  <span class="na">multi-semantic-release</span><span class="pi">:</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">docker</span>
    <span class="na">container</span><span class="pi">:</span>
      <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/multi-semantic-release:latest</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout the repo</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Generate multi-semantic-release configuration</span>
        <span class="na">shell</span><span class="pi">:</span> <span class="s">sh</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s"># Get the list of images to work with (the folders that have a Dockerfile)</span>
          <span class="s">images=&#34;$(for img in */Dockerfile; do dirname &#34;$img&#34;; done)&#34;</span>
          <span class="s"># Generate a values.yaml file for the main packages.json file</span>
          <span class="s">package_json_values_yaml=&#34;.package.json-values.yaml&#34;</span>
          <span class="s">echo &#34;images:&#34; &gt;&#34;$package_json_values_yaml&#34;</span>
          <span class="s">for img in $images; do</span>
            <span class="s">echo &#34; - $img&#34; &gt;&gt;&#34;$package_json_values_yaml&#34;</span>
          <span class="s">done</span>
          <span class="s">echo &#34;::group::Generated values.yaml for the project&#34;</span>
          <span class="s">cat &#34;$package_json_values_yaml&#34;</span>
          <span class="s">echo &#34;::endgroup::&#34;</span>
          <span class="s"># Generate the package.json file validating that is a good json file with jq</span>
          <span class="s">tmpl -f &#34;$package_json_values_yaml&#34; &#34;.forgejo/package.json.tmpl&#34; | jq . &gt; &#34;package.json&#34;</span>
          <span class="s">echo &#34;::group::Generated package.json for the project&#34;</span>
          <span class="s">cat &#34;package.json&#34;</span>
          <span class="s">echo &#34;::endgroup::&#34;</span>
          <span class="s"># Remove the temporary values file</span>
          <span class="s">rm -f &#34;$package_json_values_yaml&#34;</span>
          <span class="s"># Generate the package.json file for each image</span>
          <span class="s">for img in $images; do</span>
            <span class="s">tmpl -v &#34;img_name=$img&#34; -v &#34;img_path=$img&#34; &#34;.forgejo/ws-package.json.tmpl&#34; | jq . &gt; &#34;$img/package.json&#34;</span>
            <span class="s">echo &#34;::group::Generated package.json for the &#39;$img&#39; image&#34;</span>
            <span class="s">cat &#34;$img/package.json&#34;</span>
            <span class="s">echo &#34;::endgroup::&#34;</span>
          <span class="s">done</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Run multi-semantic-release</span>
        <span class="na">shell</span><span class="pi">:</span> <span class="s">sh</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">multi-semantic-release | tee .multi-semantic-release.log</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Trigger builds</span>
        <span class="na">shell</span><span class="pi">:</span> <span class="s">sh</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s"># Get the list of tags published on the previous steps</span>
          <span class="s">tags=&#34;$(</span>
            <span class="s">sed -n -e &#39;s/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p&#39; \</span>
              <span class="s">.multi-semantic-release.log</span>
          <span class="s">)&#34;</span>
          <span class="s">rm -f .multi-semantic-release.log</span>
          <span class="s">if [ &#34;$tags&#34; ]; then</span>
            <span class="s"># Prepare the url for building the images</span>
            <span class="s">workflow=&#34;build-image-from-tag.yaml&#34;</span>
            <span class="s">dispatch_url=&#34;${{ github.api_url }}/repos/${{ github.repository }}/actions/workflows/$workflow/dispatches&#34;</span>
            <span class="s">echo &#34;$tags&#34; | while read -r tag; do</span>
              <span class="s">echo &#34;Triggering build for tag &#39;$tag&#39;&#34;</span>
              <span class="s">curl \</span>
                <span class="s">-H &#34;Content-Type:application/json&#34; \</span>
                <span class="s">-H &#34;Authorization: token ${{ secrets.GITHUB_TOKEN }}&#34; \</span>
                <span class="s">-d &#34;{\&#34;ref\&#34;:\&#34;$tag\&#34;}&#34; &#34;$dispatch_url&#34;</span>
            <span class="s">done</span>
          <span class="s">fi</span></code></pre></div>
<p>Notes about this code:</p>
<div class="olist arabic"><ol class="arabic"><li>The use of the <code>tmpl</code> tool to process the <code>multi-semantic-release</code> configuration templates comes from previous
projects; in this case a simpler approach would also work (e.g. <code>envsubst</code>), but we kept <code>tmpl</code> because it keeps
things consistent and can be useful in the future if we want to do more complex things with the template files.</li><li>We use <code>tee</code> to show the output of the <code>multi-semantic-release</code> execution while also dumping it to a file.</li><li>We get the list of pushed <code>tags</code> by running <code>sed</code> against the output of the <code>multi-semantic-release</code> execution, and for
each tag found we use <code>curl</code> to call the <code>forgejo</code> API and trigger the build job; as the call is against the same
project we can use the <code>GITHUB_TOKEN</code> generated for the workflow, without creating a user token that would have to
be shared as a secret.</li></ol></div>
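<p>To see what the <code>sed</code> expression extracts and what the dispatch payload looks like, here is a minimal sketch run against a hypothetical log line (the exact format of the <code>multi-semantic-release</code> output may differ):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Hypothetical log line mimicking the multi-semantic-release output
line="[14:30:00] [alpine] INFO Published release 1.2.3 on default channel"
# Extract "name-vVERSION" the same way the workflow does
tag="$(echo "$line" |
  sed -n -e 's/^\[.*\] \[\(.*\)\] .* Published release \([0-9]\+\.[0-9]\+\.[0-9]\+\) on .*$/\1-v\2/p')"
echo "$tag"          # alpine-v1.2.3
# Build the JSON body sent to the workflow dispatch endpoint
payload="{\"ref\":\"$tag\"}"
echo "$payload"      # {"ref":"alpine-v1.2.3"}</code></pre></div>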
<p>The <code>.forgejo/package.json.tmpl</code> file is the following one:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="json"><span class="p">{</span><span class="w">
  </span><span class="nl">&#34;name&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;multi-semantic-release&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;version&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;0.0.0-semantically-released&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;private&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;multi-release&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">&#34;tagFormat&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${name}-v${version}&#34;</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">&#34;workspaces&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{{</span><span class="w"> </span><span class="err">.images</span><span class="w"> </span><span class="err">|</span><span class="w"> </span><span class="err">toJson</span><span class="w"> </span><span class="p">}}</span><span class="w">
</span><span class="p">}</span></code></pre></div>
<p>As can be seen, it only needs the list of image paths as input (the values file we generate contains both names and
paths, but it could be simplified).</p>
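<p>For illustration, if the generated values file listed the images <code>alpine</code> and <code>node-mixinet</code> (hypothetical names), the rendered <code>package.json</code> would look roughly like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="json">{
  "name": "multi-semantic-release",
  "version": "0.0.0-semantically-released",
  "private": true,
  "multi-release": {
    "tagFormat": "${name}-v${version}"
  },
  "workspaces": ["alpine", "node-mixinet"]
}</code></pre></div>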
<p>And the <code>.forgejo/ws-package.json.tmpl</code> file is the following one:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="json"><span class="p">{</span><span class="w">
  </span><span class="nl">&#34;name&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;{{ .img_name }}&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;license&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;UNLICENSED&#34;</span><span class="p">,</span><span class="w">
  </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">&#34;plugins&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
      </span><span class="p">[</span><span class="w">
        </span><span class="s2">&#34;@semantic-release/commit-analyzer&#34;</span><span class="p">,</span><span class="w">
        </span><span class="p">{</span><span class="w">
          </span><span class="nl">&#34;preset&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;conventionalcommits&#34;</span><span class="p">,</span><span class="w">
          </span><span class="nl">&#34;releaseRules&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
            </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;breaking&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;major&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
            </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;revert&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;patch&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
            </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;type&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;feat&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;minor&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
            </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;type&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;fix&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;patch&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
            </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;type&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;perf&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;patch&#34;</span><span class="w"> </span><span class="p">}</span><span class="w">
          </span><span class="p">]</span><span class="w">
        </span><span class="p">}</span><span class="w">
      </span><span class="p">],</span><span class="w">
      </span><span class="p">[</span><span class="w">
        </span><span class="s2">&#34;semantic-release-replace-plugin&#34;</span><span class="p">,</span><span class="w">
        </span><span class="p">{</span><span class="w">
          </span><span class="nl">&#34;replacements&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
            </span><span class="p">{</span><span class="w">
              </span><span class="nl">&#34;files&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="s2">&#34;{{ .img_path }}/msr.yaml&#34;</span><span class="w"> </span><span class="p">],</span><span class="w">
              </span><span class="nl">&#34;from&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;^version:.*$&#34;</span><span class="p">,</span><span class="w">
              </span><span class="nl">&#34;to&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;version: ${nextRelease.version}&#34;</span><span class="p">,</span><span class="w">
              </span><span class="nl">&#34;allowEmptyPaths&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
            </span><span class="p">}</span><span class="w">
          </span><span class="p">]</span><span class="w">
        </span><span class="p">}</span><span class="w">
      </span><span class="p">],</span><span class="w">
      </span><span class="p">[</span><span class="w">
        </span><span class="s2">&#34;@semantic-release/git&#34;</span><span class="p">,</span><span class="w">
        </span><span class="p">{</span><span class="w">
          </span><span class="nl">&#34;assets&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="s2">&#34;msr.yaml&#34;</span><span class="w"> </span><span class="p">],</span><span class="w">
          </span><span class="nl">&#34;message&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;ci(release): {{ .img_name }}-v${nextRelease.version}</span><span class="se">\n\n</span><span class="s2">${nextRelease.notes}&#34;</span><span class="w">
        </span><span class="p">}</span><span class="w">
      </span><span class="p">]</span><span class="w">
    </span><span class="p">],</span><span class="w">
    </span><span class="nl">&#34;branches&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="s2">&#34;main&#34;</span><span class="w"> </span><span class="p">]</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span></code></pre></div></section></section>
<section class="doc-section level-1"><h2 id="_the_ocimirrors_project">The <code>oci/mirrors</code> project</h2><p>The repository contains a template (<code>regsync.envsubst.yml</code>) for the configuration file we are going to use with
<code>regsync</code> to mirror images from remote registries; a workflow generates the configuration file from the template and
runs the tool.</p>
<p>The initial version of the <code>regsync.envsubst.yml</code> file is prepared to mirror <code>alpine</code> containers from version <code>3.21</code> to
<code>3.29</code> (we explicitly remove version <code>3.20</code>) and needs the <code>forgejo.mixinet.net/oci/node-mixinet:latest</code> image to run
(as explained before it was pushed manually to the server):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">version</span><span class="pi">:</span> <span class="m">1</span>
<span class="na">creds</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">registry</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">$REGISTRY&#34;</span>
    <span class="na">user</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">$REGISTRY_USER&#34;</span>
    <span class="na">pass</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">$REGISTRY_PASS&#34;</span>
<span class="na">sync</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">source</span><span class="pi">:</span> <span class="s">alpine</span>
    <span class="na">target</span><span class="pi">:</span> <span class="s">$REGISTRY/oci/alpine</span>
    <span class="na">type</span><span class="pi">:</span> <span class="s">repository</span>
    <span class="na">tags</span><span class="pi">:</span>
      <span class="na">allow</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">latest&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">3</span><span class="se">\\</span><span class="s">.2</span><span class="se">\\</span><span class="s">d+&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">3</span><span class="se">\\</span><span class="s">.2</span><span class="se">\\</span><span class="s">d+.</span><span class="se">\\</span><span class="s">d+&#34;</span>
      <span class="na">deny</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">3</span><span class="se">\\</span><span class="s">.20&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">3</span><span class="se">\\</span><span class="s">.20.</span><span class="se">\\</span><span class="s">d+&#34;</span></code></pre></div>
<section class="doc-section level-2"><h3 id="_mirror_workflow"><code>mirror</code> workflow</h3><p>The <code>mirror</code> workflow uses the <code>envsubst</code> command to create a configuration file, replacing the <code>REGISTRY</code>
environment variable (computed by removing the protocol from the <code>server_url</code>), the <code>REGISTRY_USER</code> organization
variable and the <code>REGISTRY_PASS</code> secret, and then runs the <code>regsync</code> tool to mirror the images using that file.</p>
<p>The workflow runs daily, on push events that modify the <code>regsync.envsubst.yml</code> file on the <code>main</code>
branch, and can also be triggered manually.</p>
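<p>The computation of the <code>REGISTRY</code> value is just a protocol strip; as a standalone sketch with an example URL (in the workflow the value comes from <code>github.server_url</code>):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Example value; in the workflow this comes from github.server_url
server_url="https://forgejo.mixinet.net"
# Strip the protocol to get the registry host
REGISTRY="$(echo "$server_url" | sed -e 's%https://%%')"
echo "$REGISTRY"   # forgejo.mixinet.net</code></pre></div>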
<p>The source code of the action is as follows:</p>
<figure class="listing-block"><figcaption>.forgejo/workflows/mirror.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">name</span><span class="pi">:</span> <span class="s">mirror</span>
<span class="na">on</span><span class="pi">:</span>
  <span class="na">schedule</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">cron</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">@daily&#39;</span>
  <span class="na">push</span><span class="pi">:</span>
    <span class="na">branches</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">main</span>
    <span class="na">paths</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">regsync.envsubst.yml&#39;</span>
  <span class="na">workflow_dispatch</span><span class="pi">:</span>
<span class="na">jobs</span><span class="pi">:</span>
  <span class="na">mirror</span><span class="pi">:</span>
    <span class="na">if</span><span class="pi">:</span> <span class="s">${{ vars.REGISTRY_USER != &#39;&#39; &amp;&amp; secrets.REGISTRY_PASS != &#39;&#39; }}</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">docker</span>
    <span class="na">container</span><span class="pi">:</span>
      <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/node-mixinet:latest</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Sync images</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">REGISTRY=&#34;$(echo &#34;${{ github.server_url }}&#34; | sed -e &#39;s%https://%%&#39;)&#34; \</span>
          <span class="s">REGISTRY_USER=&#34;${{ vars.REGISTRY_USER }}&#34; \</span>
          <span class="s">REGISTRY_PASS=&#34;${{ secrets.REGISTRY_PASS }}&#34; \</span>
            <span class="s">envsubst &lt;regsync.envsubst.yml &gt;.regsync.yml</span>
          <span class="s">regsync --config .regsync.yml once</span>
          <span class="s">rm -f .regsync.yml</span></code></pre></figure></section></section>
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p>We have installed a <code>forgejo-runner</code> and configured it to run actions for our own server, and things are working fine.</p>
<p>This approach allows us to have a powerful CI/CD system on a modest home server, something very useful for maintaining
personal projects and playing with things without needing SaaS platforms like <a href="https://github.com/" target="_blank" rel="noopener">github</a> or
<a href="https://gitlab.com/" target="_blank" rel="noopener">gitlab</a>.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Testing DeepSeek with Ollama and Open WebUI</title>
      <link>https://blogops.mixinet.net/posts/testing_deepseek_with_ollama_and_open_webui/</link>
      <pubDate>Mon, 03 Feb 2025 18:20:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/testing_deepseek_with_ollama_and_open_webui/</guid>
      <description>&lt;p&gt;With all the recent buzz about &lt;a href=&#34;https://www.deepseek.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;DeepSeek&lt;/a&gt; and its capabilities, I’ve decided to give it a try
using &lt;a href=&#34;https://ollama.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Ollama&lt;/a&gt; and &lt;a href=&#34;https://openwebui.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Open WebUI&lt;/a&gt; on my work laptop which has an NVIDIA GPU:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;console&#34;&gt;&lt;span class=&#34;gp&#34;&gt;$&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;lspci | &lt;span class=&#34;nb&#34;&gt;grep &lt;/span&gt;NVIDIA
&lt;span class=&#34;go&#34;&gt;0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;For the installation I initially looked into the approach suggested in this
&lt;a href=&#34;https://www.theregister.com/2025/01/26/deepseek_r1_ai_cot/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;article&lt;/a&gt;, but after reviewing it I decided to go for a
&lt;code&gt;docker&lt;/code&gt; only approach, as it leaves my system clean and updates are easier.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_step_0_install_docker&#34;&gt;Step 0: Install docker&lt;/h2&gt;&lt;p&gt;I already had it on my machine, so nothing to do here.&lt;/p&gt;&lt;/section&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_step_1_install_the_nvidia_container_toolkit_package&#34;&gt;Step 1: Install the nvidia-container-toolkit package&lt;/h2&gt;&lt;p&gt;As it is needed to use the NVIDIA GPU with &lt;code&gt;docker&lt;/code&gt; I followed the instructions to install the package using
&lt;a href=&#34;https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-with-apt&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;apt&lt;/a&gt;
from the NVIDIA website.&lt;/p&gt;&lt;/section&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_step_2_run_the_open_webui_container_bundled_with_ollama&#34;&gt;Step 2: Run the Open WebUI container bundled with Ollama&lt;/h2&gt;&lt;p&gt;I could install ollama directly on &lt;a href=&#34;https://github.com/ollama/ollama/blob/main/docs/linux.md&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;linux&lt;/a&gt; or run it on
&lt;a href=&#34;https://github.com/ollama/ollama/blob/main/docs/docker.md&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;docker&lt;/a&gt;, but I found out that there is a container with
&lt;a href=&#34;https://openwebui.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Open WebUI&lt;/a&gt; bundled with &lt;a href=&#34;https://ollama.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Ollama&lt;/a&gt;, so I decided to use
&lt;a href=&#34;https://docs.openwebui.com/#open-webui-bundled-with-ollama&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;it&lt;/a&gt; instead.&lt;/p&gt;
&lt;p&gt;To start the container I’ve executed the following command:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;shell&#34;&gt;docker run &lt;span class=&#34;nt&#34;&gt;-d&lt;/span&gt; &lt;span class=&#34;se&#34;&gt;\&lt;/span&gt;
  &lt;span class=&#34;nt&#34;&gt;-e&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;OLLAMA_HOST&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;0.0.0.0:11434&amp;#34;&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-p&lt;/span&gt; 127.0.0.1:11434:11434 &lt;span class=&#34;se&#34;&gt;\&lt;/span&gt;
  &lt;span class=&#34;nt&#34;&gt;-p&lt;/span&gt; 127.0.0.1:3000:8080 &lt;span class=&#34;se&#34;&gt;\&lt;/span&gt;
  &lt;span class=&#34;nt&#34;&gt;-v&lt;/span&gt; ollama:/root/.ollama &lt;span class=&#34;se&#34;&gt;\&lt;/span&gt;
  &lt;span class=&#34;nt&#34;&gt;-v&lt;/span&gt; open-webui:/app/backend/data &lt;span class=&#34;se&#34;&gt;\&lt;/span&gt;
  &lt;span class=&#34;nt&#34;&gt;--gpus&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;all &lt;span class=&#34;nt&#34;&gt;--name&lt;/span&gt; open-webui &lt;span class=&#34;nt&#34;&gt;--restart&lt;/span&gt; always &lt;span class=&#34;se&#34;&gt;\&lt;/span&gt;
  ghcr.io/open-webui/open-webui:ollama&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>With all the recent buzz about <a href="https://www.deepseek.com/" target="_blank" rel="noopener">DeepSeek</a> and its capabilities, I’ve decided to give it a try
using <a href="https://ollama.com/" target="_blank" rel="noopener">Ollama</a> and <a href="https://openwebui.com/" target="_blank" rel="noopener">Open WebUI</a> on my work laptop which has an NVIDIA GPU:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>lspci | <span class="nb">grep </span>NVIDIA
<span class="go">0000:01:00.0 3D controller: NVIDIA Corporation GA107GLM [RTX A2000 8GB Laptop GPU]
             (rev a1)</span></code></pre></div>
<p>For the installation I initially looked into the approach suggested in this
<a href="https://www.theregister.com/2025/01/26/deepseek_r1_ai_cot/" target="_blank" rel="noopener">article</a>, but after reviewing it I decided to go for a
<code>docker</code> only approach, as it leaves my system clean and updates are easier.</p>
<section class="doc-section level-1"><h2 id="_step_0_install_docker">Step 0: Install docker</h2><p>I already had it on my machine, so nothing to do here.</p></section>
<section class="doc-section level-1"><h2 id="_step_1_install_the_nvidia_container_toolkit_package">Step 1: Install the nvidia-container-toolkit package</h2><p>As it is needed to use the NVIDIA GPU with <code>docker</code> I followed the instructions to install the package using
<a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-with-apt" target="_blank" rel="noopener">apt</a>
from the NVIDIA website.</p></section>
<section class="doc-section level-1"><h2 id="_step_2_run_the_open_webui_container_bundled_with_ollama">Step 2: Run the Open WebUI container bundled with Ollama</h2><p>I could install ollama directly on <a href="https://github.com/ollama/ollama/blob/main/docs/linux.md" target="_blank" rel="noopener">linux</a> or run it on
<a href="https://github.com/ollama/ollama/blob/main/docs/docker.md" target="_blank" rel="noopener">docker</a>, but I found out that there is a container with
<a href="https://openwebui.com/" target="_blank" rel="noopener">Open WebUI</a> bundled with <a href="https://ollama.com/" target="_blank" rel="noopener">Ollama</a>, so I decided to use
<a href="https://docs.openwebui.com/#open-webui-bundled-with-ollama" target="_blank" rel="noopener">it</a> instead.</p>
<p>To start the container I’ve executed the following command:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">docker run <span class="nt">-d</span> <span class="se">\</span>
  <span class="nt">-e</span> <span class="nv">OLLAMA_HOST</span><span class="o">=</span><span class="s2">&#34;0.0.0.0:11434&#34;</span> <span class="nt">-p</span> 127.0.0.1:11434:11434 <span class="se">\</span>
  <span class="nt">-p</span> 127.0.0.1:3000:8080 <span class="se">\</span>
  <span class="nt">-v</span> ollama:/root/.ollama <span class="se">\</span>
  <span class="nt">-v</span> open-webui:/app/backend/data <span class="se">\</span>
  <span class="nt">--gpus</span><span class="o">=</span>all <span class="nt">--name</span> open-webui <span class="nt">--restart</span> always <span class="se">\</span>
  ghcr.io/open-webui/open-webui:ollama</code></pre></div>
<p>Once the container is up and running, I can access the Open WebUI on <a class="bare" href="http://localhost:3000/">http://localhost:3000/</a> and the Ollama API on
<a class="bare" href="http://localhost:11434/">http://localhost:11434/</a> (I added the <code>OLLAMA_HOST</code> environment variable to bind the server to all the container IPs,
as that is needed to expose the API to the host).</p></section>
<section class="doc-section level-1"><h2 id="_step_3_install_deepseek_r1_model_from_open_webui">Step 3: Install DeepSeek-R1 model from Open WebUI</h2><p>After creating an <code>admin</code> user on the Open WebUI, I went to the <code>select a model</code> button and asked to download the
<a href="https://ollama.com/library/deepseek-r1" target="_blank" rel="noopener">deepseek-r1</a> model (at the time of writing this it matches version
<a href="https://ollama.com/library/deepseek-r1:7b" target="_blank" rel="noopener">deepseek-r1:7b</a>, which uses 4.7GB of space).</p>
<p>After the download was completed I was able to start chatting with the model on my local machine.</p></section>
<section class="doc-section level-1"><h2 id="_step_4_ask_a_question">Step 4: Ask a question</h2><p>The first question I asked was <em>Can you tell me what is Debian?</em> and the answer I got was:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="markdown">Debian is one of the most popular Linux distributions used worldwide. It is an
open-source operating system that has been around since 1993 and is known for
its stability, security, and reliability. Debian is widely used in servers, data
centers, and enterprise environments because it provides a robust base platform
with excellent documentation, tools, and community support.

<span class="gu">### Key Features of Debian:</span>
<span class="p">
1.</span> <span class="gs">**History**</span>:
<span class="p">   -</span> Debian started as a project called &#34;Pop!!&#34; (Pootle) led by Lars Knutsen, a
     developer from Denmark.
<span class="p">   -</span> It was initially forked from the Linux distribution Red Hat&#39;s RH-Debian to
     gain independence and reduce reliance on Red Hat.
<span class="p">
2.</span> <span class="gs">**Use Cases**</span>:
<span class="p">   -</span> Debian is commonly used as an operating system for servers due to its
     reliability and ease of use in enterprise environments.
<span class="p">   -</span> It is also used for individual user machines (laptops, desktops) because it
     offers a smooth and stable experience.
<span class="p">   -</span> Debian provides the foundation for many popular derivatives like Ubuntu,
     which are designed for consumer use.
<span class="p">
3.</span> <span class="gs">**Supported Architectures**</span>:
<span class="p">   -</span> Debian supports all major Linux architectures, including 32-bit and 64-bit
     systems, and runs on a wide range of hardware from old servers to modern
     smartphones.
<span class="p">
4.</span> <span class="gs">**Package Management**</span>:
<span class="p">   -</span> Debian comes with a robust package management system called <span class="ge">**</span>Debian Policy
     Language (DPL)<span class="ge">**</span>, which allows users to easily install, update, and manage
     software packages.
<span class="p">   -</span> This makes it easy for users to keep their systems up to date without
     needing to download multiple repositories or deal with version conflicts.
<span class="p">
5.</span> <span class="gs">**Security Focus**</span>:
<span class="p">   -</span> Debian is known for its strong security practices. It has a reputation for
     being one of the most stable Linux distributions, making it a favorite
     among organizations that prioritize security.
<span class="p">   -</span> The distribution regularly updates its software packages to patch
     vulnerabilities and protect against common attacks.
<span class="p">
6.</span> <span class="gs">**Compatibility with Servers**</span>:
<span class="p">   -</span> Debian is widely used in data centers because it provides a consistent and
     reliable environment for running servers and applications.
<span class="p">   -</span> Many open-source projects are compatible with Debian, making it an ideal
     choice for developers who want to work on servers or distributed systems.
<span class="p">
7.</span> <span class="gs">**Popularity**</span>:
<span class="p">   -</span> Debian has a large and active community of users and contributors.
<span class="p">   -</span> It is one of the most widely used Linux distributions in the world, with
     many enterprise organizations relying on it as their primary operating
     system.

<span class="gu">### Comparison with Other Distributions:</span>
<span class="p">-</span> <span class="gs">**Ubuntu**</span>: Ubuntu is based on Debian but adds graphical user interfaces (like
  Gutsy Gibbon), modern tools (like GIMP and LibreOffice), and integrates with
  Microsoft services.
<span class="p">-</span> <span class="gs">**RHEL (Red Hat Enterprise Linux)**</span>: RHEL is a derivative of Red Hat&#39;s
  distribution, which forked from Debian. It focuses more heavily on enterprise
  security and support compared to Debian.

<span class="gu">### Why Debian?</span>
Debian is popular because:
<span class="p">-</span> It is free and open-source, making it accessible to anyone.
<span class="p">-</span> It provides excellent documentation and tools for system administrators.
<span class="p">-</span> It has a strong focus on stability and reliability.
<span class="p">-</span> It supports a wide range of hardware and software needs.

If you&#39;re looking for a reliable and flexible Linux distribution that works well
in both personal and enterprise environments, Debian is an excellent choice.</code></pre></div>
<p>I haven’t tried the same question with ChatGPT, but the <em>History</em>, <em>Package Management</em> and <em>Comparison with Other
Distributions</em> sections are hilarious; still, they provided me with some important data:</p>
<div class="ulist"><ul><li>Now I know that APT is the real <em>Debian Project Leader</em>; what I don’t know is why we have elections each year …​ ;)</li><li>Linux Distributions are recursive …​ Debian was a fork of Red Hat, which was a fork of Debian, which was a fork of
Red Hat, …​</li></ul></div>
<p>As everybody is testing the model I will not talk more about the chat and its results; I just thought that this answer
was really funny.</p></section>
<section class="doc-section level-1"><h2 id="_step_5_install_the_deepseek_coder_and_deepseek_coder_v2_models_from_open_webui">Step 5: Install the DeepSeek Coder and DeepSeek Coder v2 models from Open WebUI</h2><p>As done before, to download the models I went to the <code>select a model</code> button and asked to download the
<a href="https://ollama.com/library/deepseek-coder" target="_blank" rel="noopener">deepseek-coder</a> and
<a href="https://ollama.com/library/deepseek-coder-v2" target="_blank" rel="noopener">deepseek-coder-v2</a> models (the default version of the first model is said to
be really quick and small, while version two is supposed to be better but slower and bigger, so I decided to install
both for testing).</p></section>
<section class="doc-section level-1"><h2 id="_step_6_integrate_ollama_with_neovim">Step 6: Integrate Ollama with Neovim</h2><p>For some months now I’ve been using <a href="https://github.com/github/copilot.vim" target="_blank" rel="noopener">Github Copilot with Neovim</a>; I don’t
feel it has been very helpful in the general case, but I wanted to try it and it comes in handy when you need to perform
repetitive programming tasks.</p>
<p>It seems that there are multiple <code>neovim</code> plugins that support <code>ollama</code>; for now I’ve installed and configured the
<a href="https://codecompanion.olimorris.dev/" target="_blank" rel="noopener">codecompanion</a> plugin in my <code>config.lua</code> file using <code>packer</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="lua"><span class="nb">require</span><span class="p">(</span><span class="s1">&#39;packer&#39;</span><span class="p">).</span><span class="n">startup</span><span class="p">(</span><span class="k">function</span><span class="p">()</span>
  <span class="p">[</span><span class="o">...</span><span class="p">]</span>
  <span class="c1">-- Codecompanion plugin</span>
  <span class="n">use</span> <span class="p">{</span>
    <span class="s2">&#34;olimorris/codecompanion.nvim&#34;</span><span class="p">,</span>
    <span class="n">requires</span> <span class="o">=</span> <span class="p">{</span>
      <span class="s2">&#34;nvim-lua/plenary.nvim&#34;</span><span class="p">,</span>
      <span class="s2">&#34;nvim-treesitter/nvim-treesitter&#34;</span><span class="p">,</span>
    <span class="p">}</span>
  <span class="p">}</span>
  <span class="p">[</span><span class="o">...</span><span class="p">]</span>
<span class="k">end</span><span class="p">)</span>
<span class="p">[</span><span class="o">...</span><span class="p">]</span>
<span class="c1">-- --------------------------------</span>
<span class="c1">-- BEG: Codecompanion configuration</span>
<span class="c1">-- --------------------------------</span>
<span class="c1">-- Module setup</span>
<span class="kd">local</span> <span class="n">codecompanion</span> <span class="o">=</span> <span class="nb">require</span><span class="p">(</span><span class="s1">&#39;codecompanion&#39;</span><span class="p">).</span><span class="n">setup</span><span class="p">({</span>
  <span class="n">adapters</span> <span class="o">=</span> <span class="p">{</span>
    <span class="n">ollama</span> <span class="o">=</span> <span class="k">function</span><span class="p">()</span>
      <span class="k">return</span> <span class="nb">require</span><span class="p">(</span><span class="s1">&#39;codecompanion.adapters&#39;</span><span class="p">).</span><span class="n">extend</span><span class="p">(</span><span class="s1">&#39;ollama&#39;</span><span class="p">,</span> <span class="p">{</span>
        <span class="n">schema</span> <span class="o">=</span> <span class="p">{</span>
          <span class="n">model</span> <span class="o">=</span> <span class="p">{</span>
            <span class="n">default</span> <span class="o">=</span> <span class="s1">&#39;deepseek-coder-v2:latest&#39;</span><span class="p">,</span>
          <span class="p">}</span>
        <span class="p">},</span>
      <span class="p">})</span>
    <span class="k">end</span><span class="p">,</span>
  <span class="p">},</span>
  <span class="n">strategies</span> <span class="o">=</span> <span class="p">{</span>
    <span class="n">chat</span> <span class="o">=</span> <span class="p">{</span> <span class="n">adapter</span> <span class="o">=</span> <span class="s1">&#39;ollama&#39;</span><span class="p">,</span> <span class="p">},</span>
    <span class="n">inline</span> <span class="o">=</span> <span class="p">{</span> <span class="n">adapter</span> <span class="o">=</span> <span class="s1">&#39;ollama&#39;</span><span class="p">,</span> <span class="p">},</span>
  <span class="p">},</span>
<span class="p">})</span>
<span class="c1">-- --------------------------------</span>
<span class="c1">-- END: Codecompanion configuration</span>
<span class="c1">-- --------------------------------</span></code></pre></div>
<p>I’ve tested it a little bit and it seems to work fine, but I’ll have to use it more to see if it is really useful; I’ll
try to do that on future projects.</p></section>
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p>At a personal level I neither like nor trust AI systems, but as long as they are treated as <strong>tools</strong> and not as a magical
thing you must trust, they have their uses, and I’m happy to see open source tools like <strong>Ollama</strong> and models like
<strong>DeepSeek</strong> available for everyone to use.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Running a Debian Sid on Ubuntu</title>
      <link>https://blogops.mixinet.net/posts/incus/</link>
      <pubDate>Mon, 27 Jan 2025 13:13:13 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/incus/</guid>
      <description>&lt;p&gt;Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main
machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don’t
believe it is really necessary or justified (I don’t need support, I don’t provide support to others and I usually test
my shell scripts on multiple systems if needed anyway).&lt;/p&gt;
&lt;p&gt;Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son as the one he was using (an old
Dell XPS 13) was stolen from him a year ago.&lt;/p&gt;
&lt;p&gt;I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS),
but I don’t have a Debian Sid machine anymore and while I could reinstall my work machine, I’ve decided I’m going to try
to use a system container to run Debian Sid on it.&lt;/p&gt;
&lt;p&gt;As I want to use a container instead of a VM I’ve narrowed my options to &lt;code&gt;lxc&lt;/code&gt; or &lt;code&gt;systemd-nspawn&lt;/code&gt; (I have &lt;code&gt;docker&lt;/code&gt; and
&lt;code&gt;podman&lt;/code&gt; installed, but I don’t believe they are good options for running system containers).&lt;/p&gt;
&lt;p&gt;As I will want to take snapshots of the container filesystem I’ve decided to try
&lt;a href=&#34;https://linuxcontainers.org/incus/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;incus&lt;/a&gt; instead of &lt;a href=&#34;https://wiki.debian.org/nspawn&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;systemd-nspawn&lt;/a&gt; (I already have
experience with it and while it works well it has fewer features than &lt;code&gt;incus&lt;/code&gt;).&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_installing_incus&#34;&gt;Installing incus&lt;/h2&gt;&lt;p&gt;As this is a personal system where I want to try things, instead of using the packages included with Ubuntu I’ve decided
to install the ones from the &lt;a href=&#34;https://github.com/zabbly/incus?tab=readme-ov-file#stable-repository&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;zabbly incus stable
repository&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To do it I’ve executed the following as &lt;code&gt;root&lt;/code&gt;:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;shell&#34;&gt;&lt;span class=&#34;c&#34;&gt;# Get the zabbly repository GPG key&lt;/span&gt;
curl &lt;span class=&#34;nt&#34;&gt;-fsSL&lt;/span&gt; https://pkgs.zabbly.com/key.asc &lt;span class=&#34;nt&#34;&gt;-o&lt;/span&gt; /etc/apt/keyrings/zabbly.asc
&lt;span class=&#34;c&#34;&gt;# Create the zabbly-incus-stable.sources file&lt;/span&gt;
sh &lt;span class=&#34;nt&#34;&gt;-c&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;cat &amp;lt;&amp;lt;EOF &amp;gt; /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release &amp;amp;&amp;amp; echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF&amp;#39;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>Although I am a Debian Developer (not very active, BTW) I am using Ubuntu LTS (right now version 24.04.1) on my main
machine; it is my work laptop and I was told to keep using Ubuntu on it when it was assigned to me, although I don’t
believe it is really necessary or justified (I don’t need support, I don’t provide support to others and I usually test
my shell scripts on multiple systems if needed anyway).</p>
<p>Initially I kept using Debian Sid on my personal laptop, but I gave it to my oldest son as the one he was using (an old
Dell XPS 13) was stolen from him a year ago.</p>
<p>I am still using Debian stable on my servers (one at home that also runs LXC containers and another one on an OVH VPS),
but I don’t have a Debian Sid machine anymore and while I could reinstall my work machine, I’ve decided I’m going to try
to use a system container to run Debian Sid on it.</p>
<p>As I want to use a container instead of a VM I’ve narrowed my options to <code>lxc</code> or <code>systemd-nspawn</code> (I have <code>docker</code> and
<code>podman</code> installed, but I don’t believe they are good options for running system containers).</p>
<p>As I will want to take snapshots of the container filesystem I’ve decided to try
<a href="https://linuxcontainers.org/incus/" target="_blank" rel="noopener">incus</a> instead of <a href="https://wiki.debian.org/nspawn" target="_blank" rel="noopener">systemd-nspawn</a> (I already have
experience with it and while it works well it has fewer features than <code>incus</code>).</p>
<section class="doc-section level-1"><h2 id="_installing_incus">Installing incus</h2><p>As this is a personal system where I want to try things, instead of using the packages included with Ubuntu I’ve decided
to install the ones from the <a href="https://github.com/zabbly/incus?tab=readme-ov-file#stable-repository" target="_blank" rel="noopener">zabbly incus stable
repository</a>.</p>
<p>To do it I’ve executed the following as <code>root</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c"># Get the zabbly repository GPG key</span>
curl <span class="nt">-fsSL</span> https://pkgs.zabbly.com/key.asc <span class="nt">-o</span> /etc/apt/keyrings/zabbly.asc
<span class="c"># Create the zabbly-incus-stable.sources file</span>
sh <span class="nt">-c</span> <span class="s1">&#39;cat &lt;&lt;EOF &gt; /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release &amp;&amp; echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF&#39;</span></code></pre></div>
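<p>The two command substitutions inside that heredoc are what make the sources file match the host. The following standalone sketch prints what they expand to; the <code>unknown</code> and <code>uname -m</code> fallbacks are my additions for portability, not part of the original command:</p>

```shell
# What the "Suites:" and "Architectures:" lines expand to on the current host.
# The ":-unknown" and "uname -m" fallbacks are additions for portability.
suite="$(
  if [ -r /etc/os-release ]; then . /etc/os-release; fi
  echo "${VERSION_CODENAME:-unknown}"
)"
arch="$(dpkg --print-architecture 2>/dev/null || uname -m)"
echo "Suites: $suite"
echo "Architectures: $arch"
```

<p>On an Ubuntu 24.04 amd64 machine the values would be <code>noble</code> and <code>amd64</code>.</p>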
<p>Initially I only plan to use the command line tools, so I’ve installed the <code>incus</code> and the <code>incus-extra</code> packages, but
once things work I’ll probably install the <code>incus-ui-canonical</code> package too, at least for testing it:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">apt update
apt <span class="nb">install </span>incus incus-extra</code></pre></div></section>
<section class="doc-section level-1"><h2 id="_adding_my_personal_user_to_the_incus_admin_group">Adding my personal user to the <code>incus-admin</code> group</h2><p>To be able to run <code>incus</code> commands as my personal user I’ve added it to the <code>incus-admin</code> group:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nb">sudo </span>adduser <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-un</span><span class="si">)</span><span class="s2">&#34;</span> incus-admin</code></pre></div>
<p>And I’ve logged out and in again of my desktop session to make the changes effective.</p></section>
<section class="doc-section level-1"><h2 id="_initializing_the_incus_environment">Initializing the incus environment</h2><p>To configure the incus environment I’ve executed the <code>incus admin init</code> command and
accepted the defaults for all the questions, as they are good enough for my current use case.</p></section>
<section class="doc-section level-1"><h2 id="_creating_a_debian_container">Creating a Debian container</h2><p>To create a Debian container I’ve used the default <code>debian/trixie</code> image:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus launch images:debian/trixie debian</code></pre></div>
<p>This command downloads the image and creates a container named <code>debian</code> using the <code>default</code> profile.</p>
<p>The <code>exec</code> command can be used to run a <code>root</code> login shell inside the container:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus <span class="nb">exec </span>debian <span class="nt">--</span> su <span class="nt">-l</span></code></pre></div>
<p>Instead of <code>exec</code> we can use the <code>shell</code> alias:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus shell debian</code></pre></div>
<p>which does the same as the previous command.</p>
<p>Inside that shell we can try to update the machine to <code>sid</code> changing the <code>/etc/apt/sources.list</code> file and using <code>apt</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">root@debian:~# <span class="nb">echo</span> <span class="s2">&#34;deb http://deb.debian.org/debian sid main contrib non-free&#34;</span> <span class="se">\</span>
  <span class="o">&gt;</span>/etc/apt/sources.list
root@debian:~# apt update
root@debian:~# apt dist-upgrade</code></pre></div>
<p>As my machine has <code>docker</code> installed, the <code>apt update</code> command fails because the network does not work; to fix it I’ve
executed the commands of the following section and re-run the <code>apt update</code> and <code>apt dist-upgrade</code> commands.</p></section>
<section class="doc-section level-1"><h2 id="_making_the_incusbr0_bridge_work_with_docker">Making the incusbr0 bridge work with Docker</h2><p>To avoid problems with <code>docker</code> networking we have to add rules for the <code>incusbr0</code> bridge to the <code>DOCKER-USER</code> chain as
follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nb">sudo </span>iptables <span class="nt">-I</span> DOCKER-USER <span class="nt">-i</span> incusbr0 <span class="nt">-j</span> ACCEPT
<span class="nb">sudo </span>iptables <span class="nt">-I</span> DOCKER-USER <span class="nt">-o</span> incusbr0 <span class="nt">-m</span> conntrack <span class="se">\</span>
  <span class="nt">--ctstate</span> RELATED,ESTABLISHED <span class="nt">-j</span> ACCEPT</code></pre></div>
<p>That makes things work now, but to make the rules persistent across reboots we need to add them each time the machine
boots.</p>
<p>As suggested by the <code>incus</code> documentation I’ve installed the <code>iptables-persistent</code> package (my command also purges the
<code>ufw</code> package, as I was not using it) and saved the current rules when installing:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nb">sudo </span>apt <span class="nb">install </span>iptables-persistent <span class="nt">--purge</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_integrating_the_dns_resolution_of_the_container_with_the_host">Integrating the DNS resolution of the container with the host</h2><p>To make DNS resolution for the incus containers work from the host I’ve followed the
<a href="https://linuxcontainers.org/incus/docs/main/howto/network_bridge_resolved/#network-bridge-resolved" target="_blank" rel="noopener">incus documentation</a>.</p>
<p>To set up things manually I’ve run the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">br</span><span class="o">=</span><span class="s2">&#34;incusbr0&#34;</span><span class="p">;</span>
<span class="nv">br_ipv4</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>incus network get <span class="s2">&#34;</span><span class="nv">$br</span><span class="s2">&#34;</span> ipv4.address<span class="si">)</span><span class="s2">&#34;</span><span class="p">;</span>
<span class="nv">br_domain</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>incus network get <span class="s2">&#34;</span><span class="nv">$br</span><span class="s2">&#34;</span> dns.domain<span class="si">)</span><span class="s2">&#34;</span><span class="p">;</span>
<span class="nv">dns_address</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">br_ipv4</span><span class="p">%/*</span><span class="k">}</span><span class="s2">&#34;</span><span class="p">;</span>
<span class="nv">dns_domain</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">br_domain</span>:<span class="p">=incus</span><span class="k">}</span><span class="s2">&#34;</span><span class="p">;</span>
resolvectl dns <span class="s2">&#34;</span><span class="nv">$br</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">dns_address</span><span class="k">}</span><span class="s2">&#34;</span><span class="p">;</span>
resolvectl domain <span class="s2">&#34;</span><span class="nv">$br</span><span class="s2">&#34;</span> <span class="s2">&#34;~</span><span class="k">${</span><span class="nv">dns_domain</span><span class="k">}</span><span class="s2">&#34;</span><span class="p">;</span>
resolvectl dnssec <span class="s2">&#34;</span><span class="nv">$br</span><span class="s2">&#34;</span> off<span class="p">;</span>
resolvectl dnsovertls <span class="s2">&#34;</span><span class="nv">$br</span><span class="s2">&#34;</span> off<span class="p">;</span></code></pre></div>
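<p>The two POSIX parameter expansions in that snippet do the interesting work: <code>%/*</code> strips the CIDR prefix length from the bridge address and <code>:=</code> provides a default domain. A standalone sketch with made-up sample values (not real <code>incus network get</code> output):</p>

```shell
# Made-up sample values, not real "incus network get" output
br_ipv4="10.149.225.1/24"
br_domain=""
dns_address="${br_ipv4%/*}"       # remove the shortest "/..." suffix -> 10.149.225.1
dns_domain="${br_domain:=incus}"  # assign and use the default "incus" when unset or empty
echo "$dns_address $dns_domain"   # -> 10.149.225.1 incus
```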
<p>And to make the changes persistent across reboots I’ve created the following service file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">sh <span class="nt">-c</span> <span class="s2">&#34;cat &lt;&lt;EOF | sudo tee /etc/systemd/system/incus-dns-</span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2">.service
[Unit]
Description=Incus per-link DNS configuration for </span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2">
BindsTo=sys-subsystem-net-devices-</span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2">.device
After=sys-subsystem-net-devices-</span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2">.device

[Service]
Type=oneshot
ExecStart=/usr/bin/resolvectl dns </span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2"> </span><span class="k">${</span><span class="nv">dns_address</span><span class="k">}</span><span class="s2">
ExecStart=/usr/bin/resolvectl domain </span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2"> ~</span><span class="k">${</span><span class="nv">dns_domain</span><span class="k">}</span><span class="s2">
ExecStart=/usr/bin/resolvectl dnssec </span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2"> off
ExecStart=/usr/bin/resolvectl dnsovertls </span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2"> off
ExecStopPost=/usr/bin/resolvectl revert </span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2">
RemainAfterExit=yes

[Install]
WantedBy=sys-subsystem-net-devices-</span><span class="k">${</span><span class="nv">br</span><span class="k">}</span><span class="s2">.device
EOF&#34;</span></code></pre></div>
<p>And enabled it:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nb">sudo </span>systemctl daemon-reload
<span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="nt">--now</span> incus-dns-<span class="k">${</span><span class="nv">br</span><span class="k">}</span>.service</code></pre></div>
<p>If all goes well the DNS resolution works from the host:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>host debian.incus
<span class="go">debian.incus has address 10.149.225.121
debian.incus has IPv6 address fd42:1178:afd8:cc2c:216:3eff:fe2b:5cea</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_using_my_host_user_and_home_dir_inside_the_container">Using my host user and home dir inside the container</h2><p>To use my host user and home directory inside the container I need to add the user and group to the container.</p>
<p>First I’ve added my user group with the same GID used on the host:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus <span class="nb">exec </span>debian <span class="nt">--</span> addgroup <span class="nt">--gid</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--group</span><span class="si">)</span><span class="s2">&#34;</span> <span class="nt">--allow-bad-names</span> <span class="se">\</span>
  <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--group</span> <span class="nt">--name</span><span class="si">)</span><span class="s2">&#34;</span></code></pre></div>
<p>Once I have the group I’ve added the user with the same UID and GID as on the host, without defining a password for it:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus <span class="nb">exec </span>debian <span class="nt">--</span> adduser <span class="nt">--uid</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--user</span><span class="si">)</span><span class="s2">&#34;</span> <span class="nt">--gid</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--group</span><span class="si">)</span><span class="s2">&#34;</span> <span class="se">\</span>
  <span class="nt">--comment</span> <span class="s2">&#34;</span><span class="si">$(</span>getent passwd <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--user</span> <span class="nt">--name</span><span class="si">)</span><span class="s2">&#34;</span> | <span class="nb">cut</span> <span class="nt">-d</span> <span class="s1">&#39;:&#39;</span> <span class="nt">-f</span> 5<span class="si">)</span><span class="s2">&#34;</span> <span class="se">\</span>
  <span class="nt">--no-create-home</span> <span class="nt">--disabled-password</span> <span class="nt">--allow-bad-names</span> <span class="se">\</span>
  <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--user</span> <span class="nt">--name</span><span class="si">)</span><span class="s2">&#34;</span></code></pre></div>
<p>Once the user is created we can mount the home directory on the container (we add the <code>shift</code> option to make the
container use the same UID and GID as we do on the host):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus config device add debian home disk <span class="nb">source</span><span class="o">=</span><span class="nv">$HOME</span> <span class="nv">path</span><span class="o">=</span><span class="nv">$HOME</span> <span class="nb">shift</span><span class="o">=</span><span class="nb">true</span></code></pre></div>
<p>We have the <code>shell</code> alias to log in with the <code>root</code> account; now we can add another one to log into the container using the
newly created user:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus <span class="nb">alias </span>add ush <span class="s2">&#34;exec @ARGS@ -- su -l </span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--user</span> <span class="nt">--name</span><span class="si">)</span><span class="s2">&#34;</span></code></pre></div>
<p>To log into the container as our user now we just need to run:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus ush debian</code></pre></div>
<p>To be able to use <code>sudo</code> inside the container we could add our user to the <code>sudo</code> group:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus <span class="nb">exec </span>debian <span class="nt">--</span> adduser <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--user</span> <span class="nt">--name</span><span class="si">)</span><span class="s2">&#34;</span> <span class="s2">&#34;sudo&#34;</span></code></pre></div>
<p>But that requires a password and we don’t have one, so instead we are going to add a file to the <code>/etc/sudoers.d</code>
directory to allow our user to run <code>sudo</code> without a password:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus <span class="nb">exec </span>debian <span class="nt">--</span> <span class="se">\</span>
  sh <span class="nt">-c</span> <span class="s2">&#34;echo &#39;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">--user</span> <span class="nt">--name</span><span class="si">)</span><span class="s2"> ALL = NOPASSWD: ALL&#39; &gt;/etc/sudoers.d/user&#34;</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_accessing_the_container_using_ssh">Accessing the container using ssh</h2><p>To use the container as a real machine and log into it as I do on remote machines I’ve installed the <code>openssh-server</code>
package and authorized my laptop’s public key to log into the container (as we are mounting the home directory from the
host, that allows us to log in without a password from the local machine).</p>
<p>Also, to be able to run X11 applications from the container I’ve adjusted the <code>$HOME/.ssh/config</code> file to always forward
X11 (option <code>ForwardX11 yes</code> for <code>Host debian.incus</code>) and installed the <code>xauth</code> package.</p>
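<p>For reference, the relevant stanza of the <code>$HOME/.ssh/config</code> file would look like this minimal sketch (only the <code>ForwardX11</code> option is mentioned in the text; any other options are up to you):</p>

```
Host debian.incus
  ForwardX11 yes
```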
<p>After that I can log into the container running the command <code>ssh debian.incus</code> and start using it after installing other
interesting tools like <code>neovim</code>, <code>rsync</code>, <code>tmux</code>, etc.</p></section>
<section class="doc-section level-1"><h2 id="_taking_snapshots_of_the_container">Taking snapshots of the container</h2><p>As this is a system container we can take snapshots of it using the <code>incus snapshot</code> command; that can be especially
useful before doing a <code>dist-upgrade</code> so we can roll back if something goes wrong.</p>
<p>To work with container snapshots we use the <code>incus snapshot</code> command; i.e. to create a snapshot we use the <code>create</code>
subcommand:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">incus snapshot create debian</code></pre></div>
<p>The <code>snapshot</code> subcommands include options to list the available snapshots, restore a snapshot, delete a snapshot, etc.</p></section>
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p>Since last week I’ve had a terminal running a <code>tmux</code> session on the Debian Sid container with multiple <code>zsh</code> windows open
(I’ve changed the prompt to be able to notice easily where I am) and it is working as expected.</p>
<p>My plan now is to add some packages and use the container for personal projects so I can work on a Debian Sid system
without having to reinstall my work machine.</p>
<p>I’ll probably write more about it in the future, but for now, I’m happy with the results.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Ghostty Terminal Emulator</title>
      <link>https://blogops.mixinet.net/posts/user_tools/ghostty_terminal_emulator/</link>
      <pubDate>Thu, 23 Jan 2025 11:00:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/user_tools/ghostty_terminal_emulator/</guid>
      <description>&lt;p&gt;For a long time I’ve been using the &lt;a href=&#34;https://gnome-terminator.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Terminator&lt;/a&gt; terminal emulator on Linux machines, but
last week I read a &lt;a href=&#34;https://lwn.net/Articles/1004377/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;LWN article&lt;/a&gt; about a new emulator called
&lt;a href=&#34;https://ghostty.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Ghostty&lt;/a&gt; that looked interesting and I decided to give it a try.&lt;/p&gt;
&lt;p&gt;The author sells it as a fast, feature-rich and cross-platform terminal emulator that follows the &lt;em&gt;zero configuration
philosophy&lt;/em&gt;.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_installation_and_configuration&#34;&gt;Installation and configuration&lt;/h2&gt;&lt;p&gt;I installed the Debian package for Ubuntu 24.04 from the &lt;a href=&#34;https://github.com/mkasberg/ghostty-ubuntu&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;ghostty-ubuntu
project&lt;/a&gt; and started playing with it.&lt;/p&gt;
&lt;p&gt;The first thing I noticed is that the zero configuration part is true; I was able to use the terminal without a
configuration file, although I created one to change the theme and the font size; other than that it worked OK for
me. My &lt;code&gt;$HOME/.config/ghostty/config&lt;/code&gt; file is as simple as:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;toml&#34;&gt;&lt;span class=&#34;py&#34;&gt;font-size&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;14&lt;/span&gt;
&lt;span class=&#34;py&#34;&gt;theme&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;err&#34;&gt;/usr/share/ghostty/themes/iTerm&lt;/span&gt;&lt;span class=&#34;mi&#34;&gt;2&lt;/span&gt; &lt;span class=&#34;err&#34;&gt;Solarized&lt;/span&gt; &lt;span class=&#34;err&#34;&gt;Light&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>For a long time I’ve been using the <a href="https://gnome-terminator.org/" target="_blank" rel="noopener">Terminator</a> terminal emulator on Linux machines, but
last week I read a <a href="https://lwn.net/Articles/1004377/" target="_blank" rel="noopener">LWN article</a> about a new emulator called
<a href="https://ghostty.org/" target="_blank" rel="noopener">Ghostty</a> that looked interesting and I decided to give it a try.</p>
<p>The author sells it as a fast, feature-rich and cross-platform terminal emulator that follows the <em>zero configuration
philosophy</em>.</p>
<section class="doc-section level-1"><h2 id="_installation_and_configuration">Installation and configuration</h2><p>I installed the Debian package for Ubuntu 24.04 from the <a href="https://github.com/mkasberg/ghostty-ubuntu" target="_blank" rel="noopener">ghostty-ubuntu
project</a> and started playing with it.</p>
<p>The first thing I noticed is that the zero configuration claim is true: I was able to use the terminal without a
configuration file. I did create one to change the theme and the font size, but other than that it worked fine for
me; my <code>$HOME/.config/ghostty/config</code> file is as simple as:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="toml"><span class="py">font-size</span><span class="p">=</span><span class="mi">14</span>
<span class="py">theme</span><span class="p">=</span><span class="err">/usr/share/ghostty/themes/iTerm</span><span class="mi">2</span> <span class="err">Solarized</span> <span class="err">Light</span></code></pre></div></section>
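Beyond the theme and the font size there are a lot of other options; as an illustration, here are a couple more keys from the Ghostty configuration reference (the values below are made-up examples, not recommendations):

```toml
# Illustrative extras for $HOME/.config/ghostty/config
# (values are example assumptions, adjust to taste)
font-family = JetBrains Mono
# slight transparency for the window background
background-opacity = 0.95
# keybindings use the trigger=action format
keybind = ctrl+shift+n=new_window
```

The configuration format stays the same simple `key = value` style, so adding options as you need them is painless.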
<section class="doc-section level-1"><h2 id="_starting_the_terminal_maximized">Starting the terminal maximized</h2><p>After playing with the terminal for a while I was put off by the fact that there was no option to start it
maximized, but it seemed likely that someone had already asked for the feature and, if not, I could ask for it myself.</p>
<p>I did a quick search on the project and I found out that there was a merged
<a href="https://github.com/ghostty-org/ghostty/pull/4936" target="_blank" rel="noopener">PR</a> that added the option, so I downloaded the source code, installed
<a href="https://github.com/ziglang/zig/wiki/Install-Zig-from-a-Package-Manager#ubuntu-snap" target="_blank" rel="noopener">Zig</a> and
<a href="https://ghostty.org/docs/install/build" target="_blank" rel="noopener">built</a> the program on my machine.</p>
<p>As the change is going to be included in the next version of the package, I replaced the installed binary with my own
build and kept playing with the terminal.</p></section>
<section class="doc-section level-1"><h2 id="_accessing_remote_machines">Accessing remote machines</h2><p>The first thing I noticed was that when logging into remote machines using <code>ssh</code> the terminal type was not known;
the help section of the project documentation has an entry explaining how to fix
<a href="https://ghostty.org/docs/help/terminfo#ssh" target="_blank" rel="noopener">it</a> by copying the terminfo entry to the remote machine, which is as simple
as running the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">infocmp <span class="nt">-x</span> | ssh YOUR-SERVER <span class="nt">--</span> tic <span class="nt">-x</span> -</code></pre></div></section>
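If copying the terminfo entry is not possible on some host, a workaround is to fall back to a widely available `TERM` value for the session (this `sshx` wrapper is my own sketch and its name is made up, it is not part of the Ghostty docs):

```shell
# sshx: wrapper that overrides TERM so remote hosts without the
# xterm-ghostty terminfo entry still get sane terminal behavior.
# (hypothetical helper, not an official Ghostty tool)
sshx() {
  TERM=xterm-256color ssh "$@"
}

# usage: sshx YOUR-SERVER
```

You lose some Ghostty-specific capabilities this way, but basic full-screen programs keep working.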
<section class="doc-section level-1"><h2 id="_dead_keys_on_ubuntu">Dead keys on Ubuntu</h2><p>With that sorted out everything looked good until I tried to type an accented character while editing a file and the
terminal stopped working.</p>
<p>Again, I searched the project issues and found <a href="https://github.com/ghostty-org/ghostty/issues/3567" target="_blank" rel="noopener">one</a> that matched
what was happening to me, and it reminded me of one of the best things about actively maintained open source
software.</p>
<p>It turns out that the issue is related to a <a href="https://github.com/ibus/ibus/issues/2637" target="_blank" rel="noopener">bug in ibus</a>, but as other terminals
were working fine, the <code>ghostty</code> developer was already working on a fix for the way the terminal handles keyboard
input on GTK, so I subscribed to the issue and stopped using <code>ghostty</code> until there was something new to try (I use
a Spanish keyboard layout and I can’t use a terminal that does not support dead keys).</p>
<p>Yesterday I saw some messages saying things were almost fixed, so I pulled the latest changes into my cloned repository
and compiled it, and writing accented characters works now. There is still a small issue with the cursor (the pressed
dead key is left on the block cursor unless you change the window focus), but that is manageable for me.</p></section>
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p>I think that <code>ghostty</code> is a good terminal emulator and I’m going to keep using it on my laptop unless I find something
annoying that I can’t work around (I hope the cursor issue will be fixed soon; until then I can live with it, as the only
thing I need to do to recover from it is to change the window focus, and that can be done really quickly using keyboard
shortcuts).</p>
<p>As the project is actively maintained and the developer seems quite responsive I don’t expect problems, and it is nice
to play with new things from time to time.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Command line tools to process templates</title>
      <link>https://blogops.mixinet.net/posts/user_tools/cli_template_tools/</link>
      <pubDate>Thu, 16 Jan 2025 11:34:50 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/user_tools/cli_template_tools/</guid>
      <description>&lt;p&gt;I’ve always been a fan of template engines that work with text files, mainly to work with static site generators, but
also to generate code, configuration files, and other text-based files.&lt;/p&gt;
&lt;p&gt;For my own web projects I used to go with &lt;a href=&#34;https://jinja.palletsprojects.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Jinja2&lt;/a&gt;, as all my projects were written
in &lt;a href=&#34;https://python.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Python&lt;/a&gt;, while for static web sites I used the template engines included with the tools I was
using, i.e. &lt;a href=&#34;https://shopify.github.io/liquid/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Liquid&lt;/a&gt; with &lt;a href=&#34;https://jekyllrb.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Jekyll&lt;/a&gt; and
&lt;a href=&#34;https://gohugo.io/templates/introduction/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Go Templates&lt;/a&gt; (based on the &lt;a href=&#34;https://pkg.go.dev/text/template&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;text/template&lt;/a&gt;
and the &lt;a href=&#34;https://pkg.go.dev/html/template&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;html/template&lt;/a&gt; go packages) for &lt;a href=&#34;https://gohugo.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Hugo&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When I needed to generate code snippets or configuration files from shell scripts I used to go with
&lt;a href=&#34;https://man7.org/linux/man-pages/man1/sed.1.html&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;sed&lt;/a&gt; and/or
&lt;a href=&#34;https://man7.org/linux/man-pages/man1/envsubst.1.html&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;envsubst&lt;/a&gt;, but lately things got complicated and I started to use
a command line application called &lt;a href=&#34;https://github.com/sto/tmpl&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;tmpl&lt;/a&gt; that uses the &lt;a href=&#34;https://pkg.go.dev/text/template&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Go
Template Language&lt;/a&gt; with functions from the &lt;a href=&#34;https://masterminds.github.io/sprig/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Sprig&lt;/a&gt; library.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_tmpl&#34;&gt;tmpl&lt;/h2&gt;&lt;p&gt;I’ve been using my fork of the &lt;a href=&#34;https://github.com/sto/tmpl&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;tmpl&lt;/a&gt; program to process templates on CI/CD pipelines
(&lt;code&gt;gitlab-ci&lt;/code&gt;) to generate configuration files and code snippets because it uses the same syntax used by
&lt;a href=&#34;https://helm.sh&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;helm&lt;/a&gt; (easier to use by other &lt;strong&gt;DevOps&lt;/strong&gt; already familiar with the format) and the binary is small and
can be easily included into the docker images used by the pipeline jobs.&lt;/p&gt;
&lt;p&gt;One interesting feature of the &lt;code&gt;tmpl&lt;/code&gt; tool is that it can read values from command line arguments and from multiple
files in different formats (YAML, JSON, TOML, etc) and merge them into a single object that can be used to render the
templates.&lt;/p&gt;
&lt;p&gt;There are alternatives to the &lt;code&gt;tmpl&lt;/code&gt; tool and I’ve looked at some of them (e.g. simple ones like
&lt;a href=&#34;https://github.com/bluebrown/go-template-cli&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;go-template-cli&lt;/a&gt; or complex ones like
&lt;a href=&#34;https://docs.gomplate.ca/&#34;&gt;gomplate&lt;/a&gt;), but I haven’t found one that fits my needs.&lt;/p&gt;
&lt;p&gt;For my next project I plan to evaluate a move to a different tool or template format, as &lt;code&gt;tmpl&lt;/code&gt; is not being actively
maintained (as I said, I’m using my own fork) and it is not included on existing GNU/Linux distributions (I packaged it
for &lt;code&gt;Debian&lt;/code&gt; and &lt;code&gt;Alpine&lt;/code&gt;, but I don’t want to maintain something like that without an active community and I’m not
interested in being the upstream myself, as I’m trying to move to &lt;a href=&#34;https://www.rust-lang.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Rust&lt;/a&gt; instead of
&lt;a href=&#34;https://go.dev/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Go&lt;/a&gt; as the compiled programming language for my projects).&lt;/p&gt;&lt;/section&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_mini_jinja&#34;&gt;Mini Jinja&lt;/h2&gt;&lt;p&gt;Looking for alternate tools to process templates on the command line I found the &lt;a href=&#34;https://docs.rs/minijinja/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;minijinja&lt;/a&gt;
Rust crate, a minimal implementation of the &lt;strong&gt;Jinja2&lt;/strong&gt; template engine that also includes a small command line utility
(&lt;a href=&#34;https://docs.rs/crate/minijinja-cli/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;minijinja-cli&lt;/a&gt;), and I believe I’ll give it a try in the future for various
reasons:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;I’m already familiar with the &lt;strong&gt;Jinja2&lt;/strong&gt; syntax and it is widely used on the industry.&lt;/li&gt;&lt;li&gt;On my code I can use the original &lt;strong&gt;Jinja2&lt;/strong&gt; module for &lt;strong&gt;Python&lt;/strong&gt; projects and &lt;strong&gt;MiniJinja&lt;/strong&gt; for &lt;strong&gt;Rust&lt;/strong&gt; programs.&lt;/li&gt;&lt;li&gt;The included command line utility is small and easy to use, and the binaries distributed by the project are good
enough to add them to the docker container images used by CI/CD pipelines.&lt;/li&gt;&lt;li&gt;As I want to move to &lt;strong&gt;Rust&lt;/strong&gt; I can try to add functionalities to the existing command line client or create my own
version of it if they are needed (don’t think so, but who knows).&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>I’ve always been a fan of template engines that work with text files, mainly to work with static site generators, but
also to generate code, configuration files, and other text-based files.</p>
<p>For my own web projects I used to go with <a href="https://jinja.palletsprojects.com/" target="_blank" rel="noopener">Jinja2</a>, as all my projects were written
in <a href="https://python.org/" target="_blank" rel="noopener">Python</a>, while for static web sites I used the template engines included with the tools I was
using, i.e. <a href="https://shopify.github.io/liquid/" target="_blank" rel="noopener">Liquid</a> with <a href="https://jekyllrb.com/" target="_blank" rel="noopener">Jekyll</a> and
<a href="https://gohugo.io/templates/introduction/" target="_blank" rel="noopener">Go Templates</a> (based on the <a href="https://pkg.go.dev/text/template" target="_blank" rel="noopener">text/template</a>
and the <a href="https://pkg.go.dev/html/template" target="_blank" rel="noopener">html/template</a> go packages) for <a href="https://gohugo.io/" target="_blank" rel="noopener">Hugo</a>.</p>
<p>When I needed to generate code snippets or configuration files from shell scripts I used to go with
<a href="https://man7.org/linux/man-pages/man1/sed.1.html" target="_blank" rel="noopener">sed</a> and/or
<a href="https://man7.org/linux/man-pages/man1/envsubst.1.html" target="_blank" rel="noopener">envsubst</a>, but lately things got complicated and I started to use
a command line application called <a href="https://github.com/sto/tmpl" target="_blank" rel="noopener">tmpl</a> that uses the <a href="https://pkg.go.dev/text/template" target="_blank" rel="noopener">Go
Template Language</a> with functions from the <a href="https://masterminds.github.io/sprig/" target="_blank" rel="noopener">Sprig</a> library.</p>
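For simple cases the substitution approach can be sketched with <code>sed</code> alone (the <code>render</code> helper and the placeholder below are my own made-up example, not a real tool):

```shell
# render: replace ${NAME}-style placeholders with the current value of
# the NAME environment variable (sed used as a portable stand-in for
# envsubst; only this one variable is handled in this sketch).
render() {
  sed "s/\${NAME}/${NAME}/g"
}

NAME="world"
printf 'hello, ${NAME}!\n' | render   # prints: hello, world!
```

This breaks down quickly (values containing <code>/</code>, many variables, conditionals), which is exactly when a real template engine starts to pay off.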
<section class="doc-section level-1"><h2 id="_tmpl">tmpl</h2><p>I’ve been using my fork of the <a href="https://github.com/sto/tmpl" target="_blank" rel="noopener">tmpl</a> program to process templates in CI/CD pipelines
(<code>gitlab-ci</code>) to generate configuration files and code snippets because it uses the same syntax as
<a href="https://helm.sh" target="_blank" rel="noopener">helm</a> (easier for other <strong>DevOps</strong> engineers already familiar with the format) and the binary is small and
can easily be included in the docker images used by the pipeline jobs.</p>
<p>One interesting feature of the <code>tmpl</code> tool is that it can read values from command line arguments and from multiple
files in different formats (YAML, JSON, TOML, etc.) and merge them into a single object that can be used to render the
templates.</p>
<p>There are alternatives to the <code>tmpl</code> tool and I’ve looked at some of them (e.g. simple ones like
<a href="https://github.com/bluebrown/go-template-cli" target="_blank" rel="noopener">go-template-cli</a> or complex ones like
<a href="https://docs.gomplate.ca/">gomplate</a>), but I haven’t found one that fits my needs.</p>
<p>For my next project I plan to evaluate a move to a different tool or template format, as <code>tmpl</code> is not being actively
maintained (as I said, I’m using my own fork) and it is not included in existing GNU/Linux distributions (I packaged it
for <code>Debian</code> and <code>Alpine</code>, but I don’t want to maintain something like that without an active community and I’m not
interested in being the upstream myself, as I’m trying to move to <a href="https://www.rust-lang.org/" target="_blank" rel="noopener">Rust</a> instead of
<a href="https://go.dev/" target="_blank" rel="noopener">Go</a> as the compiled programming language for my projects).</p></section>
<section class="doc-section level-1"><h2 id="_mini_jinja">Mini Jinja</h2><p>Looking for alternate tools to process templates on the command line I found the <a href="https://docs.rs/minijinja/" target="_blank" rel="noopener">minijinja</a>
Rust crate, a minimal implementation of the <strong>Jinja2</strong> template engine that also includes a small command line utility
(<a href="https://docs.rs/crate/minijinja-cli/" target="_blank" rel="noopener">minijinja-cli</a>), and I believe I’ll give it a try in the future for various
reasons:</p>
<div class="ulist"><ul><li>I’m already familiar with the <strong>Jinja2</strong> syntax and it is widely used in the industry.</li><li>In my code I can use the original <strong>Jinja2</strong> module for <strong>Python</strong> projects and <strong>MiniJinja</strong> for <strong>Rust</strong> programs.</li><li>The included command line utility is small and easy to use, and the binaries distributed by the project are good
enough to add them to the docker container images used by CI/CD pipelines.</li><li>As I want to move to <strong>Rust</strong> I can try to add functionality to the existing command line client or create my own
version of it if needed (I don’t think so, but who knows).</li></ul></div></section>
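As a quick way to try it, a minimal template and context can be prepared like this (the file names are my own example; the invocation in the comment is how I understand the documented <code>minijinja-cli</code> usage, template file first and context file second, so treat it as an assumption):

```shell
# A tiny Jinja2 template and JSON context to try with minijinja-cli.
cat > greeting.j2 <<'EOF'
Hello {{ name }}!
EOF
cat > context.json <<'EOF'
{"name": "world"}
EOF
# Render (requires minijinja-cli on PATH):
#   minijinja-cli greeting.j2 context.json
```

The same template and context also work unchanged with the Python <strong>Jinja2</strong> module, which is one of the reasons I like the idea of standardizing on that syntax.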
]]></content:encoded>
    </item>
    
    <item>
      <title>Testing New User Tools</title>
      <link>https://blogops.mixinet.net/posts/user_tools/testing_new_user_tools/</link>
      <pubDate>Fri, 10 Jan 2025 14:35:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/user_tools/testing_new_user_tools/</guid>
      <description>&lt;p&gt;On recent weeks I’ve had some time to &lt;a href=&#34;https://en.m.wiktionary.org/wiki/scratch_one%27s_own_itch&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;scratch my own itch&lt;/a&gt; on
matters related to tools I use daily on my computer, namely the desktop / window manager and my text editor of choice.&lt;/p&gt;
&lt;p&gt;This post is a summary of what I tried, how it worked out and my short and medium-term plans related to them.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_desktop_wm&#34;&gt;Desktop / WM&lt;/h2&gt;&lt;p&gt;On the desktop / window manager front I’ve been using &lt;a href=&#34;https://projects.linuxmint.com/cinnamon/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Cinnamon&lt;/a&gt; on &lt;strong&gt;Debian&lt;/strong&gt;
and &lt;strong&gt;Ubuntu&lt;/strong&gt; systems since &lt;strong&gt;Gnome 3&lt;/strong&gt; was published (I never liked version 3, so I decided to move to something similar
to &lt;a href=&#34;https://gnome.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Gnome 2&lt;/a&gt;, including the keyboard shortcuts).&lt;/p&gt;
&lt;p&gt;In fact I’ve never been a fan of Desktop environments, before &lt;strong&gt;Gnome&lt;/strong&gt; I used &lt;a href=&#34;https://openbox.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;OpenBox&lt;/a&gt; and
&lt;a href=&#34;https://ice-wm.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;IceWM&lt;/a&gt; because they were a lot faster than desktop systems on my hardware at the time and I was
using them only to place one or two windows on multiple workspaces using mainly the keyboard for my interactions (well,
except for the web browsers and the image manipulation programs).&lt;/p&gt;
&lt;p&gt;Although I was comfortable using &lt;strong&gt;Cinnamon&lt;/strong&gt;, some years ago I tried to move to &lt;a href=&#34;https://i3wm.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;i3&lt;/a&gt;, a tiling window
manager for X11 that looked like a good choice for me, but I didn’t have much time to play with it and never used it
enough to make me productive with it (I didn’t prepare a complete configuration nor had enough time to learn the new
shortcuts, so I went back to &lt;strong&gt;Cinnamon&lt;/strong&gt; and never tried again).&lt;/p&gt;
&lt;p&gt;Anyway, some weeks ago I updated my work machine OS (it was using &lt;strong&gt;Ubuntu 22.04 LTS&lt;/strong&gt; and I updated it to the &lt;strong&gt;24.04
LTS&lt;/strong&gt; version) and the &lt;strong&gt;Cinnamon systray applet&lt;/strong&gt; stopped working as it used to do (in fact I still have to restart
&lt;strong&gt;Cinnamon&lt;/strong&gt; after starting a session to make it work) and, as I had some time, I decided to try a tiling window
manager again, this time going for &lt;a href=&#34;https://swaywm.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;SwayWM&lt;/a&gt;, as it uses
&lt;a href=&#34;https://wayland.freedesktop.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Wayland&lt;/a&gt; instead of &lt;a href=&#34;https://x.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;X11&lt;/a&gt;.&lt;/p&gt;
&lt;section class=&#34;doc-section level-2&#34;&gt;&lt;h3 id=&#34;_sway_configuration&#34;&gt;Sway configuration&lt;/h3&gt;&lt;p&gt;On my &lt;code&gt;~/.config/sway/config&lt;/code&gt; I tuned some things:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;Set &lt;a href=&#34;https://codeberg.org/dnkl/fuzzel&#34;&gt;fuzzel&lt;/a&gt; as the application launcher.&lt;/li&gt;&lt;li&gt;Installed manually the &lt;a href=&#34;https://gitlab.com/w0lff/shikane&#34;&gt;shikane&lt;/a&gt; application and created a configuration to be
executed always when &lt;code&gt;sway&lt;/code&gt; is started / reloaded (I adjusted my configuration with &lt;code&gt;wdisplays&lt;/code&gt; and used &lt;code&gt;shikanectl&lt;/code&gt;
to save it).&lt;/li&gt;&lt;li&gt;Added support for &lt;a href=&#34;https://github.com/emersion/xdg-desktop-portal-wlr/blob/master/README.md#running&#34;&gt;running&lt;/a&gt; the
&lt;code&gt;xdg-desktop-portal-wlr&lt;/code&gt; service.&lt;/li&gt;&lt;li&gt;Enabled the &lt;code&gt;swayidle&lt;/code&gt; command to lock the screen after some time of inactivity.&lt;/li&gt;&lt;li&gt;Adjusted the keyboard to use the &lt;code&gt;es&lt;/code&gt; key map.&lt;/li&gt;&lt;li&gt;Added some keybindings to make my life easier, including the use of &lt;code&gt;grim&lt;/code&gt; and &lt;code&gt;swappy&lt;/code&gt; to take screenshots.&lt;/li&gt;&lt;li&gt;Configured &lt;code&gt;waybar&lt;/code&gt; as the environment bar.&lt;/li&gt;&lt;li&gt;&lt;p&gt;Added a shell script to start applications when &lt;code&gt;sway&lt;/code&gt; is started (it uses &lt;code&gt;swaymsg&lt;/code&gt; to execute background commands
and the &lt;a href=&#34;https://gitlab.com/wef/dotfiles/-/blob/master/bin/i3-toolwait&#34;&gt;&lt;code&gt;i3toolwait&lt;/code&gt;&lt;/a&gt; script to wait for the application windows before continuing):&lt;/p&gt;&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;shell&#34;&gt;&lt;span class=&#34;c&#34;&gt;#!/bin/sh&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# VARIABLES&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;CHROMIUM_LOCAL_STATE&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$HOME&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;/.config/google-chrome/Local State&amp;#34;&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$HOME&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;/.config/sway/scripts/i3-toolwait&amp;#34;&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Functions&lt;/span&gt;
chromium_profile_dir&lt;span class=&#34;o&#34;&gt;()&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;{&lt;/span&gt;
  jq &lt;span class=&#34;nt&#34;&gt;-r&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;.profile.info_cache|to_entries|map({(.value.name): .key})|add|.&lt;/span&gt;&lt;span class=&#34;se&#34;&gt;\&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$1&lt;/span&gt;&lt;span class=&#34;se&#34;&gt;\&amp;#34;&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt; // &lt;/span&gt;&lt;span class=&#34;se&#34;&gt;\&amp;#34;\&amp;#34;&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$CHROMIUM_LOCAL_STATE&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;o&#34;&gt;}&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# MAIN&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;IGZ_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;$(&lt;/span&gt;chromium_profile_dir &lt;span class=&#34;s2&#34;&gt;&amp;#34;sergio.talens@intelygenz.com&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;OURO_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;$(&lt;/span&gt;chromium_profile_dir &lt;span class=&#34;s2&#34;&gt;&amp;#34;sergio.talens@nxr.global&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;PERSONAL_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;$(&lt;/span&gt;chromium_profile_dir &lt;span class=&#34;s2&#34;&gt;&amp;#34;stalens@gmail.com&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Common programs&lt;/span&gt;
swaymsg &lt;span class=&#34;s2&#34;&gt;&amp;#34;exec nextcloud --background&amp;#34;&lt;/span&gt;
swaymsg &lt;span class=&#34;s2&#34;&gt;&amp;#34;exec nm-applet&amp;#34;&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Run spotify on the first workspace (it is mapped to the laptop screen)&lt;/span&gt;
swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace 1&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;spotify&amp;#34;&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Run tmux on the second workspace&lt;/span&gt;
swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace 2&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;--&lt;/span&gt; foot tmux a &lt;span class=&#34;nt&#34;&gt;-dt&lt;/span&gt; sto

&lt;span class=&#34;nv&#34;&gt;wp_num&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;3&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$OURO_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
  &lt;/span&gt;swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-m&lt;/span&gt; ouro-browser &lt;span class=&#34;nt&#34;&gt;--&lt;/span&gt; google-chrome &lt;span class=&#34;nt&#34;&gt;--profile-directory&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$OURO_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;nv&#34;&gt;wp_num&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;$((&lt;/span&gt;wp_num+1&lt;span class=&#34;k&#34;&gt;))&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;fi

if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$IGZ_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
  &lt;/span&gt;swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-m&lt;/span&gt; igz-browser &lt;span class=&#34;nt&#34;&gt;--&lt;/span&gt; google-chrome &lt;span class=&#34;nt&#34;&gt;--profile-directory&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$IGZ_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;nv&#34;&gt;wp_num&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;$((&lt;/span&gt;wp_num+1&lt;span class=&#34;k&#34;&gt;))&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;fi

if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$PERSONAL_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
  &lt;/span&gt;swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-m&lt;/span&gt; personal-browser &lt;span class=&#34;nt&#34;&gt;--&lt;/span&gt; google-chrome &lt;span class=&#34;nt&#34;&gt;--profile-directory&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$PERSONAL_PROFILE_DIR&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;nv&#34;&gt;wp_num&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;$((&lt;/span&gt;wp_num+1&lt;span class=&#34;k&#34;&gt;))&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;fi&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Open the browser without setting the profile directory if none was found&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;3&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
  &lt;/span&gt;swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; google-chrome
  &lt;span class=&#34;nv&#34;&gt;wp_num&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;$((&lt;/span&gt;wp_num+1&lt;span class=&#34;k&#34;&gt;))&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;fi
&lt;/span&gt;swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; evolution
&lt;span class=&#34;nv&#34;&gt;wp_num&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;$((&lt;/span&gt;wp_num+1&lt;span class=&#34;k&#34;&gt;))&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;

swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; slack
&lt;span class=&#34;nv&#34;&gt;wp_num&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;$((&lt;/span&gt;wp_num+1&lt;span class=&#34;k&#34;&gt;))&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;

&lt;span class=&#34;c&#34;&gt;# Open a private browser and a console in the last workspace&lt;/span&gt;
swaymsg &lt;span class=&#34;nt&#34;&gt;-q&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$wp_num&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;--&lt;/span&gt; google-chrome &lt;span class=&#34;nt&#34;&gt;--incognito&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; foot

&lt;span class=&#34;c&#34;&gt;# Go back to the second workspace for keepassxc&lt;/span&gt;
swaymsg &lt;span class=&#34;s2&#34;&gt;&amp;#34;workspace 2&amp;#34;&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;${&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;I3_TOOLWAIT&lt;/span&gt;&lt;span class=&#34;k&#34;&gt;}&lt;/span&gt; keepassxc&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
<content:encoded><![CDATA[<p>In recent weeks I’ve had some time to <a href="https://en.m.wiktionary.org/wiki/scratch_one%27s_own_itch" target="_blank" rel="noopener">scratch my own itch</a> on
matters related to tools I use daily on my computer, namely my desktop / window manager and my text editor of choice.</p>
<p>This post is a summary of what I tried, how it worked out, and my short- and medium-term plans for both.</p>
<section class="doc-section level-1"><h2 id="_desktop_wm">Desktop / WM</h2><p>On the desktop / window manager front I’ve been using <a href="https://projects.linuxmint.com/cinnamon/" target="_blank" rel="noopener">Cinnamon</a> on <strong>Debian</strong>
and <strong>Ubuntu</strong> systems since <strong>Gnome 3</strong> was released (I never liked version 3, so I decided to move to something similar
to <a href="https://gnome.org/" target="_blank" rel="noopener">Gnome 2</a>, keyboard shortcuts included).</p>
<p>In fact I’ve never been a fan of desktop environments; before <strong>Gnome</strong> I used <a href="https://openbox.org/" target="_blank" rel="noopener">OpenBox</a> and
<a href="https://ice-wm.org/" target="_blank" rel="noopener">IceWM</a> because they were a lot faster than full desktop environments on my hardware at the time, and I was
using them only to place one or two windows on multiple workspaces, interacting mainly with the keyboard (well,
except for the web browsers and the image manipulation programs).</p>
<p>Although I was comfortable using <strong>Cinnamon</strong>, some years ago I tried to move to <a href="https://i3wm.org/" target="_blank" rel="noopener">i3</a>, a tiling window
manager for X11 that looked like a good choice for me, but I didn’t have much time to play with it and never used it
enough to become productive with it (I didn’t prepare a complete configuration nor had enough time to learn the new
shortcuts, so I went back to <strong>Cinnamon</strong> and never tried again).</p>
<p>Anyway, some weeks ago I updated my work machine’s OS (from <strong>Ubuntu 22.04 LTS</strong> to the <strong>24.04
LTS</strong> version) and the <strong>Cinnamon systray applet</strong> stopped working as it used to (in fact I still have to restart
<strong>Cinnamon</strong> after starting a session to make it work) and, as I had some time, I decided to try a tiling window
manager again. This time I went for <a href="https://swaywm.org/" target="_blank" rel="noopener">SwayWM</a>, as it uses
<a href="https://wayland.freedesktop.org/" target="_blank" rel="noopener">Wayland</a> instead of <a href="https://x.org/" target="_blank" rel="noopener">X11</a>.</p>
<section class="doc-section level-2"><h3 id="_sway_configuration">Sway configuration</h3><p>On my <code>~/.config/sway/config</code> I tuned some things:</p>
<div class="ulist"><ul><li>Set <a href="https://codeberg.org/dnkl/fuzzel">fuzzel</a> as the application launcher.</li><li>Manually installed the <a href="https://gitlab.com/w0lff/shikane">shikane</a> application and created a configuration to be
executed whenever <code>sway</code> is started / reloaded (I adjusted my configuration with <code>wdisplays</code> and used <code>shikanectl</code>
to save it).</li><li>Added support for <a href="https://github.com/emersion/xdg-desktop-portal-wlr/blob/master/README.md#running">running</a> the
<code>xdg-desktop-portal-wlr</code> service.</li><li>Enabled the <code>swayidle</code> command to lock the screen after some time of inactivity.</li><li>Adjusted the keyboard to use the <code>es</code> key map.</li><li>Added some keybindings to make my life easier, including the use of <code>grim</code> and <code>swappy</code> to take screenshots.</li><li>Configured <code>waybar</code> as the environment bar.</li><li><p>Added a shell script to start applications when <code>sway</code> is started (it uses <code>swaymsg</code> to execute background commands
and the <a href="https://gitlab.com/wef/dotfiles/-/blob/master/bin/i3-toolwait"><code>i3-toolwait</code></a> script to wait for the application windows to appear before switching to the next workspace):</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="c"># VARIABLES</span>
<span class="nv">CHROMIUM_LOCAL_STATE</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$HOME</span><span class="s2">/.config/google-chrome/Local State&#34;</span>
<span class="nv">I3_TOOLWAIT</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$HOME</span><span class="s2">/.config/sway/scripts/i3-toolwait&#34;</span>

<span class="c"># Functions</span>
chromium_profile_dir<span class="o">()</span> <span class="o">{</span>
  jq <span class="nt">-r</span> <span class="s2">&#34;.profile.info_cache|to_entries|map({(.value.name): .key})|add|.</span><span class="se">\&#34;</span><span class="nv">$1</span><span class="se">\&#34;</span><span class="s2"> // </span><span class="se">\&#34;\&#34;</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$CHROMIUM_LOCAL_STATE</span><span class="s2">&#34;</span>
<span class="o">}</span>

<span class="c"># MAIN</span>
<span class="nv">IGZ_PROFILE_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>chromium_profile_dir <span class="s2">&#34;sergio.talens@intelygenz.com&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">OURO_PROFILE_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>chromium_profile_dir <span class="s2">&#34;sergio.talens@nxr.global&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">PERSONAL_PROFILE_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>chromium_profile_dir <span class="s2">&#34;stalens@gmail.com&#34;</span><span class="si">)</span><span class="s2">&#34;</span>

<span class="c"># Common programs</span>
swaymsg <span class="s2">&#34;exec nextcloud --background&#34;</span>
swaymsg <span class="s2">&#34;exec nm-applet&#34;</span>

<span class="c"># Run spotify on the first workspace (it is mapped to the laptop screen)</span>
swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace 1&#34;</span>
<span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> <span class="s2">&#34;spotify&#34;</span>

<span class="c"># Run tmux on the second workspace</span>
swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace 2&#34;</span>
<span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> <span class="nt">--</span> foot tmux a <span class="nt">-dt</span> sto

<span class="nv">wp_num</span><span class="o">=</span><span class="s2">&#34;3&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$OURO_PROFILE_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace </span><span class="nv">$wp_num</span><span class="s2">&#34;</span>
  <span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> <span class="nt">-m</span> ouro-browser <span class="nt">--</span> google-chrome <span class="nt">--profile-directory</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$OURO_PROFILE_DIR</span><span class="s2">&#34;</span>
  <span class="nv">wp_num</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">$((</span>wp_num+1<span class="k">))</span><span class="s2">&#34;</span>
<span class="k">fi

if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$IGZ_PROFILE_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace </span><span class="nv">$wp_num</span><span class="s2">&#34;</span>
  <span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> <span class="nt">-m</span> igz-browser <span class="nt">--</span> google-chrome <span class="nt">--profile-directory</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$IGZ_PROFILE_DIR</span><span class="s2">&#34;</span>
  <span class="nv">wp_num</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">$((</span>wp_num+1<span class="k">))</span><span class="s2">&#34;</span>
<span class="k">fi

if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$PERSONAL_PROFILE_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace </span><span class="nv">$wp_num</span><span class="s2">&#34;</span>
  <span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> <span class="nt">-m</span> personal-browser <span class="nt">--</span> google-chrome <span class="nt">--profile-directory</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$PERSONAL_PROFILE_DIR</span><span class="s2">&#34;</span>
  <span class="nv">wp_num</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">$((</span>wp_num+1<span class="k">))</span><span class="s2">&#34;</span>
<span class="k">fi</span>

<span class="c"># Open the browser without setting the profile directory if none was found</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$wp_num</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;3&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace </span><span class="nv">$wp_num</span><span class="s2">&#34;</span>
  <span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> google-chrome
  <span class="nv">wp_num</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">$((</span>wp_num+1<span class="k">))</span><span class="s2">&#34;</span>
<span class="k">fi
</span>swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace </span><span class="nv">$wp_num</span><span class="s2">&#34;</span>
<span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> evolution
<span class="nv">wp_num</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">$((</span>wp_num+1<span class="k">))</span><span class="s2">&#34;</span>

swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace </span><span class="nv">$wp_num</span><span class="s2">&#34;</span>
<span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> slack
<span class="nv">wp_num</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">$((</span>wp_num+1<span class="k">))</span><span class="s2">&#34;</span>

<span class="c"># Open a private browser and a console in the last workspace</span>
swaymsg <span class="nt">-q</span> <span class="s2">&#34;workspace </span><span class="nv">$wp_num</span><span class="s2">&#34;</span>
<span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> <span class="nt">--</span> google-chrome <span class="nt">--incognito</span>
<span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> foot

<span class="c"># Go back to the second workspace for keepassxc</span>
swaymsg <span class="s2">&#34;workspace 2&#34;</span>
<span class="k">${</span><span class="nv">I3_TOOLWAIT</span><span class="k">}</span> keepassxc</code></pre></div></li></ul></div></section>
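<p>The <code>chromium_profile_dir</code> helper in the script above just runs a <code>jq</code> filter that inverts the <code>profile.info_cache</code> map of Chrome’s <code>Local State</code> file; the filter can be checked in isolation against a hand-written sample (the e-mail addresses and directory names below are made up for the demo):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Minimal sample of the relevant part of Chrome's "Local State" JSON
sample='{"profile":{"info_cache":{"Default":{"name":"user@example.com"},"Profile 1":{"name":"other@example.com"}}}}'

# Same filter as in the startup script: map profile names to directory keys
# and return the directory for the given name (or an empty string)
lookup() {
  printf '%s' "$sample" |
    jq -r ".profile.info_cache|to_entries|map({(.value.name): .key})|add|.\"$1\" // \"\""
}

lookup "user@example.com"    # prints "Default"
lookup "missing@example.com" # prints an empty line</code></pre></div>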
<section class="doc-section level-2"><h3 id="_conclusion">Conclusion</h3><p>After using <strong>Sway</strong> for some days I can confirm that it is a good choice for me, but some of the components needed to
make it work the way I want are too new and not available in the <strong>Ubuntu 24.04 LTS</strong> repositories, so for now I’ve gone back
to <strong>Cinnamon</strong> and will try <strong>Sway</strong> again in the future. In the meantime I added more workspaces to my <strong>Cinnamon</strong> setup (they are only
available on the main monitor; the laptop screen is fixed while a big monitor is connected), added some
additional keyboard shortcuts, and installed or updated some applets.</p></section></section>
<section class="doc-section level-1"><h2 id="_text_editor">Text editor</h2><p>When I started using Linux many years ago I used <code>vi</code>/<code>vim</code> and <code>emacs</code> as my text editors (<code>vi</code> for plain text and
<code>emacs</code> for programming and editing HTML/XML), but eventually I settled on <code>vim</code> as my main text editor and I’ve been
using it ever since (well, I moved to <code>neovim</code> some time ago, although I kept my old <code>vim</code> configuration).</p>
<p>To be fair I’m not as expert with <code>vim</code> as I could be, but I’m productive with it, and it has many plugins that make my
life easier on my machines while keeping my ability to edit text and configurations on any system that has a
<code>vi</code>-compatible editor installed.</p>
<p>For work reasons I tried to use <a href="https://code.visualstudio.com/" target="_blank" rel="noopener">Visual Studio Code</a> last year, but I never really
liked it, and almost everything I do with it I can do with <code>neovim</code> (e.g. I even use <code>copilot</code> with it). Besides, I’m a
heavy terminal user (I use <code>tmux</code> locally and via <code>ssh</code>) and I like to be able to use my text editor from my shell
sessions, and <code>code</code> does not work like that.</p>
<p>The only annoying thing about <code>vim</code>/<code>neovim</code> is its configuration (well, the problem is that I have a very old one and
probably should spend some time fixing and updating it), but, as I said, it’s been working well for me for a long time,
so I never really had the motivation to do it.</p>
<p>Anyway, after finishing my desktop tests I noticed that I had had the <a href="https://helix-editor.com/" target="_blank" rel="noopener">Helix</a> editor installed for
some time without ever trying it, so I decided to give it a go and see if it could be a good replacement for <code>neovim</code> in
my environments (the only drawback is that, as it is not <code>vi</code>-compatible, I would need to switch back to <code>vi</code> mode when
working on remote systems, but I guess I could live with that).</p>
<p>I ran the <code>helix</code> tutorial and liked it, so I decided to configure and install the
<a href="https://github.com/helix-editor/helix/wiki/Language-Server-Configurations" target="_blank" rel="noopener">Language Servers</a> I can probably take
advantage of in my daily work on my personal and work machines and see how it goes.</p>
<section class="doc-section level-2"><h3 id="_language_server_installations">Language server installations</h3><p>A lot of manual installation is needed to get the language servers working; what I did on my machines is more or less
the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">#</span><span class="w"> </span>AWK
<span class="gp">sudo npm i -g &#39;awk-language-server@&gt;</span><span class="o">=</span>0.5.2<span class="s1">&#39;
</span><span class="gp">#</span><span class="w"> </span><span class="s1">BASH
</span><span class="go">sudo apt-get install shellcheck shfmt
sudo npm i -g bash-language-server
</span><span class="gp">#</span><span class="w"> </span>C/C++
<span class="go">sudo apt-get install clangd
</span><span class="gp">#</span><span class="w"> </span>CSS, HTML, ESLint, JSON, SCS
<span class="go">sudo npm i -g vscode-langservers-extracted
</span><span class="gp">#</span><span class="w"> </span>Docker
<span class="go">sudo npm install -g dockerfile-language-server-nodejs
</span><span class="gp">#</span><span class="w"> </span>Docker compose
<span class="go">sudo npm install -g @microsoft/compose-language-service
</span><span class="gp">#</span><span class="w"> </span>Helm
<span class="go">app=&#34;helm_ls_linux_amd64&#34;
</span><span class="gp">url=&#34;$</span><span class="o">(</span>
<span class="go">  curl -s https://api.github.com/repos/mrjosh/helm-ls/releases/latest |
</span><span class="gp">    jq -r &#34;.assets[] | select(.name == \&#34;$</span>app<span class="se">\&#34;</span><span class="o">)</span> | .browser_download_url<span class="s2">&#34;
</span><span class="go">)&#34;
</span><span class="gp">curl -L &#34;$</span>url<span class="s2">&#34; --output /tmp/helm_ls
</span><span class="go">sudo install /tmp/helm_ls /usr/local/bin
rm /tmp/helm_ls
</span><span class="gp">#</span><span class="w"> </span>Markdown
<span class="go">app=&#34;marksman-linux-x64&#34;
</span><span class="gp">url=&#34;$</span><span class="o">(</span>
<span class="go">  curl -s https://api.github.com/repos/artempyanykh/marksman/releases/latest |
</span><span class="gp">    jq -r &#34;.assets[] | select(.name == \&#34;$</span>app<span class="se">\&#34;</span><span class="o">)</span> | .browser_download_url<span class="s2">&#34;
</span><span class="go">)&#34;
</span><span class="gp">curl -L &#34;$</span>url<span class="s2">&#34; --output /tmp/marksman
</span><span class="go">sudo install /tmp/marksman /usr/local/bin
rm /tmp/marksman
</span><span class="gp">#</span><span class="w"> </span>Python
<span class="go">sudo npm i -g pyright
</span><span class="gp">#</span><span class="w"> </span>Rust
<span class="go">rustup component add rust-analyzer
</span><span class="gp">#</span><span class="w"> </span>SQL
<span class="go">sudo npm i -g sql-language-server
</span><span class="gp">#</span><span class="w"> </span>Terraform
<span class="go">sudo apt-get install terraform-ls
</span><span class="gp">#</span><span class="w"> </span>TOML
<span class="go">cargo install taplo-cli --locked --features lsp
</span><span class="gp">#</span><span class="w"> </span>YAML
<span class="go">sudo npm install --global yaml-language-server
</span><span class="gp">#</span><span class="w"> </span>JavaScript, TypeScript
<span class="go">sudo npm install -g typescript-language-server typescript
sudo npm install -g --save-dev --save-exact @biomejs/biome</span></code></pre></div></section>
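<p>The <code>helm_ls</code> and <code>marksman</code> installs above repeat the same “download the latest GitHub release asset” pattern, which can be factored into a small helper; this is just a sketch (the <code>asset_url</code> name and the sample JSON are made up, the <code>jq</code> filter is the one used above):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Extract the download URL of a named asset from a GitHub "latest release"
# JSON document read on stdin (same jq filter as the helm-ls/marksman steps)
asset_url() {
  jq -r ".assets[] | select(.name == \"$1\") | .browser_download_url"
}

# Hypothetical sample of the relevant fields of the API response
sample='{"assets":[{"name":"helm_ls_linux_amd64","browser_download_url":"https://example.com/helm_ls"}]}'

printf '%s' "$sample" | asset_url helm_ls_linux_amd64   # prints the sample URL

# Against the real API the call would look like:
#   curl -s https://api.github.com/repos/mrjosh/helm-ls/releases/latest | asset_url helm_ls_linux_amd64</code></pre></div>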
<section class="doc-section level-2"><h3 id="_helix_configuration">Helix configuration</h3><p>The <code>helix</code> configuration is done in a couple of <code>toml</code> files placed in the <code>~/.config/helix</code> directory;
the <code>config.toml</code> file I used is this one:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="toml"><span class="py">theme</span> <span class="p">=</span> <span class="s">&#34;solarized_light&#34;</span>

<span class="nn">[editor]</span>
<span class="py">line-number</span> <span class="p">=</span> <span class="s">&#34;relative&#34;</span>
<span class="py">mouse</span> <span class="p">=</span> <span class="kc">false</span>

<span class="nn">[editor.statusline]</span>
<span class="py">left</span> <span class="p">=</span> <span class="p">[</span><span class="s">&#34;mode&#34;</span><span class="p">,</span> <span class="s">&#34;spinner&#34;</span><span class="p">]</span>
<span class="py">center</span> <span class="p">=</span> <span class="p">[</span><span class="s">&#34;file-name&#34;</span><span class="p">]</span>
<span class="py">right</span> <span class="p">=</span> <span class="p">[</span><span class="s">&#34;diagnostics&#34;</span><span class="p">,</span> <span class="s">&#34;selections&#34;</span><span class="p">,</span> <span class="s">&#34;position&#34;</span><span class="p">,</span> <span class="s">&#34;file-encoding&#34;</span><span class="p">,</span> <span class="s">&#34;file-line-ending&#34;</span><span class="p">,</span> <span class="s">&#34;file-type&#34;</span><span class="p">]</span>
<span class="py">separator</span> <span class="p">=</span> <span class="s">&#34;│&#34;</span>
<span class="py">mode.normal</span> <span class="p">=</span> <span class="s">&#34;NORMAL&#34;</span>
<span class="py">mode.insert</span> <span class="p">=</span> <span class="s">&#34;INSERT&#34;</span>
<span class="py">mode.select</span> <span class="p">=</span> <span class="s">&#34;SELECT&#34;</span>

<span class="nn">[editor.cursor-shape]</span>
<span class="py">insert</span> <span class="p">=</span> <span class="s">&#34;bar&#34;</span>
<span class="py">normal</span> <span class="p">=</span> <span class="s">&#34;block&#34;</span>
<span class="py">select</span> <span class="p">=</span> <span class="s">&#34;underline&#34;</span>

<span class="nn">[editor.file-picker]</span>
<span class="py">hidden</span> <span class="p">=</span> <span class="kc">false</span>

<span class="nn">[editor.whitespace]</span>
<span class="py">render</span> <span class="p">=</span> <span class="s">&#34;all&#34;</span>

<span class="nn">[editor.indent-guides]</span>
<span class="py">render</span> <span class="p">=</span> <span class="kc">true</span>
<span class="py">character</span> <span class="p">=</span> <span class="s">&#34;╎&#34;</span> <span class="c"># Some characters that work well: &#34;▏&#34;, &#34;┆&#34;, &#34;┊&#34;, &#34;⸽&#34;</span>
<span class="py">skip-levels</span> <span class="p">=</span> <span class="mi">1</span></code></pre></div>
<p>And to configure the language servers I used the following <code>languages.toml</code> file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="toml"><span class="nn">[[language]]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">&#34;go&#34;</span>
<span class="py">auto-format</span> <span class="p">=</span> <span class="kc">true</span>
<span class="py">formatter</span> <span class="o">=</span> <span class="p">{</span> <span class="py">command</span> <span class="p">=</span> <span class="s">&#34;goimports&#34;</span> <span class="p">}</span>

<span class="nn">[[language]]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">&#34;javascript&#34;</span>
<span class="py">language-servers</span> <span class="p">=</span> <span class="p">[</span>
  <span class="s">&#34;typescript-language-server&#34;</span><span class="p">,</span> <span class="c"># optional</span>
  <span class="s">&#34;vscode-eslint-language-server&#34;</span><span class="p">,</span>
<span class="p">]</span>

<span class="nn">[language-server.rust-analyzer.config.check]</span>
<span class="py">command</span> <span class="p">=</span> <span class="s">&#34;clippy&#34;</span>

<span class="nn">[language-server.sql-language-server]</span>
<span class="py">command</span> <span class="p">=</span> <span class="s">&#34;sql-language-server&#34;</span>
<span class="py">args</span> <span class="p">=</span> <span class="p">[</span><span class="s">&#34;up&#34;</span><span class="p">,</span> <span class="s">&#34;--method&#34;</span><span class="p">,</span> <span class="s">&#34;stdio&#34;</span><span class="p">]</span>

<span class="nn">[[language]]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">&#34;sql&#34;</span>
<span class="py">language-servers</span> <span class="p">=</span> <span class="p">[</span> <span class="s">&#34;sql-language-server&#34;</span> <span class="p">]</span>

<span class="nn">[[language]]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">&#34;hcl&#34;</span>
<span class="py">language-servers</span> <span class="p">=</span> <span class="p">[</span> <span class="s">&#34;terraform-ls&#34;</span> <span class="p">]</span>
<span class="py">language-id</span> <span class="p">=</span> <span class="s">&#34;terraform&#34;</span>

<span class="nn">[[language]]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">&#34;tfvars&#34;</span>
<span class="py">language-servers</span> <span class="p">=</span> <span class="p">[</span> <span class="s">&#34;terraform-ls&#34;</span> <span class="p">]</span>
<span class="py">language-id</span> <span class="p">=</span> <span class="s">&#34;terraform-vars&#34;</span>

<span class="nn">[language-server.terraform-ls]</span>
<span class="py">command</span> <span class="p">=</span> <span class="s">&#34;terraform-ls&#34;</span>
<span class="py">args</span> <span class="p">=</span> <span class="p">[</span><span class="s">&#34;serve&#34;</span><span class="p">]</span>

<span class="nn">[[language]]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">&#34;toml&#34;</span>
<span class="py">formatter</span> <span class="o">=</span> <span class="p">{</span> <span class="py">command</span> <span class="p">=</span> <span class="s">&#34;taplo&#34;</span><span class="p">,</span> <span class="py">args</span> <span class="p">=</span> <span class="p">[</span><span class="s">&#34;fmt&#34;</span><span class="p">,</span> <span class="s">&#34;-&#34;</span><span class="p">]</span> <span class="p">}</span>

<span class="nn">[[language]]</span>
<span class="py">name</span> <span class="p">=</span> <span class="s">&#34;typescript&#34;</span>
<span class="py">language-servers</span> <span class="p">=</span> <span class="p">[</span>
  <span class="s">&#34;typescript-language-server&#34;</span><span class="p">,</span>
  <span class="s">&#34;vscode-eslint-language-server&#34;</span><span class="p">,</span>
<span class="p">]</span></code></pre></div></section>
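<p>Once the language file is in place, a quick sanity check is to list the languages it configures; this is a minimal sketch using <code>sed</code> on an inline sample (for a proper per-language report <code>helix</code> itself provides <code>hx --health</code>):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># sample_toml stands in for the language configuration file shown above
sample_toml='[[language]]
name = "go"
auto-format = true

[[language]]
name = "sql"'

# Print the value of each top-level name = "..." key, one per line
printf '%s\n' "$sample_toml" | sed -n 's/^name = "\(.*\)"$/\1/p'   # prints "go" and "sql"</code></pre></div>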
<section class="doc-section level-2"><h3 id="_neovim_configuration">Neovim configuration</h3><p>After a little while I realised that I was going to need some time to get used to <code>helix</code>, and that what interested me
most about it was the easy configuration and the language server integrations; as I am already comfortable with <code>neovim</code>
and had just installed the language server tools on my machines, I only needed to
<a href="https://github.com/neovim/nvim-lspconfig" target="_blank" rel="noopener">configure them for neovim</a> to keep using it for a while.</p>
<p>As I said, my configuration is old; to configure <code>neovim</code> I have the following <code>init.vim</code> file in my <code>~/.config/nvim</code>
folder:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="vim"><span class="k">set</span> <span class="nb">runtimepath</span>^<span class="p">=~</span><span class="sr">/.vim runtimepath+=~/</span><span class="p">.</span><span class="k">vim</span>/after
<span class="k">let</span> &amp;<span class="nb">packpath</span><span class="p">=</span>&amp;<span class="nb">runtimepath</span>
<span class="k">source</span> <span class="p">~</span><span class="sr">/.vim/</span>vimrc
<span class="c">&#34; load lua configuration</span>
<span class="k">lua</span> require<span class="p">(</span><span class="s1">&#39;config&#39;</span><span class="p">)</span></code></pre></div>
<p>With that configuration I keep my old <code>vimrc</code> (it is a little bit messy, but it works) and use a <code>lua</code> configuration
file for the language servers and some additional <code>neovim</code> plugins, in the <code>~/.config/nvim/lua/config.lua</code> file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="lua"><span class="c1">-- -----------------------</span>
<span class="c1">-- BEG: LSP Configurations</span>
<span class="c1">-- -----------------------</span>
<span class="c1">-- AWK (awk_ls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">awk_ls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Bash (bashls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">bashls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- C/C++ (clangd)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">clangd</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- CSS (cssls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">cssls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Docker (dockerls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">dockerls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Docker Compose</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">docker_compose_language_service</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Golang (gopls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">gopls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Helm (helm_ls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">helm_ls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Markdown</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">marksman</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Python (pyright)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">pyright</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Rust (rust-analyzer)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">rust_analyzer</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- SQL (sqlls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">sqlls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Terraform (terraformls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">terraformls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- TOML (taplo)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">taplo</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- Typescript (ts_ls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">ts_ls</span><span class="p">.</span><span class="n">setup</span><span class="p">{}</span>
<span class="c1">-- YAML (yamlls)</span>
<span class="nb">require</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">.</span><span class="n">yamlls</span><span class="p">.</span><span class="n">setup</span><span class="p">{</span>
  <span class="n">settings</span> <span class="o">=</span> <span class="p">{</span>
    <span class="n">yaml</span> <span class="o">=</span> <span class="p">{</span>
      <span class="n">customTags</span> <span class="o">=</span> <span class="p">{</span> <span class="s2">&#34;!reference sequence&#34;</span> <span class="p">}</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>
<span class="c1">-- -----------------------</span>
<span class="c1">-- END: LSP Configurations</span>
<span class="c1">-- -----------------------</span>
<span class="c1">-- ---------------------------------</span>
<span class="c1">-- BEG: Autocompletion configuration</span>
<span class="c1">-- ---------------------------------</span>
<span class="c1">-- Ref: https://github.com/neovim/nvim-lspconfig/wiki/Autocompletion</span>
<span class="c1">--</span>
<span class="c1">-- Pre requisites:</span>
<span class="c1">--</span>
<span class="c1">--   # Packer</span>
<span class="c1">--   git clone --depth 1 https://github.com/wbthomason/packer.nvim \</span>
<span class="c1">--      ~/.local/share/nvim/site/pack/packer/start/packer.nvim</span>
<span class="c1">--</span>
<span class="c1">--   # Start nvim and run :PackerSync or :PackerUpdate</span>
<span class="c1">-- ---------------------------------</span>
<span class="kd">local</span> <span class="n">use</span> <span class="o">=</span> <span class="nb">require</span><span class="p">(</span><span class="s1">&#39;packer&#39;</span><span class="p">).</span><span class="n">use</span>
<span class="nb">require</span><span class="p">(</span><span class="s1">&#39;packer&#39;</span><span class="p">).</span><span class="n">startup</span><span class="p">(</span><span class="k">function</span><span class="p">()</span>
  <span class="n">use</span> <span class="s1">&#39;wbthomason/packer.nvim&#39;</span> <span class="c1">-- Packer, useful to avoid removing it with PackerSync / PackerUpdate</span>
  <span class="n">use</span> <span class="s1">&#39;neovim/nvim-lspconfig&#39;</span> <span class="c1">-- Collection of configurations for built-in LSP client</span>
  <span class="n">use</span> <span class="s1">&#39;hrsh7th/nvim-cmp&#39;</span> <span class="c1">-- Autocompletion plugin</span>
  <span class="n">use</span> <span class="s1">&#39;hrsh7th/cmp-nvim-lsp&#39;</span> <span class="c1">-- LSP source for nvim-cmp</span>
  <span class="n">use</span> <span class="s1">&#39;saadparwaiz1/cmp_luasnip&#39;</span> <span class="c1">-- Snippets source for nvim-cmp</span>
  <span class="n">use</span> <span class="s1">&#39;L3MON4D3/LuaSnip&#39;</span> <span class="c1">-- Snippets plugin</span>
<span class="k">end</span><span class="p">)</span>
<span class="c1">-- Add additional capabilities supported by nvim-cmp</span>
<span class="kd">local</span> <span class="n">capabilities</span> <span class="o">=</span> <span class="nb">require</span><span class="p">(</span><span class="s2">&#34;cmp_nvim_lsp&#34;</span><span class="p">).</span><span class="n">default_capabilities</span><span class="p">()</span>
<span class="kd">local</span> <span class="n">lspconfig</span> <span class="o">=</span> <span class="nb">require</span><span class="p">(</span><span class="s1">&#39;lspconfig&#39;</span><span class="p">)</span>
<span class="c1">-- Enable some language servers with the additional completion capabilities offered by nvim-cmp</span>
<span class="kd">local</span> <span class="n">servers</span> <span class="o">=</span> <span class="p">{</span> <span class="s1">&#39;clangd&#39;</span><span class="p">,</span> <span class="s1">&#39;rust_analyzer&#39;</span><span class="p">,</span> <span class="s1">&#39;pyright&#39;</span><span class="p">,</span> <span class="s1">&#39;ts_ls&#39;</span> <span class="p">}</span>
<span class="k">for</span> <span class="n">_</span><span class="p">,</span> <span class="n">lsp</span> <span class="k">in</span> <span class="nb">ipairs</span><span class="p">(</span><span class="n">servers</span><span class="p">)</span> <span class="k">do</span>
  <span class="n">lspconfig</span><span class="p">[</span><span class="n">lsp</span><span class="p">].</span><span class="n">setup</span> <span class="p">{</span>
    <span class="c1">-- on_attach = my_custom_on_attach,</span>
    <span class="n">capabilities</span> <span class="o">=</span> <span class="n">capabilities</span><span class="p">,</span>
  <span class="p">}</span>
<span class="k">end</span>
<span class="c1">-- luasnip setup</span>
<span class="kd">local</span> <span class="n">luasnip</span> <span class="o">=</span> <span class="nb">require</span> <span class="s1">&#39;luasnip&#39;</span>
<span class="c1">-- nvim-cmp setup</span>
<span class="kd">local</span> <span class="n">cmp</span> <span class="o">=</span> <span class="nb">require</span> <span class="s1">&#39;cmp&#39;</span>
<span class="n">cmp</span><span class="p">.</span><span class="n">setup</span> <span class="p">{</span>
  <span class="n">snippet</span> <span class="o">=</span> <span class="p">{</span>
    <span class="n">expand</span> <span class="o">=</span> <span class="k">function</span><span class="p">(</span><span class="n">args</span><span class="p">)</span>
      <span class="n">luasnip</span><span class="p">.</span><span class="n">lsp_expand</span><span class="p">(</span><span class="n">args</span><span class="p">.</span><span class="n">body</span><span class="p">)</span>
    <span class="k">end</span><span class="p">,</span>
  <span class="p">},</span>
  <span class="n">mapping</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">mapping</span><span class="p">.</span><span class="n">preset</span><span class="p">.</span><span class="n">insert</span><span class="p">({</span>
    <span class="p">[</span><span class="s1">&#39;&lt;C-u&gt;&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">mapping</span><span class="p">.</span><span class="n">scroll_docs</span><span class="p">(</span><span class="o">-</span><span class="mi">4</span><span class="p">),</span> <span class="c1">-- Up</span>
    <span class="p">[</span><span class="s1">&#39;&lt;C-d&gt;&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">mapping</span><span class="p">.</span><span class="n">scroll_docs</span><span class="p">(</span><span class="mi">4</span><span class="p">),</span> <span class="c1">-- Down</span>
    <span class="c1">-- C-b (back) C-f (forward) for snippet placeholder navigation.</span>
    <span class="p">[</span><span class="s1">&#39;&lt;C-Space&gt;&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">mapping</span><span class="p">.</span><span class="n">complete</span><span class="p">(),</span>
    <span class="p">[</span><span class="s1">&#39;&lt;CR&gt;&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">mapping</span><span class="p">.</span><span class="n">confirm</span> <span class="p">{</span>
      <span class="n">behavior</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">ConfirmBehavior</span><span class="p">.</span><span class="n">Replace</span><span class="p">,</span>
      <span class="nb">select</span> <span class="o">=</span> <span class="kc">true</span><span class="p">,</span>
    <span class="p">},</span>
    <span class="p">[</span><span class="s1">&#39;&lt;Tab&gt;&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">mapping</span><span class="p">(</span><span class="k">function</span><span class="p">(</span><span class="n">fallback</span><span class="p">)</span>
      <span class="k">if</span> <span class="n">cmp</span><span class="p">.</span><span class="n">visible</span><span class="p">()</span> <span class="k">then</span>
        <span class="n">cmp</span><span class="p">.</span><span class="n">select_next_item</span><span class="p">()</span>
      <span class="k">elseif</span> <span class="n">luasnip</span><span class="p">.</span><span class="n">expand_or_jumpable</span><span class="p">()</span> <span class="k">then</span>
        <span class="n">luasnip</span><span class="p">.</span><span class="n">expand_or_jump</span><span class="p">()</span>
      <span class="k">else</span>
        <span class="n">fallback</span><span class="p">()</span>
      <span class="k">end</span>
    <span class="k">end</span><span class="p">,</span> <span class="p">{</span> <span class="s1">&#39;i&#39;</span><span class="p">,</span> <span class="s1">&#39;s&#39;</span> <span class="p">}),</span>
    <span class="p">[</span><span class="s1">&#39;&lt;S-Tab&gt;&#39;</span><span class="p">]</span> <span class="o">=</span> <span class="n">cmp</span><span class="p">.</span><span class="n">mapping</span><span class="p">(</span><span class="k">function</span><span class="p">(</span><span class="n">fallback</span><span class="p">)</span>
      <span class="k">if</span> <span class="n">cmp</span><span class="p">.</span><span class="n">visible</span><span class="p">()</span> <span class="k">then</span>
        <span class="n">cmp</span><span class="p">.</span><span class="n">select_prev_item</span><span class="p">()</span>
      <span class="k">elseif</span> <span class="n">luasnip</span><span class="p">.</span><span class="n">jumpable</span><span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="k">then</span>
        <span class="n">luasnip</span><span class="p">.</span><span class="n">jump</span><span class="p">(</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span>
      <span class="k">else</span>
        <span class="n">fallback</span><span class="p">()</span>
      <span class="k">end</span>
    <span class="k">end</span><span class="p">,</span> <span class="p">{</span> <span class="s1">&#39;i&#39;</span><span class="p">,</span> <span class="s1">&#39;s&#39;</span> <span class="p">}),</span>
  <span class="p">}),</span>
  <span class="n">sources</span> <span class="o">=</span> <span class="p">{</span>
    <span class="p">{</span> <span class="n">name</span> <span class="o">=</span> <span class="s1">&#39;nvim_lsp&#39;</span> <span class="p">},</span>
    <span class="p">{</span> <span class="n">name</span> <span class="o">=</span> <span class="s1">&#39;luasnip&#39;</span> <span class="p">},</span>
  <span class="p">},</span>
<span class="p">}</span>
<span class="c1">-- ---------------------------------</span>
<span class="c1">-- END: Autocompletion configuration</span>
<span class="c1">-- ---------------------------------</span></code></pre></div></section>
<section class="doc-section level-2"><h3 id="_conclusion_2">Conclusion</h3><p>I guess I’ll keep <code>helix</code> installed and try it again on some of my personal projects to see if I can get used to it,
but for now I’ll stay with <code>neovim</code> as my main text editor and learn the shortcuts to use it with the language servers.</p></section></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>GitLab CI/CD Tips: Automatic Versioning Using semantic-release</title>
      <link>https://blogops.mixinet.net/posts/gitlab-ci/semantic-release/</link>
      <pubDate>Tue, 26 Dec 2023 01:30:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/gitlab-ci/semantic-release/</guid>
      <description>&lt;p&gt;This post describes how I’m using
&lt;a href=&#34;https://semantic-release.gitbook.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;semantic-release&lt;/a&gt; on
&lt;a href=&#34;https://docs.gitlab.com/ee/ci/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;gitlab-ci&lt;/a&gt; to manage versioning automatically
for different kinds of projects following a simple workflow (a &lt;code&gt;develop&lt;/code&gt; branch
where changes are added or merged to test new versions, a temporary
&lt;code&gt;release/#.#.#&lt;/code&gt; branch to generate the release candidate versions and a &lt;code&gt;main&lt;/code&gt; branch
where the final versions are published).&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_what_is_semantic_release&#34;&gt;What is &lt;code&gt;semantic-release&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;It is a &lt;a href=&#34;https://nodejs.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Node.js&lt;/a&gt; application designed to manage project
versioning information on &lt;a href=&#34;https://git-scm.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Git Repositories&lt;/a&gt; using a
&lt;a href=&#34;https://en.wikipedia.org/wiki/Continuous_integration&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Continuous integration&lt;/a&gt;
system (in this post we will use &lt;code&gt;gitlab-ci&lt;/code&gt;).&lt;/p&gt;&lt;/section&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_how_does_it_work&#34;&gt;How does it work&lt;/h2&gt;&lt;p&gt;By default &lt;code&gt;semantic-release&lt;/code&gt; uses &lt;a href=&#34;https://semver.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;semver&lt;/a&gt; for versioning
(release versions use the format &lt;code&gt;MAJOR.MINOR.PATCH&lt;/code&gt;) and commit messages are
parsed to determine the next version number to publish.&lt;/p&gt;
&lt;p&gt;If after analyzing the commits the version number has to be changed, the command
updates the files we tell it to (i.e. the &lt;code&gt;package.json&lt;/code&gt; file for &lt;code&gt;nodejs&lt;/code&gt;
projects and possibly a &lt;code&gt;CHANGELOG.md&lt;/code&gt; file), creates a new commit with the
changed files, creates a tag with the new version and pushes the changes to the
repository.&lt;/p&gt;
&lt;p&gt;When running on a CI/CD system we usually generate the artifacts related to a
release (a package, a container image, etc.) from the tag, as it includes the
right version number and usually has passed all the required tests (it is a good
idea to run the tests again in any case, as someone could create a tag manually
or we could run extra jobs when building the final assets …​ if they fail it is
not a big issue anyway, numbers are cheap and infinite, so we can skip releases
if needed).&lt;/p&gt;
&lt;section class=&#34;doc-section level-2&#34;&gt;&lt;h3 id=&#34;_commit_messages_and_versioning&#34;&gt;Commit messages and versioning&lt;/h3&gt;&lt;p&gt;The commit messages must follow a known format, the default module used to
&lt;a href=&#34;https://github.com/semantic-release/commit-analyzer&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;analyze&lt;/a&gt; them uses the
&lt;a href=&#34;https://github.com/angular/angular.js/blob/master/DEVELOPERS.md#-git-commit-guidelines&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;angular&lt;/a&gt;
git commit guidelines, but I prefer the
&lt;a href=&#34;https://www.conventionalcommits.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;conventional commits&lt;/a&gt; one, mainly because
it’s a lot easier to use when you want to update the &lt;code&gt;MAJOR&lt;/code&gt; version.&lt;/p&gt;
&lt;p&gt;The commit message format used must be:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code&gt;&amp;lt;type&amp;gt;(optional scope): &amp;lt;description&amp;gt;

[optional body]

[optional footer(s)]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>This post describes how I’m using
<a href="https://semantic-release.gitbook.io/" target="_blank" rel="noopener">semantic-release</a> on
<a href="https://docs.gitlab.com/ee/ci/" target="_blank" rel="noopener">gitlab-ci</a> to manage versioning automatically
for different kinds of projects following a simple workflow (a <code>develop</code> branch
where changes are added or merged to test new versions, a temporary
<code>release/#.#.#</code> branch to generate the release candidate versions and a <code>main</code> branch
where the final versions are published).</p>
<section class="doc-section level-1"><h2 id="_what_is_semantic_release">What is <code>semantic-release</code></h2><p>It is a <a href="https://nodejs.org/" target="_blank" rel="noopener">Node.js</a> application designed to manage project
versioning information on <a href="https://git-scm.com/" target="_blank" rel="noopener">Git Repositories</a> using a
<a href="https://en.wikipedia.org/wiki/Continuous_integration" target="_blank" rel="noopener">Continuous integration</a>
system (in this post we will use <code>gitlab-ci</code>).</p></section>
<section class="doc-section level-1"><h2 id="_how_does_it_work">How does it work</h2><p>By default <code>semantic-release</code> uses <a href="https://semver.org/" target="_blank" rel="noopener">semver</a> for versioning
(release versions use the format <code>MAJOR.MINOR.PATCH</code>) and commit messages are
parsed to determine the next version number to publish.</p>
<p>If after analyzing the commits the version number has to be changed, the command
updates the files we tell it to (i.e. the <code>package.json</code> file for <code>nodejs</code>
projects and possibly a <code>CHANGELOG.md</code> file), creates a new commit with the
changed files, creates a tag with the new version and pushes the changes to the
repository.</p>
<p>When running on a CI/CD system we usually generate the artifacts related to a
release (a package, a container image, etc.) from the tag, as it includes the
right version number and usually has passed all the required tests (it is a good
idea to run the tests again in any case, as someone could create a tag manually
or we could run extra jobs when building the final assets …​ if they fail it is
not a big issue anyway, numbers are cheap and infinite, so we can skip releases
if needed).</p>
<section class="doc-section level-2"><h3 id="_commit_messages_and_versioning">Commit messages and versioning</h3><p>The commit messages must follow a known format, the default module used to
<a href="https://github.com/semantic-release/commit-analyzer" target="_blank" rel="noopener">analyze</a> them uses the
<a href="https://github.com/angular/angular.js/blob/master/DEVELOPERS.md#-git-commit-guidelines" target="_blank" rel="noopener">angular</a>
git commit guidelines, but I prefer the
<a href="https://www.conventionalcommits.org/" target="_blank" rel="noopener">conventional commits</a> one, mainly because
it’s a lot easier to use when you want to update the <code>MAJOR</code> version.</p>
<p>The commit message format used must be:</p>
<div class="listing-block"><pre class="rouge highlight"><code>&lt;type&gt;(optional scope): &lt;description&gt;

[optional body]

[optional footer(s)]</code></pre></div>
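<p>As an illustration (the commits below are made up, they just show the syntax),
some valid messages would be:</p>
<div class="listing-block"><pre class="rouge highlight"><code>fix(parser): handle empty input files

feat(cli): add a quiet mode to reduce output

feat(api)!: rename the authentication endpoints

BREAKING CHANGE: clients must call /auth/login instead of /login</code></pre></div>
<p>The first one triggers a <code>PATCH</code> change, the second a <code>MINOR</code> one and the
third a <code>MAJOR</code> one (both the <code>!</code> and the <code>BREAKING CHANGE:</code> footer mark
it as a breaking change, using just one of them is enough).</p>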
<p>The system supports three types of branches: <code>release</code>, <code>maintenance</code> and
<code>pre-release</code>, but for now I’m not using <code>maintenance</code> ones.</p>
<p>The branches I use and their types are:</p>
<div class="ulist"><ul><li><code>main</code> as <em>release branch</em> (final versions are published from there)</li><li><code>develop</code> as <em>pre release branch</em> (used to publish development and testing
versions with the format <code>#.#.#-SNAPSHOT.#</code>)</li><li><code>release/#.#.#</code> as <em>pre release branches</em> (they are created from <code>develop</code>
to publish release candidate versions with the format <code>#.#.#-rc.#</code> and once
they are merged with <code>main</code> they are deleted)</li></ul></div>
<p>On the <em>release branch</em> (<code>main</code>) the version number is updated as follows:</p>
<div class="olist arabic"><ol class="arabic"><li>The <code>MAJOR</code> number is incremented if a commit with a <code>BREAKING CHANGE:</code>
footer or an exclamation (<code>!</code>) after the type/scope is found in the list of
commits found since the last version change (it looks for tags on the same
branch).</li><li>The <code>MINOR</code> number is incremented if the MAJOR number is not going to be
changed and there is a commit with type <code>feat</code> in the commits found since
the last version change.</li><li>The <code>PATCH</code> number is incremented if neither the MAJOR nor the MINOR numbers
are going to be changed and there is a commit with type <code>fix</code> in the
commits found since the last version change.</li></ol></div>
<p>On the <em>pre-release branches</em> (<code>develop</code> and <code>release/#.#.#</code>) the version and
pre-release numbers are always calculated from the last published version
available on the branch (i.e. if we published version <code>1.3.2</code> on <code>main</code> we need
to have the commit with that tag on the <code>develop</code> or <code>release/#.#.#</code> branch
to compute the next version correctly).</p>
<p>The version number is updated as follows:</p>
<div class="olist arabic"><ol class="arabic"><li><p>The <code>MAJOR</code> number is incremented if a commit with a <code>BREAKING CHANGE:</code>
footer or an exclamation (<code>!</code>) after the type/scope is found in the list of
commits found since the last released version.</p><p>In our example it was <code>1.3.2</code> and the version is updated to <code>2.0.0-SNAPSHOT.1</code>
or <code>2.0.0-rc.1</code> depending on the branch.</p></li><li><p>The <code>MINOR</code> number is incremented if the MAJOR number is not going to be
changed and there is a commit with type <code>feat</code> in the commits found since
the last released version.</p><p>In our example the release was <code>1.3.2</code> and the version is updated to
<code>1.4.0-SNAPSHOT.1</code> or <code>1.4.0-rc.1</code> depending on the branch.</p></li><li><p>The <code>PATCH</code> number is incremented if neither the MAJOR nor the MINOR numbers
are going to be changed and there is a commit with type <code>fix</code> in the
commits found since the last version change.</p><p>In our example the release was <code>1.3.2</code> and the version is updated to
<code>1.3.3-SNAPSHOT.1</code> or <code>1.3.3-rc.1</code> depending on the branch.</p></li><li>The pre-release number is incremented if the <code>MAJOR</code>, <code>MINOR</code> and <code>PATCH</code>
numbers are not going to be changed but there is a commit that would
otherwise update the version (i.e. a <code>fix</code> on <code>1.3.3-SNAPSHOT.1</code> will set the
version to <code>1.3.3-SNAPSHOT.2</code>, a <code>fix</code> or <code>feat</code> on <code>1.4.0-rc.1</code> will set the
version to <code>1.4.0-rc.2</code> and so on).</li></ol></div></section></section>
<section class="doc-section level-1"><h2 id="_how_do_we_manage_its_configuration">How do we manage its configuration</h2><p>Although the system is designed to work with <code>nodejs</code> projects, it can be used
with multiple programming languages and project types.</p>
<p>For <code>nodejs</code> projects the usual place to put the configuration is the project’s
<code>package.json</code>, but I prefer to use the <code>.releaserc</code> file instead.</p>
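<p>As a reference, a minimal <code>.releaserc</code> (it is a JSON file) could look like the
following sketch; the branch names and plugin list here are just an example, not
the file generated by my templates:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="json">{
  "branches": [
    "main",
    { "name": "develop", "prerelease": "SNAPSHOT" },
    { "name": "release/1.4.0", "prerelease": "rc" }
  ],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/changelog",
    "@semantic-release/git"
  ]
}</code></pre></div>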
<p>As I use a common set of CI templates, instead of using a <code>.releaserc</code> on each
project I generate it on the fly on the jobs that need it, replacing values
related to the project type and the current branch on a template using the
<a href="https://github.com/krakozaure/tmpl" target="_blank" rel="noopener">tmpl</a> command (lately I use a
<a href="https://github.com/sto/tmpl/tree/v0.4.0-fix" target="_blank" rel="noopener">branch of my own fork</a> while I wait
for some feedback from upstream, as you will see on the <code>Dockerfile</code>).</p></section>
<section class="doc-section level-1"><h2 id="_container_used_to_run_it">Container used to run it</h2><p>As we run the command on a <code>gitlab-ci</code> job we use the image built from the
following <code>Dockerfile</code>:</p>
<figure class="listing-block"><figcaption>Dockerfile</figcaption>
<pre class="rouge highlight"><code data-lang="docker"><span class="c"># Semantic release image</span>
<span class="k">FROM</span><span class="w"> </span><span class="s">golang:alpine</span><span class="w"> </span><span class="k">AS</span><span class="w"> </span><span class="s">tmpl-builder</span>
<span class="c">#RUN go install github.com/krakozaure/tmpl@v0.4.0</span>
<span class="k">RUN </span>go <span class="nb">install </span>github.com/sto/tmpl@v0.4.0-sto.2

<span class="k">FROM</span><span class="s"> node:lts-alpine</span>
<span class="k">COPY</span><span class="s"> --from=tmpl-builder /go/bin/tmpl /usr/local/bin/tmpl</span>
<span class="k">RUN </span>apk update <span class="o">&amp;&amp;</span><span class="se">\
</span>  apk upgrade <span class="o">&amp;&amp;</span><span class="se">\
</span>  apk add curl git jq openssh-keygen yq zip <span class="o">&amp;&amp;</span><span class="se">\
</span>  npm <span class="nb">install</span> <span class="nt">--location</span><span class="o">=</span>global<span class="se">\
</span>    conventional-changelog-conventionalcommits@6.1.0<span class="se">\
</span>    @qiwi/multi-semantic-release@7.0.0<span class="se">\
</span>    semantic-release@21.0.7<span class="se">\
</span>    @semantic-release/changelog@6.0.3<span class="se">\
</span>    semantic-release-export-data@1.0.1<span class="se">\
</span>    @semantic-release/git@10.0.1<span class="se">\
</span>    @semantic-release/gitlab@9.5.1<span class="se">\
</span>    @semantic-release/release-notes-generator@11.0.4<span class="se">\
</span>    semantic-release-replace-plugin@1.2.7<span class="se">\
</span>    semver@7.5.4<span class="se">\
</span>  <span class="o">&amp;&amp;</span><span class="se">\
</span>  <span class="nb">rm</span> <span class="nt">-rf</span> /var/cache/apk/<span class="k">*</span>
<span class="k">CMD</span><span class="s"> [&#34;/bin/sh&#34;]</span></code></pre></figure>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>The versions of some of the components are not the latest ones, I try to review
them from time to time but you know the saying: <em>if it ain’t broken, don’t fix
it</em>.</p></aside>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>The image includes some tools and modules I use on other projects like the
<code>qiwi/multi-semantic-release</code>, a fork of <code>semantic-release</code> that allows
publishing multiple packages from a single repository (<em>monorepo</em>).</p>
<p>I’ll probably write about how I use it on a future post.</p></aside></section>
<section class="doc-section level-1"><h2 id="_how_and_when_is_it_executed">How and when is it executed</h2><p>The job that runs <code>semantic-release</code> is executed when new commits are added
to the <code>develop</code>, <code>release/#.#.#</code> or <code>main</code> branches (basically when something
is merged or pushed) and after all tests have passed (we don’t want to create a
new version that does not compile or does not pass at least the unit tests).</p>
<p>The job is something like the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">semantic_release</span><span class="pi">:</span>
  <span class="na">image</span><span class="pi">:</span> <span class="s">$SEMANTIC_RELEASE_IMAGE</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">$CI_COMMIT_BRANCH</span><span class="nv"> </span><span class="s">=~</span><span class="nv"> </span><span class="s">/^(develop|main|release\/\d+.\d+.\d+)$/&#39;</span>
      <span class="na">when</span><span class="pi">:</span> <span class="s">always</span>
  <span class="na">stage</span><span class="pi">:</span> <span class="s">release</span>
  <span class="na">before_script</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">echo &#34;Loading scripts.sh&#34;</span>
    <span class="pi">-</span> <span class="s">. $ASSETS_DIR/scripts.sh</span>
  <span class="na">script</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">sr_gen_releaserc_json</span>
    <span class="pi">-</span> <span class="s">git_push_setup</span>
    <span class="pi">-</span> <span class="s">semantic-release</span></code></pre></div>
<p>Where the <code>SEMANTIC_RELEASE_IMAGE</code> variable contains the URI of the image built
using the <code>Dockerfile</code> above and the <code>sr_gen_releaserc_json</code> and
<code>git_push_setup</code> are functions defined on the <code>$ASSETS_DIR/scripts.sh</code> file:</p>
<div class="ulist"><ul><li>The <code>sr_gen_releaserc_json</code> function generates the <code>.releaserc.json</code> file
using the <code>tmpl</code> command.</li><li>The <code>git_push_setup</code> function configures <code>git</code> to allow pushing changes to the
repository with the <code>semantic-release</code> command, optionally signing them with a
SSH key.</li></ul></div>
<section class="doc-section level-2"><h3 id="_the_sr_gen_releaserc_json_function">The <code>sr_gen_releaserc_json</code> function</h3><p>The code for the <code>sr_gen_releaserc_json</code> function is the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="bash">sr_gen_releaserc_json<span class="o">()</span>
<span class="o">{</span>
  <span class="c"># Use nodejs as default project_type</span>
  <span class="nv">project_type</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">PROJECT_TYPE</span><span class="k">:-</span><span class="nv">nodejs</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="c"># REGEX to match the rc_branch name</span>
  <span class="nv">rc_branch_regex</span><span class="o">=</span><span class="s1">&#39;^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$&#39;</span>
  <span class="c"># PATHS on the local ASSETS_DIR</span>
  <span class="nv">assets_dir</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">CI_PROJECT_DIR</span><span class="k">}</span><span class="s2">/</span><span class="k">${</span><span class="nv">ASSETS_DIR</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="nv">sr_local_plugin</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">assets_dir</span><span class="k">}</span><span class="s2">/local-plugin.cjs&#34;</span>
  <span class="nv">releaserc_tmpl</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">assets_dir</span><span class="k">}</span><span class="s2">/releaserc.json.tmpl&#34;</span>
  <span class="nv">pipeline_runtime_values_yaml</span><span class="o">=</span><span class="s2">&#34;/tmp/releaserc_values.yaml&#34;</span>
  <span class="nv">pipeline_values_yaml</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">assets_dir</span><span class="k">}</span><span class="s2">/values_</span><span class="k">${</span><span class="nv">project_type</span><span class="k">}</span><span class="s2">_project.yaml&#34;</span>
  <span class="c"># Destination PATH</span>
  <span class="nv">releaserc_json</span><span class="o">=</span><span class="s2">&#34;.releaserc.json&#34;</span>
  <span class="c"># Create an empty pipeline_values_yaml if missing</span>
  <span class="nb">test</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$pipeline_values_yaml</span><span class="s2">&#34;</span> <span class="o">||</span> : <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$pipeline_values_yaml</span><span class="s2">&#34;</span>
  <span class="c"># Create the pipeline_runtime_values_yaml file</span>
  <span class="nb">echo</span> <span class="s2">&#34;branch: </span><span class="k">${</span><span class="nv">CI_COMMIT_BRANCH</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$pipeline_runtime_values_yaml</span><span class="s2">&#34;</span>
  <span class="nb">echo</span> <span class="s2">&#34;gitlab_url: </span><span class="k">${</span><span class="nv">CI_SERVER_URL</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$pipeline_runtime_values_yaml</span><span class="s2">&#34;</span>
  <span class="c"># Add the rc_branch name if we are on an rc_branch</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$CI_COMMIT_BRANCH</span><span class="s2">&#34;</span> | <span class="nb">sed</span> <span class="nt">-ne</span> <span class="s2">&#34;/</span><span class="nv">$rc_branch_regex</span><span class="s2">/{p}&#34;</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;rc_branch: </span><span class="k">${</span><span class="nv">CI_COMMIT_BRANCH</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$pipeline_runtime_values_yaml</span><span class="s2">&#34;</span>
  <span class="k">elif</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME</span><span class="s2">&#34;</span> |
      <span class="nb">sed</span> <span class="nt">-ne</span> <span class="s2">&#34;/</span><span class="nv">$rc_branch_regex</span><span class="s2">/{p}&#34;</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;rc_branch: </span><span class="k">${</span><span class="nv">CI_MERGE_REQUEST_SOURCE_BRANCH_NAME</span><span class="k">}</span><span class="s2">&#34;</span> <span class="se">\</span>
      <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$pipeline_runtime_values_yaml</span><span class="s2">&#34;</span>
  <span class="k">fi
  </span><span class="nb">echo</span> <span class="s2">&#34;sr_local_plugin: </span><span class="k">${</span><span class="nv">sr_local_plugin</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$pipeline_runtime_values_yaml</span><span class="s2">&#34;</span>
  <span class="c"># Create the releaserc_json file</span>
  tmpl <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$pipeline_runtime_values_yaml</span><span class="s2">&#34;</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$pipeline_values_yaml</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="s2">&#34;</span><span class="nv">$releaserc_tmpl</span><span class="s2">&#34;</span> | jq <span class="nb">.</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$releaserc_json</span><span class="s2">&#34;</span>
  <span class="c"># Remove the pipeline_runtime_values_yaml file</span>
  <span class="nb">rm</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$pipeline_runtime_values_yaml</span><span class="s2">&#34;</span>
  <span class="c"># Print the releaserc_json file</span>
  print_file_collapsed <span class="s2">&#34;</span><span class="nv">$releaserc_json</span><span class="s2">&#34;</span>
  <span class="c"># --*-- BEG: NOTE --*--</span>
  <span class="c"># Rename the package.json to ignore it when calling semantic release.</span>
  <span class="c"># The idea is that the local-plugin renames it back on the first step of the</span>
  <span class="c"># semantic-release process.</span>
  <span class="c"># --*-- END: NOTE --*--</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;package.json&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;Renaming &#39;package.json&#39; to &#39;package.json_disabled&#39;&#34;</span>
    <span class="nb">mv</span> <span class="s2">&#34;package.json&#34;</span> <span class="s2">&#34;package.json_disabled&#34;</span>
  <span class="k">fi</span>
<span class="o">}</span></code></pre></div>
<p>Almost all the variables used in the function are defined by GitLab itself except
<code>ASSETS_DIR</code> and <code>PROJECT_TYPE</code>; in the complete pipelines <code>ASSETS_DIR</code> is
defined in a common file included by all the pipelines, and the project type is
defined in the <code>.gitlab-ci.yml</code> file of each project.</p>
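<p>As an illustration, a project could select its type with a variable in its
<code>.gitlab-ci.yml</code> file (a hypothetical excerpt; <code>PROJECT_TYPE</code> is the only key the
function reads here, and <code>ASSETS_DIR</code> would normally come from the common include):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"># Hypothetical excerpt of a project .gitlab-ci.yml
variables:
  # Read by sr_gen_releaserc_json to pick the values_nodejs_project.yaml file
  PROJECT_TYPE: nodejs</code></pre></div>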
<p>If you review the code you will see that the file processed by the <code>tmpl</code>
command is named <code>releaserc.json.tmpl</code>; its contents are shown here:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="json"><span class="p">{</span><span class="w">
  </span><span class="nl">&#34;plugins&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">if</span><span class="w"> </span><span class="err">.sr_local_plugin</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="s2">&#34;{{ .sr_local_plugin }}&#34;</span><span class="p">,</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">end</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="p">[</span><span class="w">
      </span><span class="s2">&#34;@semantic-release/commit-analyzer&#34;</span><span class="p">,</span><span class="w">
      </span><span class="p">{</span><span class="w">
        </span><span class="nl">&#34;preset&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;conventionalcommits&#34;</span><span class="p">,</span><span class="w">
        </span><span class="nl">&#34;releaseRules&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
          </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;breaking&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;major&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
          </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;revert&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;patch&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
          </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;type&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;feat&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;minor&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
          </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;type&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;fix&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;patch&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
          </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;type&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;perf&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;release&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;patch&#34;</span><span class="w"> </span><span class="p">}</span><span class="w">
        </span><span class="p">]</span><span class="w">
      </span><span class="p">}</span><span class="w">
    </span><span class="p">],</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">if</span><span class="w"> </span><span class="err">.replacements</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="p">[</span><span class="w">
      </span><span class="s2">&#34;semantic-release-replace-plugin&#34;</span><span class="p">,</span><span class="w">
      </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;replacements&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{{</span><span class="w"> </span><span class="err">.replacements</span><span class="w"> </span><span class="err">|</span><span class="w"> </span><span class="err">toJson</span><span class="w"> </span><span class="p">}}</span><span class="w"> </span><span class="p">}</span><span class="w">
    </span><span class="p">],</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">end</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="s2">&#34;@semantic-release/release-notes-generator&#34;</span><span class="p">,</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">if</span><span class="w"> </span><span class="err">eq</span><span class="w"> </span><span class="err">.branch</span><span class="w"> </span><span class="s2">&#34;main&#34;</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="p">[</span><span class="w">
      </span><span class="s2">&#34;@semantic-release/changelog&#34;</span><span class="p">,</span><span class="w">
      </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;changelogFile&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;CHANGELOG.md&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;changelogTitle&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;# Changelog&#34;</span><span class="w"> </span><span class="p">}</span><span class="w">
    </span><span class="p">],</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">end</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="p">[</span><span class="w">
      </span><span class="s2">&#34;@semantic-release/git&#34;</span><span class="p">,</span><span class="w">
      </span><span class="p">{</span><span class="w">
        </span><span class="nl">&#34;assets&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{{</span><span class="w"> </span><span class="err">if</span><span class="w"> </span><span class="err">.assets</span><span class="w"> </span><span class="p">}}{{</span><span class="w"> </span><span class="err">.assets</span><span class="w"> </span><span class="err">|</span><span class="w"> </span><span class="err">toJson</span><span class="w"> </span><span class="p">}}{{</span><span class="w"> </span><span class="err">else</span><span class="w"> </span><span class="p">}}[]{{</span><span class="w"> </span><span class="err">end</span><span class="w"> </span><span class="p">}},</span><span class="w">
        </span><span class="nl">&#34;message&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;ci(release): v${nextRelease.version}</span><span class="se">\n\n</span><span class="s2">${nextRelease.notes}&#34;</span><span class="w">
      </span><span class="p">}</span><span class="w">
    </span><span class="p">],</span><span class="w">
    </span><span class="p">[</span><span class="w">
      </span><span class="s2">&#34;@semantic-release/gitlab&#34;</span><span class="p">,</span><span class="w">
      </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;gitlabUrl&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;{{ .gitlab_url }}&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;successComment&#34;</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w"> </span><span class="p">}</span><span class="w">
    </span><span class="p">]</span><span class="w">
  </span><span class="p">],</span><span class="w">
  </span><span class="nl">&#34;branches&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;name&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;develop&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;prerelease&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;SNAPSHOT&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">if</span><span class="w"> </span><span class="err">.rc_branch</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="p">{</span><span class="w"> </span><span class="nl">&#34;name&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;{{ .rc_branch }}&#34;</span><span class="p">,</span><span class="w"> </span><span class="nl">&#34;prerelease&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;rc&#34;</span><span class="w"> </span><span class="p">},</span><span class="w">
    </span><span class="p">{{</span><span class="err">-</span><span class="w"> </span><span class="err">end</span><span class="w"> </span><span class="p">}}</span><span class="w">
    </span><span class="s2">&#34;main&#34;</span><span class="w">
  </span><span class="p">]</span><span class="w">
</span><span class="p">}</span></code></pre></div>
<p>The values used to process the template are defined in a file built on the fly
(<code>releaserc_values.yaml</code>) that includes the following keys and values:</p>
<div class="ulist"><ul><li><code>branch</code>: the name of the current branch.</li><li><code>gitlab_url</code>: the URL of the GitLab server (taken from the
<code>CI_SERVER_URL</code> variable).</li><li><code>rc_branch</code>: the name of the current rc branch. We only set this value when we
are actually processing one, because <code>semantic-release</code> only allows a single branch
to match the <code>rc</code> prefix; if we used a wildcard (e.g. <code>release/*</code>) and users
kept more than one <code>release/#.#.#</code> branch open at the same time, the calls to
<code>semantic-release</code> would fail.</li><li><code>sr_local_plugin</code>: the path to the local plugin we use (shown later).</li></ul></div>
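<p>The rc branch detection relies on the regular expression from
<code>sr_gen_releaserc_json</code>; a quick standalone sketch (not part of the pipeline)
shows which branch names it accepts:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Same regular expression used by sr_gen_releaserc_json (GNU sed BRE)
rc_branch_regex='^release\/[0-9]\+\.[0-9]\+\.[0-9]\+$'
matches_rc_branch() {
  test -n "$(echo "$1" | sed -ne "/$rc_branch_regex/{p}")"
}
matches_rc_branch "release/1.2.3"; echo $?  # 0 (match)
matches_rc_branch "release/1.2"; echo $?    # 1 (no match)
matches_rc_branch "develop"; echo $?        # 1 (no match)</code></pre></div>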
<p>The template also uses a <code>values_${project_type}_project.yaml</code> file that
includes settings specific to the project type; the one for <code>nodejs</code> is as
follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">replacements</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">files</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">package.json&#34;</span>
    <span class="na">from</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="se">\&#34;</span><span class="s">version</span><span class="se">\&#34;</span><span class="s">:</span><span class="nv"> </span><span class="se">\&#34;</span><span class="s">.*</span><span class="se">\&#34;</span><span class="s">&#34;</span>
    <span class="na">to</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="se">\&#34;</span><span class="s">version</span><span class="se">\&#34;</span><span class="s">:</span><span class="nv"> </span><span class="se">\&#34;</span><span class="s">${nextRelease.version}</span><span class="se">\&#34;</span><span class="s">&#34;</span>
<span class="na">assets</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">CHANGELOG.md&#34;</span>
  <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">package.json&#34;</span></code></pre></div>
<p>The <code>replacements</code> section is used to update the <code>version</code> field in the relevant
files of the project (in our case the <code>package.json</code> file), and the <code>assets</code>
section lists the files that will be committed to the repository when the
release is published. Looking at the template you can see that the
<code>CHANGELOG.md</code> file is only updated on the <code>main</code> branch; we do it this way
because updating the file on other branches creates a merge nightmare, and we
are only interested in it for released versions anyway.</p>
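<p>The effect of that replacement can be approximated with a plain <code>sed</code>
substitution (a standalone sketch for illustration; the real replacement is
performed by <code>semantic-release-replace-plugin</code> using the pattern from the
values file):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Approximate the version replacement on a throwaway package.json copy
printf '{\n  "name": "demo",\n  "version": "1.2.3"\n}\n' >/tmp/package.json.demo
# The output now contains "version": "2.0.0"
sed -e 's/"version": ".*"/"version": "2.0.0"/' /tmp/package.json.demo</code></pre></div>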
<p>The local plugin adds code that renames the <code>package.json_disabled</code> file back to
<code>package.json</code> if present, and prints the last and next versions to the logs in
a format that can be easily parsed with <code>sed</code>:</p>
<figure class="listing-block"><figcaption>local-plugin.cjs</figcaption>
<pre class="rouge highlight"><code data-lang="javascript"><span class="c1">// Minimal plugin to:</span>
<span class="c1">// - rename the package.json_disabled file to package.json if present</span>
<span class="c1">// - log the semantic-release last &amp; next versions</span>
<span class="kd">function</span> <span class="nf">verifyConditions</span><span class="p">(</span><span class="nx">pluginConfig</span><span class="p">,</span> <span class="nx">context</span><span class="p">)</span> <span class="p">{</span>
  <span class="kd">var</span> <span class="nx">fs</span> <span class="o">=</span> <span class="nf">require</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">fs</span><span class="dl">&#39;</span><span class="p">);</span>
  <span class="k">if </span><span class="p">(</span><span class="nx">fs</span><span class="p">.</span><span class="nf">existsSync</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">package.json_disabled</span><span class="dl">&#39;</span><span class="p">))</span> <span class="p">{</span>
    <span class="nx">fs</span><span class="p">.</span><span class="nf">renameSync</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">package.json_disabled</span><span class="dl">&#39;</span><span class="p">,</span> <span class="dl">&#39;</span><span class="s1">package.json</span><span class="dl">&#39;</span><span class="p">);</span>
    <span class="nx">context</span><span class="p">.</span><span class="nx">logger</span><span class="p">.</span><span class="nf">log</span><span class="p">(</span><span class="s2">`verifyConditions: renamed &#39;package.json_disabled&#39; to &#39;package.json&#39;`</span><span class="p">);</span>
  <span class="p">}</span>
<span class="p">}</span>
<span class="kd">function</span> <span class="nf">analyzeCommits</span><span class="p">(</span><span class="nx">pluginConfig</span><span class="p">,</span> <span class="nx">context</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">if </span><span class="p">(</span><span class="nx">context</span><span class="p">.</span><span class="nx">lastRelease</span> <span class="o">&amp;&amp;</span> <span class="nx">context</span><span class="p">.</span><span class="nx">lastRelease</span><span class="p">.</span><span class="nx">version</span><span class="p">)</span> <span class="p">{</span>
    <span class="nx">context</span><span class="p">.</span><span class="nx">logger</span><span class="p">.</span><span class="nf">log</span><span class="p">(</span><span class="s2">`analyzeCommits: LAST_VERSION=</span><span class="p">${</span><span class="nx">context</span><span class="p">.</span><span class="nx">lastRelease</span><span class="p">.</span><span class="nx">version</span><span class="p">}</span><span class="s2">`</span><span class="p">);</span>
  <span class="p">}</span>
<span class="p">}</span>
<span class="kd">function</span> <span class="nf">verifyRelease</span><span class="p">(</span><span class="nx">pluginConfig</span><span class="p">,</span> <span class="nx">context</span><span class="p">)</span> <span class="p">{</span>
  <span class="k">if </span><span class="p">(</span><span class="nx">context</span><span class="p">.</span><span class="nx">nextRelease</span> <span class="o">&amp;&amp;</span> <span class="nx">context</span><span class="p">.</span><span class="nx">nextRelease</span><span class="p">.</span><span class="nx">version</span><span class="p">)</span> <span class="p">{</span>
    <span class="nx">context</span><span class="p">.</span><span class="nx">logger</span><span class="p">.</span><span class="nf">log</span><span class="p">(</span><span class="s2">`verifyRelease: NEXT_VERSION=</span><span class="p">${</span><span class="nx">context</span><span class="p">.</span><span class="nx">nextRelease</span><span class="p">.</span><span class="nx">version</span><span class="p">}</span><span class="s2">`</span><span class="p">);</span>
  <span class="p">}</span>
<span class="p">}</span>
<span class="nx">module</span><span class="p">.</span><span class="nx">exports</span> <span class="o">=</span> <span class="p">{</span>
  <span class="nx">verifyConditions</span><span class="p">,</span>
  <span class="nx">analyzeCommits</span><span class="p">,</span>
  <span class="nx">verifyRelease</span>
<span class="p">}</span></code></pre></figure></section>
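<p>Because the plugin logs the versions with fixed <code>KEY=value</code> markers, a later job
can recover them from the captured output with <code>sed</code>; for example (a sketch,
assuming the job log was saved to a file):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Extract the versions logged by the local plugin from a saved job log
printf '%s\n' 'analyzeCommits: LAST_VERSION=1.2.3' \
  'verifyRelease: NEXT_VERSION=1.3.0' >/tmp/sr.log
last_version="$(sed -ne 's/^.*LAST_VERSION=//p' /tmp/sr.log)"
next_version="$(sed -ne 's/^.*NEXT_VERSION=//p' /tmp/sr.log)"
echo "last=$last_version next=$next_version"  # last=1.2.3 next=1.3.0</code></pre></div>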
<section class="doc-section level-2"><h3 id="_the_git_push_setup_function">The <code>git_push_setup</code> function</h3><p>The code for the <code>git_push_setup</code> function is the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="bash">git_push_setup<span class="o">()</span>
<span class="o">{</span>
  <span class="c"># Update global credentials to allow git clone &amp; push for all the group repos</span>
  git config <span class="nt">--global</span> credential.helper store
  <span class="nb">cat</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$HOME</span><span class="s2">/.git-credentials&#34;</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
https://fake-user:</span><span class="k">${</span><span class="nv">GITLAB_REPOSITORY_TOKEN</span><span class="k">}</span><span class="sh">@gitlab.com
</span><span class="no">EOF
</span>  <span class="c"># Define user name, mail and signing key for semantic-release</span>
  <span class="nv">user_name</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$SR_USER_NAME</span><span class="s2">&#34;</span>
  <span class="nv">user_email</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$SR_USER_EMAIL</span><span class="s2">&#34;</span>
  <span class="nv">ssh_signing_key</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$SSH_SIGNING_KEY</span><span class="s2">&#34;</span>
  <span class="c"># Export git user variables</span>
  <span class="nb">export </span><span class="nv">GIT_AUTHOR_NAME</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$user_name</span><span class="s2">&#34;</span>
  <span class="nb">export </span><span class="nv">GIT_AUTHOR_EMAIL</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$user_email</span><span class="s2">&#34;</span>
  <span class="nb">export </span><span class="nv">GIT_COMMITTER_NAME</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$user_name</span><span class="s2">&#34;</span>
  <span class="nb">export </span><span class="nv">GIT_COMMITTER_EMAIL</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$user_email</span><span class="s2">&#34;</span>
  <span class="c"># Sign commits with ssh if there is a SSH_SIGNING_KEY variable</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ssh_signing_key</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;Configuring GIT to sign commits with SSH&#34;</span>
    <span class="nv">ssh_keyfile</span><span class="o">=</span><span class="s2">&#34;/tmp/.ssh-id&#34;</span>
    : <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$ssh_keyfile</span><span class="s2">&#34;</span>
    <span class="nb">chmod </span>0400 <span class="s2">&#34;</span><span class="nv">$ssh_keyfile</span><span class="s2">&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$ssh_signing_key</span><span class="s2">&#34;</span> | <span class="nb">tr</span> <span class="nt">-d</span> <span class="s1">&#39;\r&#39;</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$ssh_keyfile</span><span class="s2">&#34;</span>
    git config gpg.format ssh
    git config user.signingkey <span class="s2">&#34;</span><span class="nv">$ssh_keyfile</span><span class="s2">&#34;</span>
    git config commit.gpgsign <span class="nb">true
  </span><span class="k">fi</span>
<span class="o">}</span></code></pre></div>
<p>The function assumes that the <code>GITLAB_REPOSITORY_TOKEN</code> variable (set in the
CI/CD variables section of the relevant project or group) contains a token with
<code>read_repository</code> and <code>write_repository</code> permissions on all the projects where we
are going to use this function.</p>
<p>The <code>SR_USER_NAME</code> and <code>SR_USER_EMAIL</code> variables can be defined in a common file
or in the CI/CD variables section of the project or group we want to work with.
The script assumes that the optional <code>SSH_SIGNING_KEY</code> is exported as a CI/CD
default value of type variable (that is why the keyfile is created on the fly),
and <code>git</code> is configured to use it when the variable is not empty.</p>
<section class="admonition-block warning" role="doc-notice"><h6 class="block-title label-only"><span class="title-label">Warning: </span></h6><p>Keep in mind that the <code>GITLAB_REPOSITORY_TOKEN</code> and <code>SSH_SIGNING_KEY</code> variables
contain secrets, so it is probably a good idea to make them <code>protected</code> (if you
do that you also have to make the <code>develop</code>, <code>main</code> and <code>release/*</code> branches
<code>protected</code>).</p></section>
<section class="admonition-block warning" role="doc-notice"><h6 class="block-title label-only"><span class="title-label">Warning: </span></h6><p>The <code>semantic-release</code> user has to be able to push to those protected branches
on all the projects. It is a good idea to create a dedicated user and add it as a
<code>MAINTAINER</code> of the relevant projects (maintainers need to be able to push to
the branches) or, if you are using GitLab with a <strong>Premium</strong> license, you can
use the
<a href="https://docs.gitlab.com/ee/api/protected_branches.html#protect-repository-branches" target="_blank" rel="noopener">API</a>
to allow the <code>semantic-release</code> user to push to the protected branches without
allowing it for any other user.</p></section></section>
<section class="doc-section level-2"><h3 id="_the_semantic_release_command">The <code>semantic-release</code> command</h3><p>Once we have the <code>.releaserc.json</code> file and the <code>git</code> configuration ready, we run
the <code>semantic-release</code> command.</p>
<p>If the branch we are working with has one or more commits that will increment
the version, the tool does the following (note that the steps described are the
ones executed with the configuration we have generated):</p>
<div class="olist arabic"><ol class="arabic"><li>It detects the commits that will increment the version and calculates the
next version number.</li><li>Generates the release notes for the version.</li><li>Applies the replacements defined in the configuration (in our example it
updates the <code>version</code> field in the <code>package.json</code> file).</li><li>Updates the <code>CHANGELOG.md</code> file, adding the release notes, if we are going to
publish the file (that is, when we are on the <code>main</code> branch).</li><li>Creates a commit if any of the files listed in the <code>assets</code> key have
changed, using the commit message we have defined with the variables replaced by
their current values.</li><li>Creates a tag with the new version number and the release notes.</li><li>As we are using the <code>gitlab</code> plugin, after tagging it also creates a release
in the project with the tag name and the release notes.</li></ol></div></section></section>
<section class="doc-section level-1"><h2 id="_notes_about_the_git_workflows_and_merges_between_branches">Notes about the <code>git</code> workflows and merges between branches</h2><p>It is very important to remember that <code>semantic-release</code> looks at the commits of
a given branch when calculating the next version to publish, that has two
important implications:</p>
<div class="olist arabic"><ol class="arabic"><li>On <em>pre release</em> branches we need to have the commit that includes the tag
with the released version, if we don’t have it the next version is not
calculated correctly.</li><li>It is a bad idea to squash commits when merging a branch to another one, if
we do that we will lose the information <code>semantic-release</code> needs to calculate
the next version and even if we use the right prefix for the squashed commit
(<code>fix</code>, <code>feat</code>, …​) we miss all the messages that would otherwise go to the
<code>CHANGELOG.md</code> file.</li></ol></div>
<p>To make sure that we have the right commits on the <em>pre release</em> branches we
should merge the <code>main</code> branch changes into the <code>develop</code> one after each release
tag is created; in my pipelines the first job that processes a release tag
creates a branch from the tag and an MR to merge it into <code>develop</code>.</p>
<p>The important thing about that MR is that it must not be squashed; if we do that
the tag commit will probably be lost, so we need to be careful.</p>
<p>To merge the changes directly we can run the following code:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="c"># Set the SR_TAG variable to the tag you want to process</span>
<span class="nv">SR_TAG</span><span class="o">=</span><span class="s2">&#34;v1.3.2&#34;</span>
<span class="c"># Fetch all the changes</span>
git fetch <span class="nt">--all</span> <span class="nt">--prune</span>
<span class="c"># Switch to the main branch</span>
git switch main
<span class="c"># Pull all the changes</span>
git pull
<span class="c"># Switch to the development branch</span>
git switch develop
<span class="c"># Pull all the changes</span>
git pull
<span class="c"># Create followup branch from tag</span>
git switch <span class="nt">-c</span> <span class="s2">&#34;followup/</span><span class="nv">$SR_TAG</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$SR_TAG</span><span class="s2">&#34;</span>
<span class="c"># Change files manually &amp; commit the changed files</span>
git commit <span class="nt">-a</span> <span class="nt">--untracked-files</span><span class="o">=</span>no <span class="nt">-m</span> <span class="s2">&#34;ci(followup): </span><span class="nv">$SR_TAG</span><span class="s2"> to develop&#34;</span>
<span class="c"># Switch to the development branch</span>
git switch develop
<span class="c"># Merge the followup branch into the development one using the --no-ff option</span>
git merge <span class="nt">--no-ff</span> <span class="s2">&#34;followup/</span><span class="nv">$SR_TAG</span><span class="s2">&#34;</span>
<span class="c"># Remove the followup branch</span>
git branch <span class="nt">-d</span> <span class="s2">&#34;followup/</span><span class="nv">$SR_TAG</span><span class="s2">&#34;</span>
<span class="c"># Push the changes</span>
git push</code></pre></div>
<p>If we can’t push directly to <code>develop</code> we can create an MR by pushing the followup
branch after committing the changes, but we have to make sure that we don’t
squash the commits when merging or it will not work as we want.</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>We haven’t discussed the <code>release/#.#.#</code> branches because our assumption is
that they don’t exist after a <em>release</em> is published (the branch is deleted
after merging it to <code>main</code>) and new <em>release candidate</em> branches will be created
from <code>develop</code> and the commits included with the followup will already be
present.</p></aside></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>GitLab CI/CD Tips: Using Rule Templates</title>
      <link>https://blogops.mixinet.net/posts/gitlab-ci/rule_templates/</link>
      <pubDate>Sun, 24 Sep 2023 00:14:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/gitlab-ci/rule_templates/</guid>
      <description>&lt;p&gt;This post describes how to define and use rule templates with &lt;em&gt;semantic names&lt;/em&gt; using &lt;code&gt;extends&lt;/code&gt; or &lt;code&gt;!reference&lt;/code&gt; tags, how
to define manual &lt;em&gt;jobs&lt;/em&gt; using the same templates and how to use &lt;a href=&#34;https://docs.gitlab.com/ee/ci/yaml/inputs.html&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;gitlab-ci
inputs&lt;/a&gt; as macros to give names to regular expressions used by rules.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_basic_rule_templates&#34;&gt;Basic rule templates&lt;/h2&gt;&lt;p&gt;I keep my templates in a &lt;code&gt;rules.yml&lt;/code&gt; file stored in a common repository used from different projects, as I mentioned in
my previous &lt;a href=&#34;../common_ci_repo_with_assets/&#34;&gt;post&lt;/a&gt;, but they can be defined anywhere; the important thing is that the
files that need them include their definition somehow.&lt;/p&gt;
&lt;p&gt;The first version of my &lt;code&gt;rules.yml&lt;/code&gt; file was as follows:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;na&#34;&gt;.rules_common&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;c1&#34;&gt;# Common rules; we include them from others instead of forcing a workflow&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;rules&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;c1&#34;&gt;# Disable branch pipelines while there is an open merge request from it&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;if&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;pi&#34;&gt;&amp;gt;-&lt;/span&gt;
        &lt;span class=&#34;s&#34;&gt;$CI_COMMIT_BRANCH &amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class=&#34;s&#34;&gt;$CI_OPEN_MERGE_REQUESTS &amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class=&#34;s&#34;&gt;$CI_PIPELINE_SOURCE != &amp;#34;merge_request_event&amp;#34;&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;when&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;never&lt;/span&gt;

&lt;span class=&#34;na&#34;&gt;.rules_default&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;c1&#34;&gt;# Default rules, we need to add the when: on_success to make things work&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;rules&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;kt&#34;&gt;!reference&lt;/span&gt; &lt;span class=&#34;pi&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;.rules_common&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;rules&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;]&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;when&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;on_success&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>This post describes how to define and use rule templates with <em>semantic names</em> using <code>extends</code> or <code>!reference</code> tags, how
to define manual <em>jobs</em> using the same templates and how to use <a href="https://docs.gitlab.com/ee/ci/yaml/inputs.html" target="_blank" rel="noopener">gitlab-ci
inputs</a> as macros to give names to regular expressions used by rules.</p>
<section class="doc-section level-1"><h2 id="_basic_rule_templates">Basic rule templates</h2><p>I keep my templates in a <code>rules.yml</code> file stored on a common repository used from different projects as I mentioned on
my previous <a href="../common_ci_repo_with_assets/">post</a>, but they can be defined anywhere, the important thing is that the
files that need them include their definition somehow.</p>
<p>The first version of my <code>rules.yml</code> file was as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">.rules_common</span><span class="pi">:</span>
  <span class="c1"># Common rules; we include them from others instead of forcing a workflow</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="c1"># Disable branch pipelines while there is an open merge request from it</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="pi">&gt;-</span>
        <span class="s">$CI_COMMIT_BRANCH &amp;&amp;</span>
        <span class="s">$CI_OPEN_MERGE_REQUESTS &amp;&amp;</span>
        <span class="s">$CI_PIPELINE_SOURCE != &#34;merge_request_event&#34;</span>
      <span class="na">when</span><span class="pi">:</span> <span class="s">never</span>

<span class="na">.rules_default</span><span class="pi">:</span>
  <span class="c1"># Default rules, we need to add the when: on_success to make things work</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">when</span><span class="pi">:</span> <span class="s">on_success</span></code></pre></div>
<p>The main idea is that <code>.rules_common</code> defines a <code>rules</code> section to disable jobs as we can do on a <code>workflow</code> definition;
in our case the common rules only have <code>if</code> rules that apply to all jobs and are used to disable them. The example includes
one that avoids creating duplicated jobs when we push to a branch that is the source of an open MR, as explained
<a href="https://docs.gitlab.com/ee/ci/yaml/workflow.html#switch-between-branch-pipelines-and-merge-request-pipelines" target="_blank" rel="noopener">here</a>.</p>
<p>To use the rules in a job we have two options: use the <code>extends</code> keyword (we do that when we want to use the rule as is)
or declare a <code>rules</code> section and add a <code>!reference</code> to the template we want to use, as described
<a href="https://docs.gitlab.com/ee/ci/jobs/job_control.html#reuse-rules-in-different-jobs" target="_blank" rel="noopener">here</a> (we do that when we want to add
additional rules to disable a job before evaluating the template conditions).</p>
<p>As an example, with the following definitions both jobs use the same rules:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">job_1</span><span class="pi">:</span>
  <span class="na">extends</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">.rules_default</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>

<span class="na">job_2</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_default</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_manual_jobs_and_rule_templates">Manual jobs and rule templates</h2><p>To make the jobs manual we have two options, create a version of the job that includes <code>when: manual</code> and defines if we
want it to be optional or not (<code>allow_failure: true</code> makes the job optional, if we don’t add that to the rule the job is
blocking) or add the <code>when: manual</code> and the <code>allow_failure</code> value to the job (if we work at the job level the default
value for <code>allow_failure</code> is <code>false</code> for <code>when: manual</code>, so it is optional by default, we have to add an explicit
<code>allow_failure = true</code> it to make it blocking).</p>
<p>The following example shows how we define blocking or optional manual jobs using rules with <code>when</code> conditions:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">.rules_default_manual_blocking</span><span class="pi">:</span>
  <span class="c1"># Default rules for optional manual jobs</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">when</span><span class="pi">:</span> <span class="s">manual</span>
      <span class="c1"># allow_failure: false is implicit</span>

<span class="na">.rules_default_manual_optional</span><span class="pi">:</span>
  <span class="c1"># Default rules for optional manual jobs</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">when</span><span class="pi">:</span> <span class="s">manual</span>
      <span class="na">allow_failure</span><span class="pi">:</span> <span class="kc">true</span>

<span class="na">manual_blocking_job</span><span class="pi">:</span>
  <span class="na">extends</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">.rules_default_manual_blocking</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>

<span class="na">manual_optional_job</span><span class="pi">:</span>
  <span class="na">extends</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">.rules_default_manual_optional</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span></code></pre></div>
<p>The problem here is that we have to create new versions of the same rule template to add the conditions, but we can
avoid that by using the keywords at the job level together with the original rules to get the same effect; the following
definitions create jobs equivalent to the ones defined earlier without creating additional templates:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">manual_blocking_job</span><span class="pi">:</span>
  <span class="na">extends</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">.rules_default</span>
  <span class="na">when</span><span class="pi">:</span> <span class="s">manual</span>
  <span class="na">allow_failure</span><span class="pi">:</span> <span class="kc">false</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>

<span class="na">manual_optional_job</span><span class="pi">:</span>
  <span class="na">extends</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">.rules_default</span>
  <span class="na">when</span><span class="pi">:</span> <span class="s">manual</span>
  <span class="c1"># allow_failure: true is implicit</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span></code></pre></div>
<p>As you can imagine, that is my preferred way of doing it, as it keeps the <code>rules.yml</code> file smaller and lets me see
that a job is manual directly in its definition.</p></section>
<section class="doc-section level-1"><h2 id="_rules_with_allow_failure_changes_exists_needs_or_variables">Rules with <code>allow_failure</code>, <code>changes</code>, <code>exists</code>, <code>needs</code> or <code>variables</code></h2><p>Unluckily for us, for now there is no way to avoid creating additional templates as we did on the <code>when: manual</code> case
when a rule is similar to an existing one but adds <a href="https://docs.gitlab.com/ee/ci/yaml/index.html#ruleschanges">changes</a>,
<a href="https://docs.gitlab.com/ee/ci/yaml/index.html#rulesexists">exists</a>,
<a href="https://docs.gitlab.com/ee/ci/yaml/index.html#rulesneeds">needs</a> or
<a href="https://docs.gitlab.com/ee/ci/yaml/index.html#rulesvariables">variables</a> to it.</p>
<p>So, for now, if a rule needs to add any of those fields we have to copy the original rule and add the keyword section.</p>
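<p>As a sketch of what such a copy looks like, the following hypothetical template reuses the <code>.rules_common</code> rules and adds a <code>changes</code> section to the final condition (the template name and the path pattern are only illustrative):</p>
<div class="listing-block"><pre class="highlight"><code data-lang="yaml"># Hypothetical variant of .rules_default that only runs when docs change
.rules_default_docs:
  rules:
    - !reference [.rules_common, rules]
    - changes:
        - "docs/**/*"
      when: on_success</code></pre></div>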
<p>Some notes, though:</p>
<div class="ulist"><ul><li>we only need to add <code>allow_failure</code> if we want to change its value for a given condition, in other cases we can set
the value at the job level.</li><li>if we are adding <code>changes</code> to the rule it is important to make sure that they are going to be evaluated as explained
<a href="https://docs.gitlab.com/ee/ci/jobs/job_control.html#jobs-or-pipelines-run-unexpectedly-when-using-changes" target="_blank" rel="noopener">here</a>.</li><li>when we add a <code>needs</code> value to a rule for a specific condition and it matches it replaces the job <code>needs</code> section;
when using templates I would use two different job names with different conditions instead of adding a <code>needs</code> on a
single job.</li></ul></div></section>
<section class="doc-section level-1"><h2 id="_defining_rule_templates_with_semantic_names">Defining rule templates with semantic names</h2><p>I started to use rule templates to avoid repetition when defining jobs that needed the same rules and soon I noticed
that giving them names with a semantic meaning they where easier to use and understand (we provide a name that tells us
when we are going to execute the job, while the details of the variables names or values used on the rules are an
implementation detail of the templates).</p>
<p>We are not going to define real jobs on this post, but as an example we are going to define a set of rules that can be
useful if we plan to follow a <a href="https://trunkbaseddevelopment.com/#scaled-trunk-based-development" target="_blank" rel="noopener">scaled trunk based
development</a> workflow, that is, we are going to put the releasable code on the <code>main</code> branch and use
short-lived branches to test and complete changes before pushing things to <code>main</code>.</p>
<p>Using this approach we can define an initial set of rule templates with semantic names:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">.rules_mr_to_main</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == &#39;main&#39;</span>

<span class="na">.rules_mr_or_push_to_main</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == &#39;main&#39;</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="pi">&gt;-</span>
        <span class="s">$CI_COMMIT_BRANCH == &#39;main&#39;</span>
        <span class="s">&amp;&amp;</span>
        <span class="s">$CI_PIPELINE_SOURCE != &#39;merge_request_event&#39;</span>

<span class="na">.rules_push_to_main</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="pi">&gt;-</span>
        <span class="s">$CI_COMMIT_BRANCH == &#39;main&#39;</span>
        <span class="s">&amp;&amp;</span>
        <span class="s">$CI_PIPELINE_SOURCE != &#39;merge_request_event&#39;</span>

<span class="na">.rules_push_to_branch</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="pi">&gt;-</span>
        <span class="s">$CI_COMMIT_BRANCH != &#39;main&#39;</span>
        <span class="s">&amp;&amp;</span>
        <span class="s">$CI_PIPELINE_SOURCE != &#39;merge_request_event&#39;</span>

<span class="na">.rules_push_to_branch_or_mr_to_main</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_push_to_branch</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="pi">&gt;-</span>
         <span class="s">$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME != &#39;main&#39;</span>
         <span class="s">&amp;&amp;</span>
         <span class="s">$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == &#39;main&#39;</span>

<span class="na">.rules_release_tag</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$CI_COMMIT_TAG =~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/</span>

<span class="na">.rules_non_release_tag</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$CI_COMMIT_TAG !~ /^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/</span></code></pre></div>
<p>With those names it is clear when a job is going to be executed and when using the templates on real jobs we can add
additional restrictions and make the execution manual if needed as described earlier.</p></section>
<section class="doc-section level-1"><h2 id="_using_inputs_as_macros">Using <code>inputs</code> as macros</h2><p>On the previous rules we have used a regular expression to identify the <em>release tag</em> format and assumed that the
general <em>branches</em> are the ones with a name different than <code>main</code>; if we want to force a format for those branch names
we can replace the condition <code>!= &#39;main&#39;</code> by a regex comparison (<code>=~</code> if we look for matches, <code>!~</code> if we want to define
valid branch names removing the invalid ones).</p>
<p>When testing the new <a href="https://docs.gitlab.com/ee/ci/yaml/inputs.html" target="_blank" rel="noopener">gitlab-ci inputs</a> my colleague Jorge noticed that
if you keep their <em>default</em> value they basically work as macros.</p>
<p>The variables declared as <code>inputs</code> can’t hold YAML values; in truth their value is always a string that is
replaced by the value assigned to them when including the file (if given) or by their default value, if defined.</p>
<p>If you don’t assign a value to an input variable when including the file that declares it, its occurrences are replaced
by its default value, making them work basically as macros; this is useful for us when working with strings that can’t
be managed as variables, like the regular expressions used inside <code>if</code> conditions.</p>
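<p>For example, assuming the file declares an input named <code>branch_regex</code> with a default value (as we do below), a project including it can override the macro when needed; a hypothetical include could look like this:</p>
<div class="listing-block"><pre class="highlight"><code data-lang="yaml"># Hypothetical include; replace the project path and ref with real values
include:
  - project: "common/gitlab-ci"
    ref: "main"
    file: "/rules.yml"
    inputs:
      branch_regex: '/^(feat|fix)\/.+$/'</code></pre></div>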
<p>With those two ideas we can add the following prefix to the <code>rules.yml</code> file defining inputs for both regular
expressions and replace the rules that can use them with the ones shown here:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">spec</span><span class="pi">:</span>
  <span class="na">inputs</span><span class="pi">:</span>
    <span class="c1"># Regular expression for branches; the prefix matches the type of changes</span>
    <span class="c1"># we plan to work on inside the branch (we use conventional commit types as</span>
    <span class="c1"># the branch prefix)</span>
    <span class="na">branch_regex</span><span class="pi">:</span>
      <span class="na">default</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">/^(build|ci|chore|docs|feat|fix|perf|refactor|style|test)\/.+$/&#39;</span>
    <span class="c1"># Regular expression for tags</span>
    <span class="na">release_tag_regex</span><span class="pi">:</span>
      <span class="na">default</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">/^([0-9a-zA-Z_.-]+-)?v\d+.\d+.\d+$/&#39;</span>
<span class="nn">---</span>
<span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>

<span class="na">.rules_push_to_changes_branch</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="pi">&gt;-</span>
        <span class="s">$CI_COMMIT_BRANCH =~ $[[ inputs.branch_regex ]]</span>
        <span class="s">&amp;&amp;</span>
        <span class="s">$CI_PIPELINE_SOURCE != &#39;merge_request_event&#39;</span>

<span class="na">.rules_push_to_branch_or_mr_to_main</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_push_to_branch</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="pi">&gt;-</span>
         <span class="s">$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ $[[ inputs.branch_regex ]]</span>
         <span class="s">&amp;&amp;</span>
         <span class="s">$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == &#39;main&#39;</span>

<span class="na">.rules_release_tag</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$CI_COMMIT_TAG =~ $[[ inputs.release_tag_regex ]]</span>

<span class="na">.rules_non_release_tag</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_common</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$CI_COMMIT_TAG !~ $[[ inputs.release_tag_regex ]]</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_creating_rules_reusing_existing_ones">Creating rules reusing existing ones</h2><p>I’m going to finish this post with a comment about how I avoid defining extra rule templates in some common cases.</p>
<p>The idea is simple: we can use <code>!reference</code> tags to fine-tune rules when we need to add conditions to disable them,
simply adding conditions with <code>when: never</code> before referencing the template.</p>
<p>As an example, in some projects I’m using different job definitions depending on the <code>DEPLOY_ENVIRONMENT</code> value to make
the job manual or automatic; as we just said, we can define different jobs referencing the same rule, adding a condition
to check if the environment is the one we are interested in:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">deploy_job_auto</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="c1"># Only deploy automatically if the environment is &#39;dev&#39; by skipping this job</span>
    <span class="c1"># for other values of the DEPLOY_ENVIRONMENT variable</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$DEPLOY_ENVIRONMENT != &#34;dev&#34;</span>
      <span class="na">when</span><span class="pi">:</span> <span class="s">never</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_release_tag</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span>

<span class="na">deploy_job_manually</span><span class="pi">:</span>
  <span class="na">rules</span><span class="pi">:</span>
    <span class="c1"># Disable this job if the environment is &#39;dev&#39;</span>
    <span class="pi">-</span> <span class="na">if</span><span class="pi">:</span> <span class="s">$DEPLOY_ENVIRONMENT == &#34;dev&#34;</span>
      <span class="na">when</span><span class="pi">:</span> <span class="s">never</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.rules_release_tag</span><span class="pi">,</span> <span class="nv">rules</span><span class="pi">]</span>
  <span class="na">when</span><span class="pi">:</span> <span class="s">manual</span>
  <span class="c1"># Change this to `false` to make the deployment job blocking</span>
  <span class="na">allow_failure</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="pi">[</span><span class="nv">...</span><span class="pi">]</span></code></pre></div>
<p>If you think about it, the idea of adding negative conditions is what we do with the <code>.rules_common</code> template; we add
conditions to disable the job before evaluating the real rules.</p>
<p>The difference in that case is that we reference them at the beginning because we want those negative conditions on all
jobs, and that is also why we have a <code>.rules_default</code> template with a <code>when: on_success</code> rule for the jobs that only
need to respect the default workflow (we need that last condition to make sure they are executed if the negative rules
don’t match).</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>GitLab CI/CD Tips: Using a Common CI Repository with Assets</title>
      <link>https://blogops.mixinet.net/posts/gitlab-ci/common_ci_repo_with_assets/</link>
      <pubDate>Sun, 17 Sep 2023 00:23:20 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/gitlab-ci/common_ci_repo_with_assets/</guid>
      <description>&lt;p&gt;This post describes how to handle files that are used as &lt;em&gt;assets&lt;/em&gt; by jobs and pipelines defined on a common gitlab-ci
repository when we include those definitions from a different project.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_problem_description&#34;&gt;Problem description&lt;/h2&gt;&lt;p&gt;When a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file &lt;a href=&#34;https://docs.gitlab.com/ee/ci/yaml/includes.html&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;includes&lt;/a&gt; files from a different
repository, its contents are expanded and the resulting code is the same as the one generated when the included files
are local to the repository.&lt;/p&gt;
&lt;p&gt;In fact, even when the remote files include other files everything works right, as they are also expanded (see the
description of how included files are &lt;a href=&#34;https://docs.gitlab.com/ee/ci/yaml/includes.html#merge-method-for-include&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;merged&lt;/a&gt;
for a complete explanation), allowing us to organise the common repository as we want.&lt;/p&gt;
&lt;p&gt;As an example, suppose that we have the following script on the &lt;code&gt;assets/&lt;/code&gt; folder of the common repository:&lt;/p&gt;
&lt;figure class=&#34;listing-block&#34;&gt;&lt;figcaption&gt;&lt;code&gt;dumb.sh&lt;/code&gt;&lt;/figcaption&gt;
&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;shell&#34;&gt;&lt;span class=&#34;c&#34;&gt;#!/bin/sh&lt;/span&gt;
&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;The script arguments are: &amp;#39;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$@&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#39;&amp;#34;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
&lt;p&gt;If we run the following job on the common repository:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;na&#34;&gt;job&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;script&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;$CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>This post describes how to handle files that are used as <em>assets</em> by jobs and pipelines defined on a common gitlab-ci
repository when we include those definitions from a different project.</p>
<section class="doc-section level-1"><h2 id="_problem_description">Problem description</h2><p>When a <code>.giltlab-ci.yml</code> file <a href="https://docs.gitlab.com/ee/ci/yaml/includes.html" target="_blank" rel="noopener">includes</a> files from a different
repository its contents are expanded and the resulting code is the same as the one generated when the included files
are local to the repository.</p>
<p>In fact, even when the remote files include other files everything works right, as they are also expanded (see the
description of how included files are <a href="https://docs.gitlab.com/ee/ci/yaml/includes.html#merge-method-for-include" target="_blank" rel="noopener">merged</a>
for a complete explanation), allowing us to organise the common repository as we want.</p>
<p>As an example, suppose that we have the following script on the <code>assets/</code> folder of the common repository:</p>
<figure class="listing-block"><figcaption><code>dumb.sh</code></figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="nb">echo</span> <span class="s2">&#34;The script arguments are: &#39;</span><span class="nv">$@</span><span class="s2">&#39;&#34;</span></code></pre></figure>
<p>If we run the following job on the common repository:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">job</span><span class="pi">:</span>
  <span class="na">script</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">$CI_PROJECT_DIR/assets/dumb.sh ARG1 ARG2</span></code></pre></div>
<p>the output will be:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">The script arguments are: <span class="s1">&#39;ARG1 ARG2&#39;</span></code></pre></div>
<p>But if we run the same job from a different project that includes the same job definition the output will be different:</p>
<div class="listing-block"><pre class="rouge highlight"><code>/scripts-23-19051/step_script: eval: line 138: d./assets/dumb.sh: not found</code></pre></div>
<p>The problem here is that we include and expand the <code>YAML</code> files, but if a script wants to use other files from the
common repository as an <em>asset</em> (configuration file, shell script, template, etc.), the execution fails if the files are
not available on the project that includes the remote job definition.</p></section>
<section class="doc-section level-1"><h2 id="_solutions">Solutions</h2><p>We can solve the issue using multiple approaches, I’ll describe two of them:</p>
<div class="ulist"><ul><li>Create files using scripts</li><li>Download files from the common repository</li></ul></div>
<section class="doc-section level-2"><h3 id="_create_files_using_scripts">Create files using scripts</h3><p>One way to dodge the issue is to generate the non YAML files from scripts included on the pipelines using
<a href="https://en.wikipedia.org/wiki/Here_document" target="_blank" rel="noopener">HERE documents</a>.</p>
<p>The problem with this approach is that we have to put the content of the files inside a script on a YAML file and if it
uses characters that can be replaced by the shell (remember, we are using HERE documents) we have to escape them (error
prone) or encode the whole file into <code>base64</code> or something similar, making maintenance harder.</p>
<p>As an example, imagine that we want to use the <code>dumb.sh</code> script presented on the previous section and we want to call it
from the same PATH of the main project (on the examples we are using the same folder, in practice we can create a hidden
folder inside the project directory or use a PATH like <code>/tmp/assets-$CI_JOB_ID</code> to leave things outside the project
folder and make sure that there will be no collisions if two jobs are executed on the same place (i.e. when using a ssh
runner).</p>
<p>To create the file we will use <a href="https://docs.gitlab.com/ee/ci/jobs/index.html#hide-jobs" target="_blank" rel="noopener">hidden jobs</a> to write our script
template and <a href="https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#reference-tags" target="_blank" rel="noopener">reference tags</a> to add it to the
scripts when we want to use them.</p>
<p>Here we have a snippet that creates the file with <code>cat</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">.file_scripts</span><span class="pi">:</span>
  <span class="na">create_dumb_sh</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="pi">|</span>
      <span class="s"># Create dumb.sh script</span>
      <span class="s">mkdir -p &#34;${CI_PROJECT_DIR}/assets&#34;</span>
      <span class="s">cat &gt;&#34;${CI_PROJECT_DIR}/assets/dumb.sh&#34; &lt;&lt;EOF</span>
      <span class="s">#!/bin/sh</span>
      <span class="s">echo &#34;The script arguments are: &#39;\$@&#39;&#34;</span>
      <span class="s">EOF</span>
      <span class="s">chmod +x &#34;${CI_PROJECT_DIR}/assets/dumb.sh&#34;</span></code></pre></div>
<p>Note that to make things work we’ve added 6 spaces before the script code and escaped the dollar sign.</p>
<p>To do the same using <code>base64</code> we replace the previous snippet by this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">.file_scripts</span><span class="pi">:</span>
  <span class="na">create_dumb_sh</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="pi">|</span>
      <span class="s"># Create dumb.sh script</span>
      <span class="s">mkdir -p &#34;${CI_PROJECT_DIR}/assets&#34;</span>
      <span class="s">base64 -d &gt;&#34;${CI_PROJECT_DIR}/assets/dumb.sh&#34; &lt;&lt;EOF</span>
      <span class="s">IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK</span>
      <span class="s">EOF</span>
      <span class="s">chmod +x &#34;${CI_PROJECT_DIR}/assets/dumb.sh&#34;</span></code></pre></div>
<p>Again, we have to indent the <code>base64</code> version of the file using 6 spaces (all lines of the <code>base64</code> output have to be
indented) and to make changes we have to decode and re-code the file manually, making it harder to maintain.</p>
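<p>As the payload has to be regenerated by hand whenever the script changes, a quick round-trip like the following can help (a sketch; the file name is illustrative):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Decode the current payload to edit the script
echo 'IyEvYmluL3NoCmVjaG8gIlRoZSBzY3JpcHQgYXJndW1lbnRzIGFyZTogJyRAJyIK' | base64 -d &gt;dumb.sh
# ... edit dumb.sh ...
# Re-encode it and paste the output back into the YAML snippet
base64 dumb.sh</code></pre></div>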
<p>With either version we just need to add a <code>!reference</code> before using the script; if we add the call on the first lines of
the <code>before_script</code> we can use the created file in the <code>before_script</code>, <code>script</code> or <code>after_script</code> sections of the
job without problems:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">job</span><span class="pi">:</span>
  <span class="na">before_script</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.file_scripts</span><span class="pi">,</span> <span class="nv">create_dumb_sh</span><span class="pi">]</span>
  <span class="na">script</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">${CI_PROJECT_DIR}/assets/dumb.sh ARG1 ARG2</span></code></pre></div>
<p>The output of a pipeline that uses this job will be the same as the one shown in the original example:</p>
<div class="listing-block"><pre class="rouge highlight"><code>The script arguments are: &#39;ARG1 ARG2&#39;</code></pre></div></section>
<section class="doc-section level-2"><h3 id="_download_the_files_from_the_common_repository">Download the files from the common repository</h3><p>As we’ve seen the previous solution works but is not ideal as it makes the files harder to read, maintain and use.</p>
<p>An alternative approach is to keep the <em>assets</em> on a directory of the common repository (in our examples we will name it
<code>assets</code>) and prepare a YAML file that declares some variables (i.e. the URL of the templates project and the PATH where
we want to download the files) and defines a script fragment to download the complete folder.</p>
<p>Once we have the YAML file we just need to include it and add a reference to the script fragment at the beginning of the
<code>before_script</code> of the jobs that use files from the <code>assets</code> directory and they will be available when needed.</p>
<p>The following file is an example of the YAML file we just mentioned:</p>
<figure class="listing-block"><figcaption>bootstrap.yml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">variables</span><span class="pi">:</span>
  <span class="na">CI_TMPL_API_V4_URL</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">${CI_API_V4_URL}/projects/common%2Fci-templates&#34;</span>
  <span class="na">CI_TMPL_ARCHIVE_URL</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">${CI_TMPL_API_V4_URL}/repository/archive&#34;</span>
  <span class="na">CI_TMPL_ASSETS_DIR</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">/tmp/assets-${CI_JOB_ID}&#34;</span>

<span class="na">.scripts_common</span><span class="pi">:</span>
  <span class="na">bootstrap_ci_templates</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="pi">|</span>
      <span class="s"># Downloading assets</span>
      <span class="s">echo &#34;Downloading assets&#34;</span>
      <span class="s">mkdir -p &#34;$CI_TMPL_ASSETS_DIR&#34;</span>
      <span class="s">wget -q -O - --header=&#34;PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN&#34; \</span>
        <span class="s">&#34;$CI_TMPL_ARCHIVE_URL?path=assets&amp;sha=${CI_TMPL_REF:-main}&#34; |</span>
        <span class="s">tar --strip-components 2 -C &#34;$ASSETS_DIR&#34; -xzf -</span></code></pre></figure>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>The script fragment that downloads the folder uses <code>wget</code> and <code>tar</code> because both tools are available on <code>alpine</code>
containers and on <code>debian</code> based distributions we need to install a tool to download files from the web anyway (if we
use <code>curl</code> instead of <code>wget</code> we are forced to install the package on <code>alpine</code> images too).</p></aside>
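<p>For reference, the <code>curl</code> equivalent of the download line would look something like this (a sketch using the same variables):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">curl -fsSL --header &#34;PRIVATE-TOKEN: $CI_TMPL_READ_TOKEN&#34; \
  &#34;$CI_TMPL_ARCHIVE_URL?path=assets&amp;sha=${CI_TMPL_REF:-main}&#34; |
  tar --strip-components 2 -C &#34;$CI_TMPL_ASSETS_DIR&#34; -xzf -</code></pre></div>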
<p>The file defines the following variables:</p>
<div class="ulist"><ul><li><code>CI_TMPL_API_V4_URL</code>: URL of the common project, in our case we are using the project <code>ci-templates</code> inside the
<code>common</code> group (note that the slash between the group and the project is escaped, that is needed to reference the
project by name, if we don’t like that approach we can replace the url encoded path by the project id, i.e. we could
use a value like <code>${CI_API_V4_URL}/projects/31</code>)</li><li><code>CI_TMPL_ARCHIVE_URL</code>: Base URL to use the gitlab API to download files from a repository, we will add the arguments
<code>path</code> and <code>sha</code> to select which sub path to download and from which commit, branch or tag (we will explain later why
we use the <code>CI_TMPL_REF</code>, for now just keep in mind that if it is not defined we will download the version of the
files available on the <code>main</code> branch when the job is executed).</li><li><code>CI_TMPL_ASSETS_DIR</code>: Destination of the downloaded files.</li></ul></div>
<p>And uses variables defined in other places:</p>
<div class="ulist"><ul><li><p><code>CI_TMPL_READ_TOKEN</code>: token that includes the <code>read_api</code> scope for the common project, we need it because the
tokens created by the CI/CD pipelines of other projects can’t be used to access the api of the common one.</p><p>We define the variable on the gitlab CI/CD variables section to be able to change it if needed (i.e. if it expires)</p></li><li><p><code>CI_TMPL_REF</code>: branch or tag of the common repo from which to get the files (we need that to make sure we are using
the right version of the files, i.e. when testing we will use a branch and on production pipelines we can use fixed
tags to make sure that the assets don’t change between executions unless we change the reference).</p><p>We will set the value on the <code>.gitlab-ci.yml</code> file of the remote projects and will use the same reference when including
the files to make sure that everything is coherent.</p></li></ul></div>
<p>This is an example YAML file that defines a pipeline with a job that uses the script from the common repository:</p>
<figure class="listing-block"><figcaption><code>pipeline.yml</code></figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">include</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">/bootstrap.yaml</span>
<span class="na">stages</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">test</span>
<span class="na">dumb_job</span><span class="pi">:</span>
  <span class="na">stage</span><span class="pi">:</span> <span class="s">test</span>
  <span class="na">before_script</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="kt">!reference</span> <span class="pi">[</span><span class="nv">.bootstrap_ci_templates</span><span class="pi">,</span> <span class="nv">create_dumb_sh</span><span class="pi">]</span>
  <span class="na">script</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">${CI_TMPL_ASSETS_DIR}/dumb.sh ARG1 ARG2</span></code></pre></figure>
<p>To use it from an external project we will use the following GitLab CI configuration:</p>
<figure class="listing-block"><figcaption><code>.gitlab-ci.yml</code></figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">include</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">project</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">common/ci-templates&#39;</span>
    <span class="na">ref</span><span class="pi">:</span> <span class="nl">&amp;ciTmplRef</span> <span class="s1">&#39;</span><span class="s">main&#39;</span>
    <span class="na">file</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">/pipeline.yml&#39;</span>

<span class="na">variables</span><span class="pi">:</span>
  <span class="na">CI_TMPL_REF</span><span class="pi">:</span> <span class="nv">*ciTmplRef</span></code></pre></figure>
<p>We use a YAML anchor to ensure that we use the same reference when including the file and when assigning the value to the
<code>CI_TMPL_REF</code> variable (as far as I know we have to pass the <code>ref</code> value explicitly to know which reference was used
when including the file; the anchor makes sure that the value is always the same in both places).</p>
<p>The reference we use is quite important for the <em>reproducibility</em> of the jobs; if we don’t use fixed <em>tags</em> or <em>commit
hashes</em> as references, each time a job that downloads the files is executed we can get different versions of them.</p>
<p>For that reason it is not a bad idea to create tags on our common repo and use them as references on the projects or
branches that we want to behave as if their CI/CD configuration was local (if we point to a fixed version of the common
repo, everything is going to work almost the same as having the pipelines directly in our repo).</p>
<p>While developing pipelines, though, using branches as references is a really useful option: it allows us to re-run the
jobs that we want to test and they will download the latest versions of the asset files on the branch, speeding up the
testing process.</p>
<p>However, keep in mind that the trick only works for the <em>asset</em> files; if we change a job or a pipeline on the YAML
files, restarting the job is not enough to test the new version, as the restart uses the same job created with the current
pipeline.</p>
<p>To try the updated jobs we have to create a new pipeline, either with a new action against the repository or by executing the
pipeline manually.</p></section></section>
<section class="doc-section level-1"><h2 id="_conclusion">Conclusion</h2><p>For now I’m using the second solution and as it is working well my guess is that I’ll keep using that approach unless
giltab itself provides a better or simpler way of doing the same thing.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Testing cilium with k3d and kind</title>
      <link>https://blogops.mixinet.net/posts/testing_cilium_with_k3d_and_kind/</link>
      <pubDate>Tue, 18 Jul 2023 19:00:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/testing_cilium_with_k3d_and_kind/</guid>
      <description>&lt;p&gt;This post describes how to deploy &lt;a href=&#34;https://github.com/cilium/cilium&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;cilium&lt;/a&gt; (and
&lt;a href=&#34;https://github.com/cilium/hubble&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;hubble&lt;/a&gt;) using &lt;code&gt;docker&lt;/code&gt; on a Linux system with
&lt;a href=&#34;https://k3d.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;k3d&lt;/a&gt; or &lt;a href=&#34;https://kind.sigs.k8s.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;kind&lt;/a&gt; to test it as
&lt;a href=&#34;https://www.cni.dev/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;CNI&lt;/a&gt; and
&lt;a href=&#34;https://en.wikipedia.org/wiki/Service_mesh&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Service Mesh&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I wrote some scripts to do a local installation and evaluate &lt;code&gt;cilium&lt;/code&gt; to use it
at &lt;a href=&#34;https://intelygenz.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;work&lt;/a&gt; (in fact we are using &lt;code&gt;cilium&lt;/code&gt; on an EKS
cluster now), but I thought it would be a good idea to share my original
scripts in this blog just in case they are useful to somebody, at least for
playing a little with the technology.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_links&#34;&gt;Links&lt;/h2&gt;&lt;p&gt;As there is no point in explaining here all the concepts related to &lt;code&gt;cilium&lt;/code&gt;
I’m providing some links for the reader interested in reading about it:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;What is &lt;a href=&#34;https://www.cni.dev/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;CNI&lt;/a&gt;?&lt;/li&gt;&lt;li&gt;What is
&lt;a href=&#34;https://docs.cilium.io/en/stable/overview/intro/#what-is-cilium&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Cilium&lt;/a&gt;?&lt;/li&gt;&lt;li&gt;What is &lt;a href=&#34;https://ebpf.io/what-is-ebpf/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;eBPF&lt;/a&gt;?&lt;/li&gt;&lt;li&gt;What is
&lt;a href=&#34;https://docs.cilium.io/en/stable/overview/intro/#what-is-hubble&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Hubble&lt;/a&gt;?&lt;/li&gt;&lt;li&gt;Why use &lt;a href=&#34;https://blog.palark.com/why-cilium-for-kubernetes-networking/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Cilium
with Kubernetes&lt;/a&gt;?&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>This post describes how to deploy <a href="https://github.com/cilium/cilium" target="_blank" rel="noopener">cilium</a> (and
<a href="https://github.com/cilium/hubble" target="_blank" rel="noopener">hubble</a>) using <code>docker</code> on a Linux system with
<a href="https://k3d.io/" target="_blank" rel="noopener">k3d</a> or <a href="https://kind.sigs.k8s.io/" target="_blank" rel="noopener">kind</a> to test it as
<a href="https://www.cni.dev/" target="_blank" rel="noopener">CNI</a> and
<a href="https://en.wikipedia.org/wiki/Service_mesh" target="_blank" rel="noopener">Service Mesh</a>.</p>
<p>I wrote some scripts to do a local installation and evaluate <code>cilium</code> to use it
at <a href="https://intelygenz.com/" target="_blank" rel="noopener">work</a> (in fact we are using <code>cilium</code> on an EKS
cluster now), but I thought it would be a good idea to share my original
scripts in this blog just in case they are useful to somebody, at least for
playing a little with the technology.</p>
<section class="doc-section level-1"><h2 id="_links">Links</h2><p>As there is no point on explaining here all the concepts related to <code>cilium</code>
I’m providing some links for the reader interested on reading about it:</p>
<div class="ulist"><ul><li>What is <a href="https://www.cni.dev/" target="_blank" rel="noopener">CNI</a>?</li><li>What is
<a href="https://docs.cilium.io/en/stable/overview/intro/#what-is-cilium" target="_blank" rel="noopener">Cilium</a>?</li><li>What is <a href="https://ebpf.io/what-is-ebpf/" target="_blank" rel="noopener">eBPF</a>?</li><li>What is
<a href="https://docs.cilium.io/en/stable/overview/intro/#what-is-hubble" target="_blank" rel="noopener">Hubble</a>?</li><li>Why use <a href="https://blog.palark.com/why-cilium-for-kubernetes-networking/" target="_blank" rel="noopener">Cilium
with Kubernetes</a>?</li></ul></div>
<p>All the scripts and configuration files discussed in this post are available on
my <a href="https://forgejo.mixinet.net/blogops/cilium-docker/" target="_blank" rel="noopener">cilium-docker</a> git repository.</p></section>
<section class="doc-section level-1"><h2 id="_installation">Installation</h2><p>For each platform we are going to deploy two clusters on the same docker
network; I’ve chosen this model because it allows the containers to see the
addresses managed by <a href="https://metallb.org" target="_blank" rel="noopener">metallb</a> from both clusters (the idea
is to use those addresses for load balancers and treat them as if they were
public).</p>
<p>The installation(s) use <code>cilium</code> as CNI, <code>metallb</code> for BGP (I tested the
<code>cilium</code> options, but I wasn’t able to configure them right) and <code>nginx</code> as the
ingress controller (again, I tried to use <code>cilium</code> but something didn’t work
either).</p>
<p>To be able to use the previous components some default options have been
disabled on <code>k3d</code> and <code>kind</code> and, in the case of <code>k3d</code>, a lot of <code>k3s</code> options
(<code>traefik</code>, <code>servicelb</code>, <code>kubeproxy</code>, <code>network-policy</code>, …​) have also been
disabled to avoid conflicts.</p>
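<p>As an illustration (a sketch only; the repository uses the <code>tmpl/k3d-config.yaml</code> template instead of direct flags), disabling those components on a plain <code>k3d cluster create</code> call would look something like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Hypothetical cluster name; cilium replaces flannel and kube-proxy
k3d cluster create cilium-test \
  --no-lb \
  --k3s-arg '--disable=traefik@server:*' \
  --k3s-arg '--disable=servicelb@server:*' \
  --k3s-arg '--disable-network-policy@server:*' \
  --k3s-arg '--disable-kube-proxy@server:*' \
  --k3s-arg '--flannel-backend=none@server:*'</code></pre></div>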
<p>To use the scripts we need to install <code>cilium</code>, <code>docker</code>, <code>helm</code>, <code>hubble</code>,
<code>k3d</code>, <code>kind</code>, <code>kubectl</code> and <code>tmpl</code> in our system.</p>
<p>After cloning the repository, the <code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//sbin/tools.sh" target="_blank" rel="noopener">sbin/tools.sh</a></code>
script can be used to do that on a <code>linux-amd64</code> system:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>git clone https://forgejo.mixinet.net/blogops/cilium-docker.git
<span class="gp">$</span><span class="w"> </span><span class="nb">cd </span>cilium-docker
<span class="gp">$</span><span class="w"> </span>./sbin/tools.sh apps</code></pre></div>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>For the rest of the post we will assume that our working directory is the root
of the repository checkout.</p></aside>
<p>Once we have the tools, to install everything on <code>k3d</code> (for <code>kind</code> replace
<code>k3d</code> by <code>kind</code>) we can use the
<code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//sbin/cilium-install.sh" target="_blank" rel="noopener">sbin/cilium-install.sh</a></code> script as
follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="c"># Deploy first k3d cluster with cilium &amp; cluster-mesh</span>
<span class="gp">$</span><span class="w"> </span>./sbin/cilium-install.sh k3d 1 full
<span class="go">[...]
</span><span class="gp">$</span><span class="w"> </span><span class="c"># Deploy second k3d cluster with cilium &amp; cluster-mesh</span>
<span class="gp">$</span><span class="w"> </span>./sbin/cilium-install.sh k3d 2 full
<span class="go">[...]
</span><span class="gp">$</span><span class="w"> </span><span class="c"># The 2nd cluster-mesh installation connects the clusters</span></code></pre></div>
<p>If we run the command <code>cilium status</code> after the installation we should get an
output similar to the one seen on the following screenshot:</p>
<div class="image-block"><img src="/images/testing_cilium_with_k3d_and_kind/cilium_status.png" alt="cilium status"/></div>
<p>The installation script uses the following templates:</p>
<div class="ulist"><ul><li><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//tmpl/k3d-config.yaml" target="_blank" rel="noopener">tmpl/k3d-config.yaml</a></code>: configuration to
deploy the <code>k3d</code> cluster.</li><li><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//tmpl/kind-config.yaml" target="_blank" rel="noopener">tmpl/kind-config.yaml</a></code>: configuration to
deploy the <code>kind</code> cluster.</li><li><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//tmpl/metallb-crds.yaml" target="_blank" rel="noopener">tmpl/metallb-crds.yaml</a></code> and
<code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//tmpl/ippols.yaml" target="_blank" rel="noopener">tmpl/ippols.yaml</a></code>: configurations for the
<code>metallb</code> deployment.</li><li><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//tmpl/cilium.yaml" target="_blank" rel="noopener">tmpl/cilium.yaml</a></code>: values to deploy the
cilium using the helm chart.</li></ul></div>
<p>Once we have finished our tests we can remove the installation using the
<code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//sbin/cilium-remove.sh" target="_blank" rel="noopener">sbin/cilium-remove.sh</a></code> script.</p>
<section class="doc-section level-2"><h3 id="_some_notes_about_the_configuration">Some notes about the configuration</h3><div class="ulist"><ul><li><p>As noted on the documentation, the <code>cilium</code> deployment needs to mount the
<code>bpffs</code> on <code>/sys/fs/bpf</code> and <code>cgroupv2</code> on <code>/run/cilium/cgroupv2</code>; that is
done automatically on <code>kind</code>, but fails on <code>k3d</code> because the image does not
include <code>bash</code> (see this <a href="https://github.com/k3d-io/k3d/issues/363">issue</a>).</p><p>To fix it we mount a script on all the <code>k3d</code> containers that is executed each
time they are started (the script is mounted as <code>/bin/k3d-entrypoint-cilium.sh</code>
because the <code>/bin/k3d-entrypoint.sh</code> script executes the scripts that follow
the pattern <code>/bin/k3d-entrypoint-*.sh</code> before launching the <code>k3s</code> daemon).</p>
<p>The source code of the script is available
<a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//bin/k3d-entrypoint-cilium.sh" target="_blank" rel="noopener">here</a>.</p></li><li><p>When testing the multi-cluster deployment with <code>k3d</code> we have found issues
with open files, looks like they are related to <code>inotify</code> (see
<a href="https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files">this
page on the kind documentation</a>); adding the following to the
<code>/etc/sysctl.conf</code> file fixed the issue:</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="ini"><span class="c"># fix inotify issues with docker &amp; k3d
</span><span class="py">fs.inotify.max_user_watches</span> <span class="p">=</span> <span class="s">524288</span>
<span class="py">fs.inotify.max_user_instances</span> <span class="p">=</span> <span class="s">512</span></code></pre></div></li><li>Although the deployment theoretically supports it, we are not using <code>cilium</code>
as the cluster <code>ingress</code> yet (it did not work, so it is no longer enabled)
and we are also ignoring the <code>gateway-api</code> for now.</li><li><p>The documentation uses the <code>cilium</code> cli to do all the installations, but I
noticed that following that route the current version does not work right with
<code>hubble</code> (it messes up the TLS support; there are some notes about the
problems on this <a href="https://github.com/cilium/cilium/issues/20130">cilium
issue</a>), so we are deploying with <code>helm</code> right now.</p><p>The problem with the <code>helm</code> approach is that there is no official documentation
on how to install the <code>cluster mesh</code> with it (there is a request for
documentation <a href="https://github.com/cilium/cilium/issues/19057">here</a>), so we are
using the <code>cilium</code> cli for the cluster mesh for now, and it looks like it does not break the <code>hubble</code>
configuration.</p></li></ul></div></section></section>
<section class="doc-section level-1"><h2 id="_tests">Tests</h2><p>To test <code>cilium</code> we have used some scripts &amp; additional config files that are
available in the <a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//test" target="_blank" rel="noopener">test</a> subdirectory of the repository:</p>
<div class="ulist"><ul><li><p><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//test/cilium-connectivity.sh" target="_blank" rel="noopener">cilium-connectivity.sh</a></code>: a script
that runs the <code>cilium connectivity</code> test for one cluster or in multi-cluster
mode (for mesh testing).</p><p>If we export the variable <code>HUBBLE_PF=true</code> the script executes the command
<code>cilium hubble port-forward</code> before launching the tests.</p></li><li><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//test/http-sw.sh" target="_blank" rel="noopener">http-sw.sh</a></code>: Simple tests for cilium policies
from the <a href="https://docs.cilium.io/en/stable/gettingstarted/demo/" target="_blank" rel="noopener">cilium demo</a>;
the script deploys the Star Wars demo application and allows us to add the
L3/L4 policy or the L3/L4/L7 policy, test the connectivity and view the
policies.</li><li><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//test/ingress-basic.sh" target="_blank" rel="noopener">ingress-basic.sh</a></code>: This test is for
checking the ingress controller; it is prepared to work against <code>cilium</code> and
<code>nginx</code>, but as explained before the use of <code>cilium</code> as an ingress controller
is not working as expected, so for now the idea is to always call it with
<code>nginx</code> as the first argument.</li><li><code><a href="https://forgejo.mixinet.net/blogops/cilium-docker/src/branch/main//test/mesh-test.sh" target="_blank" rel="noopener">mesh-test.sh</a></code>: Tool to deploy a global
service on two clusters, change the service affinity to <code>local</code> or <code>remote</code>,
toggle whether the service is <em>shared</em>, and test how the clusters respond.</li></ul></div>
<section class="doc-section level-2"><h3 id="_running_the_tests">Running the tests</h3><p>The <code>cilium-connectivity.sh</code> script executes the standard <code>cilium</code> tests:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>./test/cilium-connectivity.sh k3d 12
<span class="go">ℹ️  Monitor aggregation detected, will skip some flow validation
steps
✨ [k3d-cilium1] Creating namespace cilium-test for connectivity
check...
✨ [k3d-cilium2] Creating namespace cilium-test for connectivity
check...
[...]

✅ All 33 tests (248 actions) successful, 2 tests skipped,
0 scenarios skipped.</span></code></pre></div>
<p>To test how the cilium policies work use the <code>http-sw.sh</code> script:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">kubectx k3d-cilium2 #</span><span class="w"> </span><span class="o">(</span>just <span class="k">in case</span><span class="p">)</span>
<span class="gp">#</span><span class="w"> </span>Create <span class="nb">test </span>namespace and services
<span class="go">./test/http-sw.sh create
</span><span class="gp">#</span><span class="w"> </span>Test without policies <span class="o">(</span>exhaust-port fails by design<span class="o">)</span>
<span class="go">./test/http-sw.sh test
</span><span class="gp">#</span><span class="w"> </span>Create and view L3/L4 CiliumNetworkPolicy
<span class="go">./test/http-sw.sh policy-l34
</span><span class="gp">#</span><span class="w"> </span>Test policy <span class="o">(</span>no access from xwing, exhaust-port fails<span class="o">)</span>
<span class="go">./test/http-sw.sh test
</span><span class="gp">#</span><span class="w"> </span>Create and view L7 CiliumNetworkPolicy
<span class="go">./test/http-sw.sh policy-l7
</span><span class="gp">#</span><span class="w"> </span>Test policy <span class="o">(</span>no access from xwing, exhaust-port returns 403<span class="o">)</span>
<span class="go">./test/http-sw.sh test
</span><span class="gp">#</span><span class="w"> </span>Delete http-sw <span class="nb">test</span>
<span class="go">./test/http-sw.sh delete</span></code></pre></div>
<p>And to see how the <em>service mesh</em> works use the <code>mesh-test.sh</code> script:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">#</span><span class="w"> </span>Create services on both clusters and <span class="nb">test</span>
<span class="go">./test/mesh-test.sh k3d create
./test/mesh-test.sh k3d test
</span><span class="gp">#</span><span class="w"> </span>Disable service sharing from cluster 1 and <span class="nb">test</span>
<span class="go">./test/mesh-test.sh k3d svc-shared-false
./test/mesh-test.sh k3d test
</span><span class="gp">#</span><span class="w"> </span>Restore sharing, <span class="nb">set local </span>affinity and <span class="nb">test</span>
<span class="go">./test/mesh-test.sh k3d svc-shared-default
./test/mesh-test.sh k3d svc-affinity-local
./test/mesh-test.sh k3d test
</span><span class="gp">#</span><span class="w"> </span>Delete deployment from cluster 1 and <span class="nb">test</span>
<span class="go">./test/mesh-test.sh k3d delete-deployment
./test/mesh-test.sh k3d test
</span><span class="gp">#</span><span class="w"> </span>Delete <span class="nb">test</span>
<span class="go">./test/mesh-test.sh k3d delete</span></code></pre></div></section></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Shared networking for Virtual Machines and Containers</title>
      <link>https://blogops.mixinet.net/posts/shared_networking_for_vms_and_containers/</link>
      <pubDate>Sun, 09 Oct 2022 11:45:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/shared_networking_for_vms_and_containers/</guid>
      <description>&lt;p&gt;This entry explains how I have configured a linux bridge, &lt;code&gt;dnsmasq&lt;/code&gt; and
&lt;code&gt;iptables&lt;/code&gt; to be able to run and communicate different virtualization systems
and containers on laptops running &lt;a href=&#34;https://www.debian.org&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Debian GNU/Linux&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I’ve used different variations of this setup for a long time with
&lt;a href=&#34;https://www.virtualbox.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;VirtualBox&lt;/a&gt; and &lt;a href=&#34;https://www.linux-kvm.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;KVM&lt;/a&gt; for
the &lt;strong&gt;Virtual Machines&lt;/strong&gt; and &lt;a href=&#34;http://linux-vserver.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Linux-VServer&lt;/a&gt;,
&lt;a href=&#34;https://openvz.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;OpenVZ&lt;/a&gt;, &lt;a href=&#34;https://linuxcontainers.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;LXC&lt;/a&gt; and lately
&lt;a href=&#34;https://www.docker.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Docker&lt;/a&gt; or &lt;a href=&#34;https://podman.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Podman&lt;/a&gt; for the
&lt;strong&gt;Containers&lt;/strong&gt;.&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_required_packages&#34;&gt;Required packages&lt;/h2&gt;&lt;p&gt;I’m running Debian Sid with &lt;code&gt;systemd&lt;/code&gt; and &lt;code&gt;network-manager&lt;/code&gt; to configure the
WiFi and Ethernet interfaces, but for the bridge I use &lt;code&gt;bridge-utils&lt;/code&gt; with
&lt;code&gt;ifupdown&lt;/code&gt; (as I said this setup is old, I guess &lt;code&gt;ifupdown2&lt;/code&gt; and &lt;code&gt;ifupdown-ng&lt;/code&gt;
will work too).&lt;/p&gt;
&lt;p&gt;To start and stop the &lt;code&gt;DNS&lt;/code&gt; and &lt;code&gt;DHCP&lt;/code&gt; services and add NAT rules when the
bridge is brought up or down I execute a script that uses:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;&lt;code&gt;ip&lt;/code&gt; from &lt;code&gt;iproute2&lt;/code&gt; to get the network information,&lt;/li&gt;&lt;li&gt;&lt;code&gt;dnsmasq&lt;/code&gt; to provide the &lt;code&gt;DNS&lt;/code&gt; and &lt;code&gt;DHCP&lt;/code&gt; services (currently only the
&lt;code&gt;dnsmasq-base&lt;/code&gt; package is needed and it is recommended by &lt;code&gt;network-manager&lt;/code&gt;,
so it is probably installed),&lt;/li&gt;&lt;li&gt;&lt;code&gt;iptables&lt;/code&gt; to configure NAT (for now &lt;code&gt;docker&lt;/code&gt; kind of forces me to keep using
&lt;code&gt;iptables&lt;/code&gt;, but at some point I’d like to move to &lt;code&gt;nftables&lt;/code&gt;).&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>This entry explains how I have configured a linux bridge, <code>dnsmasq</code> and
<code>iptables</code> to be able to run and communicate different virtualization systems
and containers on laptops running <a href="https://www.debian.org" target="_blank" rel="noopener">Debian GNU/Linux</a>.</p>
<p>I’ve used different variations of this setup for a long time with
<a href="https://www.virtualbox.org/" target="_blank" rel="noopener">VirtualBox</a> and <a href="https://www.linux-kvm.org/" target="_blank" rel="noopener">KVM</a> for
the <strong>Virtual Machines</strong> and <a href="http://linux-vserver.org/" target="_blank" rel="noopener">Linux-VServer</a>,
<a href="https://openvz.org/" target="_blank" rel="noopener">OpenVZ</a>, <a href="https://linuxcontainers.org/" target="_blank" rel="noopener">LXC</a> and lately
<a href="https://www.docker.com/" target="_blank" rel="noopener">Docker</a> or <a href="https://podman.io/" target="_blank" rel="noopener">Podman</a> for the
<strong>Containers</strong>.</p>
<section class="doc-section level-1"><h2 id="_required_packages">Required packages</h2><p>I’m running Debian Sid with <code>systemd</code> and <code>network-manager</code> to configure the
WiFi and Ethernet interfaces, but for the bridge I use <code>bridge-utils</code> with
<code>ifupdown</code> (as I said this setup is old, I guess <code>ifupdown2</code> and <code>ifupdown-ng</code>
will work too).</p>
<p>To start and stop the <code>DNS</code> and <code>DHCP</code> services and add NAT rules when the
bridge is brought up or down I execute a script that uses:</p>
<div class="ulist"><ul><li><code>ip</code> from <code>iproute2</code> to get the network information,</li><li><code>dnsmasq</code> to provide the <code>DNS</code> and <code>DHCP</code> services (currently only the
<code>dnsmasq-base</code> package is needed and it is recommended by <code>network-manager</code>,
so it is probably installed),</li><li><code>iptables</code> to configure NAT (for now <code>docker</code> kind of forces me to keep using
<code>iptables</code>, but at some point I’d like to move to <code>nftables</code>).</li></ul></div>
<p>To make sure you have everything installed you can run the following command:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="go">sudo apt install bridge-utils dnsmasq-base ifupdown iproute2 iptables</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_bridge_configuration">Bridge configuration</h2><p>The bridge configuration for <code>ifupdown</code> is available in the file
<code>/etc/network/interfaces.d/vmbr0</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code># Virtual servers NAT Bridge
auto vmbr0
iface vmbr0 inet static
    address         10.0.4.1
    network         10.0.4.0
    netmask         255.255.255.0
    broadcast       10.0.4.255
    bridge_ports    none
    bridge_maxwait  0
    up              /usr/local/sbin/vmbridge ${IFACE} start nat
    pre-down        /usr/local/sbin/vmbridge ${IFACE} stop nat</code></pre></div>
<section class="admonition-block warning" role="doc-notice"><h6 class="block-title label-only"><span class="title-label">Warning: </span></h6><p>To use a separate file with <code>ifupdown</code> make sure that <code>/etc/network/interfaces</code>
contains the line:</p>
<div class="listing-block"><pre class="rouge highlight"><code>source /etc/network/interfaces.d/*</code></pre></div>
<p>or add its contents to <code>/etc/network/interfaces</code> directly, if you prefer.</p></section>
<p>This configuration creates a bridge with the address <code>10.0.4.1</code> and
assumes that the machines connected to it will use the <code>10.0.4.0/24</code> network;
you can change the network address if you want: as long as you use a private
range that does not collide with the networks used by your Virtual Machines,
all should be OK.</p>
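<p>Once the file is in place the bridge can be brought up or down by hand with the
standard <code>ifupdown</code> commands, which also run the <code>up</code> and <code>pre-down</code> hooks:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console">$ sudo ifup vmbr0
$ sudo ifdown vmbr0</code></pre></div>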
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>All my configurations use IPv4 for now; I was planning to move some things to
IPv6 not so long ago, but the truth is that I have had neither the need nor the
time, and the Spanish Internet providers are not helping either.</p></aside>
<p>The <code>vmbridge</code> script is used to start the <code>dnsmasq</code> server and setup the NAT
rules when the interface is brought up and remove the firewall rules and stop
the <code>dnsmasq</code> server when it is brought down.</p></section>
<section class="doc-section level-1"><h2 id="_the_vmbridge_script">The <code>vmbridge</code> script</h2><p>The <code>vmbridge</code> script launches an instance of <code>dnsmasq</code> that binds to the
<em>bridge</em> interface (<code>vmbr0</code> in our case) and acts as its DNS and DHCP server.</p>
<p>The DNS server reads the <code>/etc/hosts</code> file to publish local DNS names and
forwards all other requests to the <code>dnsmasq</code> server launched by
<code>NetworkManager</code> that is listening on the loopback interface.</p>
<p>As that server already does caching we disable it in ours, with the added
advantage that, if we change networks, new requests go to the new resolvers,
because the DNS server managed by <code>NetworkManager</code> gets restarted and flushes
its cache (this is useful when we connect to a network that has internal DNS
servers configured to do split DNS for internal services; with this model all
requests get the internal address as soon as the DNS server is queried again).</p>
<p>The DHCP server is configured to provide IPs to unknown hosts from a sub-range
of the addresses on the bridge network, and to use fixed IPs when the
<code>/etc/ethers</code> file maps a MAC address to a hostname that also appears in the
<code>/etc/hosts</code> file.</p>
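<p>For example, to always hand the same address to a known machine (the MAC
address and hostname below are illustrative), we would add matching entries to
both files:</p>
<div class="listing-block"><pre class="rouge highlight"><code># /etc/ethers
52:54:00:aa:bb:01 vm1
# /etc/hosts
10.0.4.10 vm1</code></pre></div>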
<p>To make things work with old DHCP clients the script also adds <em>checksums</em> to
the DHCP packets using <code>iptables</code> (when the interface is not linked to a
physical device the kernel does not add checksums, but we can fix that by adding
a rule to the <code>mangle</code> table).</p>
<p>If we want external connectivity we can pass the <code>nat</code> argument and then the
script creates a <code>MASQUERADE</code> rule for the bridge network and enables IP
forwarding.</p>
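<p>As a side note, the script derives the DHCP range from the bridge address using
plain POSIX parameter expansions; extracted on their own (with the address of
our example bridge) the computations look like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="sh">#!/bin/sh
# What get_net stores in NET for our example bridge (address/prefix)
NET="10.0.4.1/24"
MIN_IP_LEASE="192"
MAX_IP_LEASE="223"
# Strip the prefix length to get the bridge address
_addr="${NET%%/*}"
# Replace the last octet to build the DHCP range
_dhcp_range="${_addr%.*}.${MIN_IP_LEASE},${_addr%.*}.${MAX_IP_LEASE}"
echo "$_addr"        # 10.0.4.1
echo "$_dhcp_range"  # 10.0.4.192,10.0.4.223</code></pre></div>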
<p>The script source code is the following:</p>
<figure class="listing-block"><figcaption>/usr/local/sbin/vmbridge</figcaption>
<pre class="rouge highlight"><code data-lang="sh"><span class="c">#!/bin/sh</span>
<span class="nb">set</span> <span class="nt">-e</span>
<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>
<span class="nv">LOCAL_DOMAIN</span><span class="o">=</span><span class="s2">&#34;vmnet&#34;</span>
<span class="nv">MIN_IP_LEASE</span><span class="o">=</span><span class="s2">&#34;192&#34;</span>
<span class="nv">MAX_IP_LEASE</span><span class="o">=</span><span class="s2">&#34;223&#34;</span>
<span class="c"># ---------</span>
<span class="c"># FUNCTIONS</span>
<span class="c"># ---------</span>
get_net<span class="o">()</span> <span class="o">{</span>
  <span class="nv">NET</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>
    ip a <span class="nb">ls</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">BRIDGE</span><span class="k">}</span><span class="s2">&#34;</span> 2&gt;/dev/null | <span class="nb">sed</span> <span class="nt">-ne</span> <span class="s1">&#39;s/^.*inet \(.*\) brd.*$/\1/p&#39;</span>
  <span class="si">)</span><span class="s2">&#34;</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$NET</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="k">return </span>1
<span class="o">}</span>
checksum_fix_start<span class="o">()</span> <span class="o">{</span>
  iptables <span class="nt">-t</span> mangle <span class="nt">-A</span> POSTROUTING <span class="nt">-o</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">BRIDGE</span><span class="k">}</span><span class="s2">&#34;</span> <span class="nt">-p</span> udp <span class="nt">--dport</span> 68 <span class="se">\</span>
    <span class="nt">-j</span> CHECKSUM <span class="nt">--checksum-fill</span> 2&gt;/dev/null <span class="o">||</span> <span class="nb">true</span>
<span class="o">}</span>
checksum_fix_stop<span class="o">()</span> <span class="o">{</span>
  iptables <span class="nt">-t</span> mangle <span class="nt">-D</span> POSTROUTING <span class="nt">-o</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">BRIDGE</span><span class="k">}</span><span class="s2">&#34;</span> <span class="nt">-p</span> udp <span class="nt">--dport</span> 68 <span class="se">\</span>
    <span class="nt">-j</span> CHECKSUM <span class="nt">--checksum-fill</span> 2&gt;/dev/null <span class="o">||</span> <span class="nb">true</span>
<span class="o">}</span>
nat_start<span class="o">()</span> <span class="o">{</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$NAT</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;yes&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="k">return </span>0
  <span class="c"># Configure NAT</span>
  iptables <span class="nt">-t</span> nat <span class="nt">-A</span> POSTROUTING <span class="nt">-s</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">NET</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">NET</span><span class="k">}</span><span class="s2">&#34;</span> <span class="nt">-j</span> MASQUERADE
  <span class="c"># Enable forwarding (just in case)</span>
  <span class="nb">echo </span>1 <span class="o">&gt;</span>/proc/sys/net/ipv4/ip_forward
<span class="o">}</span>
nat_stop<span class="o">()</span> <span class="o">{</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$NAT</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;yes&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="k">return </span>0
  iptables <span class="nt">-t</span> nat <span class="nt">-D</span> POSTROUTING <span class="nt">-s</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">NET</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">NET</span><span class="k">}</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="nt">-j</span> MASQUERADE 2&gt;/dev/null <span class="o">||</span> <span class="nb">true</span>
<span class="o">}</span>
do_start<span class="o">()</span> <span class="o">{</span>
  <span class="c"># Bridge address</span>
  <span class="nv">_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">NET</span><span class="p">%%/*</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="c"># DHCP leases (between .MIN_IP_LEASE and .MAX_IP_LEASE)</span>
  <span class="nv">_dhcp_range</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">_addr</span><span class="p">%.*</span><span class="k">}</span><span class="s2">.</span><span class="k">${</span><span class="nv">MIN_IP_LEASE</span><span class="k">}</span><span class="s2">,</span><span class="k">${</span><span class="nv">_addr</span><span class="p">%.*</span><span class="k">}</span><span class="s2">.</span><span class="k">${</span><span class="nv">MAX_IP_LEASE</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="c"># Bridge mtu</span>
  <span class="nv">_mtu</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>
    ip <span class="nb">link </span>show dev <span class="s2">&#34;</span><span class="k">${</span><span class="nv">BRIDGE</span><span class="k">}</span><span class="s2">&#34;</span> |
      <span class="nb">sed</span> <span class="nt">-n</span> <span class="nt">-e</span> <span class="s1">&#39;/mtu/ { s/^.*mtu \([0-9]\+\).*$/\1/p }&#39;</span>
  <span class="si">)</span><span class="s2">&#34;</span>
  <span class="c"># Compute extra dnsmasq options</span>
  <span class="nv">dnsmasq_extra_opts</span><span class="o">=</span><span class="s2">&#34;&#34;</span>
  <span class="c"># Disable gateway when not using NAT</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$NAT</span><span class="s2">&#34;</span> <span class="o">!=</span> <span class="s2">&#34;yes&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">dnsmasq_extra_opts</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$dnsmasq_extra_opts</span><span class="s2"> --dhcp-option=3&#34;</span>
  <span class="k">fi</span>
  <span class="c"># Adjust MTU size if needed</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">&#34;</span><span class="nv">$_mtu</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$_mtu</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;1500&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">dnsmasq_extra_opts</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$dnsmasq_extra_opts</span><span class="s2"> --dhcp-option=26,</span><span class="nv">$_mtu</span><span class="s2">&#34;</span>
  <span class="k">fi</span>
  <span class="c"># shellcheck disable=SC2086</span>
  dnsmasq <span class="nt">--bind-interfaces</span> <span class="se">\</span>
    <span class="nt">--cache-size</span><span class="o">=</span><span class="s2">&#34;0&#34;</span> <span class="se">\</span>
    <span class="nt">--conf-file</span><span class="o">=</span><span class="s2">&#34;/dev/null&#34;</span> <span class="se">\</span>
    <span class="nt">--dhcp-authoritative</span> <span class="se">\</span>
    <span class="nt">--dhcp-leasefile</span><span class="o">=</span><span class="s2">&#34;/var/lib/misc/dnsmasq.</span><span class="k">${</span><span class="nv">BRIDGE</span><span class="k">}</span><span class="s2">.leases&#34;</span> <span class="se">\</span>
    <span class="nt">--dhcp-no-override</span> <span class="se">\</span>
    <span class="nt">--dhcp-range</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">_dhcp_range</span><span class="k">}</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="nt">--domain</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">LOCAL_DOMAIN</span><span class="k">}</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="nt">--except-interface</span><span class="o">=</span><span class="s2">&#34;lo&#34;</span> <span class="se">\</span>
    <span class="nt">--expand-hosts</span> <span class="se">\</span>
    <span class="nt">--interface</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">BRIDGE</span><span class="k">}</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="nt">--listen-address</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">_addr</span><span class="k">}</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="nt">--no-resolv</span> <span class="se">\</span>
    <span class="nt">--pid-file</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="nt">--read-ethers</span> <span class="se">\</span>
    <span class="nt">--server</span><span class="o">=</span><span class="s2">&#34;127.0.0.1&#34;</span> <span class="se">\</span>
    <span class="nv">$dnsmasq_extra_opts</span>
  checksum_fix_start
  nat_start
<span class="o">}</span>
do_stop<span class="o">()</span> <span class="o">{</span>
  nat_stop
  checksum_fix_stop
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">kill</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">cat</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">true
    rm</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="k">fi</span>
<span class="o">}</span>
do_status<span class="o">()</span> <span class="o">{</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">kill</span> <span class="nt">-HUP</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">cat</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;dnsmasq RUNNING&#34;</span>
  <span class="k">else
    </span><span class="nb">echo</span> <span class="s2">&#34;dnsmasq NOT running&#34;</span>
  <span class="k">fi</span>
<span class="o">}</span>
do_reload<span class="o">()</span> <span class="o">{</span>
  <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">kill</span> <span class="nt">-HUP</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">cat</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">PIDF</span><span class="k">}</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="o">}</span>
usage<span class="o">()</span> <span class="o">{</span>
  <span class="nb">echo</span> <span class="s2">&#34;Uso: </span><span class="nv">$0</span><span class="s2"> BRIDGE (start|stop [nat])|status|reload&#34;</span>
  <span class="nb">exit </span>1
<span class="o">}</span>
<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>
<span class="o">[</span> <span class="s2">&#34;$#&#34;</span> <span class="nt">-ge</span> <span class="s2">&#34;2&#34;</span> <span class="o">]</span> <span class="o">||</span> usage
<span class="nv">BRIDGE</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span>
<span class="nv">OPTION</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$2</span><span class="s2">&#34;</span>
<span class="nb">shift </span>2
<span class="nv">NAT</span><span class="o">=</span><span class="s2">&#34;no&#34;</span>
<span class="k">for </span>arg <span class="k">in</span> <span class="s2">&#34;</span><span class="nv">$@</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">do
  case</span> <span class="s2">&#34;</span><span class="nv">$arg</span><span class="s2">&#34;</span> <span class="k">in
  </span>nat<span class="p">)</span> <span class="nv">NAT</span><span class="o">=</span><span class="s2">&#34;yes&#34;</span> <span class="p">;;</span>
  <span class="k">*</span><span class="p">)</span> <span class="nb">echo</span> <span class="s2">&#34;Unknown arg &#39;</span><span class="nv">$arg</span><span class="s2">&#39;&#34;</span> <span class="o">&amp;&amp;</span> <span class="nb">exit </span>1 <span class="p">;;</span>
  <span class="k">esac</span>
<span class="k">done
</span><span class="nv">PIDF</span><span class="o">=</span><span class="s2">&#34;/var/run/vmbridge-</span><span class="k">${</span><span class="nv">BRIDGE</span><span class="k">}</span><span class="s2">-dnsmasq.pid&#34;</span>
<span class="k">case</span> <span class="s2">&#34;</span><span class="nv">$OPTION</span><span class="s2">&#34;</span> <span class="k">in
</span>start<span class="p">)</span> get_net <span class="o">&amp;&amp;</span> do_start <span class="p">;;</span>
stop<span class="p">)</span> get_net <span class="o">&amp;&amp;</span> do_stop <span class="p">;;</span>
status<span class="p">)</span> do_status <span class="p">;;</span>
reload<span class="p">)</span> get_net <span class="o">&amp;&amp;</span> do_reload <span class="p">;;</span>
<span class="k">*</span><span class="p">)</span> <span class="nb">echo</span> <span class="s2">&#34;Unknown command &#39;</span><span class="nv">$OPTION</span><span class="s2">&#39;&#34;</span> <span class="o">&amp;&amp;</span> <span class="nb">exit </span>1 <span class="p">;;</span>
<span class="k">esac</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></figure></section>
<section class="doc-section level-1"><h2 id="_networkmanager_configuration">NetworkManager Configuration</h2><p>The default <code>/etc/NetworkManager/NetworkManager.conf</code> file has the following
contents:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="ini"><span class="nn">[main]</span>
<span class="py">plugins</span><span class="p">=</span><span class="s">ifupdown,keyfile</span>

<span class="nn">[ifupdown]</span>
<span class="py">managed</span><span class="p">=</span><span class="s">false</span></code></pre></div>
<p>This means that it will leave interfaces managed by <code>ifupdown</code> alone and, by
default, will send the connection DNS configuration to <code>systemd-resolved</code> if it
is installed.</p>
<p>As we want to use <code>dnsmasq</code> for DNS resolution, but we don’t want
<code>NetworkManager</code> to modify our <code>/etc/resolv.conf</code>, we are going to add the
following file (<code>/etc/NetworkManager/conf.d/dnsmasq.conf</code>) to our system:</p>
<figure class="listing-block"><figcaption>/etc/NetworkManager/conf.d/dnsmasq.conf</figcaption>
<pre class="rouge highlight"><code data-lang="ini"><span class="nn">[main]</span>
<span class="py">dns</span><span class="p">=</span><span class="s">dnsmasq</span>
<span class="py">rc-manager</span><span class="p">=</span><span class="s">unmanaged</span></code></pre></figure>
<p>and restart the <code>NetworkManager</code> service:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">sudo </span>systemctl restart NetworkManager.service</code></pre></div>
<p>From now on <code>NetworkManager</code> will start a <code>dnsmasq</code> service that listens on
<code>127.0.0.1:53</code> and forwards queries to the servers provided by the DHCP servers
we connect to, but will not touch our <code>/etc/resolv.conf</code> file.</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>We are going to use the <code>dnsmasq</code> service managed by <code>NetworkManager</code> because
it is updated automatically, consumes very little memory and avoids the need for
extra tricks to make the server that we will be using for DNS &amp; DHCP notice
external DNS configuration changes.</p></aside></section>
<section class="doc-section level-1"><h2 id="_configuring_systemd_resolved">Configuring <code>systemd-resolved</code></h2><p>If we start using our own name server but our system has <code>systemd-resolved</code>
installed, we will no longer need or use its DNS stub; programs that used it
will now query our <code>dnsmasq</code> server directly, but we keep <code>systemd-resolved</code>
running for the host programs that use its native API or access it through
<code>/etc/nsswitch.conf</code> (when <code>libnss-resolve</code> is installed).</p>
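<p>For reference, on a Debian system with <code>libnss-resolve</code> installed the <code>hosts</code> line of <code>/etc/nsswitch.conf</code> usually looks something like the following (the exact entries may vary between releases):</p>

```
hosts: files resolve [!UNAVAIL=return] dns
```

With that configuration name lookups try local files first, then the <code>systemd-resolved</code> NSS module, and only fall back to plain DNS if <code>systemd-resolved</code> is unavailable.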
<p>To disable the stub we add a <code>/etc/systemd/resolved.conf.d/disable-stub.conf</code>
file to our machine with the following content:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="ini"><span class="c"># Disable the DNS Stub Listener, we use our own dnsmasq
</span><span class="nn">[Resolve]</span>
<span class="py">DNSStubListener</span><span class="p">=</span><span class="s">no</span></code></pre></div>
<p>and restart the <code>systemd-resolved</code> to make sure that the stub is stopped:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">sudo </span>systemctl restart systemd-resolved.service</code></pre></div></section>
<section class="doc-section level-1"><h2 id="_adjusting_etcresolv_conf">Adjusting <code>/etc/resolv.conf</code></h2><p>First we remove the existing <code>/etc/resolv.conf</code> file (it does not matter if it
is a link or a regular file) and then create a new one that contains at least
the following line (we can add a <code>search</code> line if it is useful for us):</p>
<div class="listing-block"><pre class="rouge highlight"><code>nameserver 10.0.4.1</code></pre></div>
<p>From now on the <code>dnsmasq</code> server launched when we bring up the
<code>vmbr0</code> bridge will be used by multiple systems:</p>
<div class="ulist"><ul><li>as our main DNS server from the host (if we use the standard
<code>/etc/nsswitch.conf</code> and <code>libnss-resolve</code> is installed it is queried first,
and <code>systemd-resolved</code> also uses it as a forwarder by default if needed),</li><li>as the DNS server of the Virtual Machines or containers that use <code>DHCP</code> for
network configuration and attach their virtual interfaces to our bridge,</li><li>as the DNS server of docker containers that get the DNS information from
<code>/etc/resolv.conf</code> (note that containers that don’t use the host network tend
to fail when that file contains loopback addresses, as inside the running
containers those addresses are not linked to the loopback device of the
host).</li></ul></div></section>
<section class="doc-section level-1"><h2 id="_testing">Testing</h2><p>After all the configuration files and scripts are in place we just need to
bring up the bridge interface and check that everything works:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="c"># Bring interface up</span>
<span class="gp">$</span><span class="w"> </span><span class="nb">sudo </span>ifup vmbr0
<span class="gp">$</span><span class="w"> </span><span class="c"># Check that it is available</span>
<span class="gp">$</span><span class="w"> </span>ip a <span class="nb">ls </span>dev vmbr0
<span class="gp">4: vmbr0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt;</span><span class="w"> </span>mtu 1500 qdisc noqueue state DOWN
<span class="go">          group default qlen 1000
    link/ether 0a:b8:ef:b8:07:6c brd ff:ff:ff:ff:ff:ff
    inet 10.0.4.1/24 brd 10.0.4.255 scope global vmbr0
       valid_lft forever preferred_lft forever
</span><span class="gp">$</span><span class="w"> </span><span class="c"># View the listening ports used by our dnsmasq servers</span>
<span class="gp">$</span><span class="w"> </span><span class="nb">sudo </span>ss <span class="nt">-tulpan</span> | <span class="nb">grep </span>dnsmasq
<span class="go">udp UNCONN 0 0  127.0.0.1:53     0.0.0.0:* users:((&#34;dnsmasq&#34;,pid=1733930,fd=4))
udp UNCONN 0 0  10.0.4.1:53      0.0.0.0:* users:((&#34;dnsmasq&#34;,pid=1705267,fd=6))
udp UNCONN 0 0  0.0.0.0%vmbr0:67 0.0.0.0:* users:((&#34;dnsmasq&#34;,pid=1705267,fd=4))
tcp LISTEN 0 32 10.0.4.1:53      0.0.0.0:* users:((&#34;dnsmasq&#34;,pid=1705267,fd=7))
tcp LISTEN 0 32 127.0.0.1:53     0.0.0.0:* users:((&#34;dnsmasq&#34;,pid=1733930,fd=5))
</span><span class="gp">$</span><span class="w"> </span><span class="c"># Verify that the DNS server works on the vmbr0 address</span>
<span class="gp">$</span><span class="w"> </span>host www.debian.org 10.0.4.1
<span class="go">Name: 10.0.4.1
</span><span class="gp">Address: 10.0.4.1#</span>53
<span class="go">Aliases:

www.debian.org has address 130.89.148.77
www.debian.org has IPv6 address 2001:67c:2564:a119::77</span></code></pre></div></section>
<section class="doc-section level-1"><h2 id="_managing_running_systems">Managing running systems</h2><p>If we want to update DNS entries and/or MAC addresses we can edit the
<code>/etc/hosts</code> and <code>/etc/ethers</code> files and reload the <code>dnsmasq</code> configuration
using the <code>vmbridge</code> script:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">sudo</span> /usr/local/sbin/vmbridge vmbr0 reload</code></pre></div>
<p>That call sends a signal to the running <code>dnsmasq</code> server that makes it reload
the files; after that we can renew the DHCP leases from the client machines or
start using the new DNS names immediately.</p></section>
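<p>The reload action itself can be sketched as a small shell function (an illustrative sketch, not the exact code of the <code>vmbridge</code> script; it assumes the same PID file layout and relies on the documented <code>dnsmasq</code> behaviour of re-reading <code>/etc/hosts</code> and <code>/etc/ethers</code> on <code>SIGHUP</code>):</p>

```shell
#!/bin/sh
# Illustrative sketch of a reload action: signal the dnsmasq process
# recorded in the PID file so it re-reads /etc/hosts and /etc/ethers.
# The PID file path convention matches the one used by the vmbridge script.
do_reload() {
  pidf="$1"
  if [ ! -f "$pidf" ]; then
    echo "PID file '$pidf' not found, is the bridge up?" >&2
    return 1
  fi
  kill -HUP "$(cat "$pidf")"
}
```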
]]></content:encoded>
    </item>
    
    <item>
      <title>Kubernetes Static Content Server</title>
      <link>https://blogops.mixinet.net/posts/k8s_static_content_server/</link>
      <pubDate>Mon, 26 Sep 2022 00:12:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/k8s_static_content_server/</guid>
      <description>&lt;p&gt;This post describes how I’ve put together a simple static content server for
kubernetes clusters using a Pod with a persistent volume and multiple
containers: an &lt;strong&gt;sftp&lt;/strong&gt; server to manage contents, a &lt;strong&gt;web server&lt;/strong&gt; to publish them
with optional access control and another one to run scripts which need access
to the volume filesystem.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;sftp&lt;/strong&gt; server runs using
&lt;a href=&#34;https://github.com/mysecureshell/mysecureshell&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;MySecureShell&lt;/a&gt;, the &lt;strong&gt;web
server&lt;/strong&gt; is &lt;a href=&#34;https://nginx.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;nginx&lt;/a&gt; and the script runner uses the
&lt;a href=&#34;https://github.com/adnanh/webhook&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;webhook&lt;/a&gt; tool to publish endpoints to call
them (the calls will come from other Pods that run backend servers or are
executed from &lt;code&gt;Jobs&lt;/code&gt; or &lt;code&gt;CronJobs&lt;/code&gt;).&lt;/p&gt;
&lt;aside class=&#34;admonition-block note&#34; role=&#34;note&#34;&gt;&lt;h6 class=&#34;block-title label-only&#34;&gt;&lt;span class=&#34;title-label&#34;&gt;Note: &lt;/span&gt;&lt;/h6&gt;&lt;p&gt;This service has been developed for &lt;a href=&#34;https://about.kyso.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Kyso&lt;/a&gt; and the
version used in our current
&lt;a href=&#34;https://docs.kyso.io/getting-started/architecture-and-deployment&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;architecture&lt;/a&gt;
includes an additional container to index documents for
&lt;a href=&#34;https://www.elastic.co/elasticsearch/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Elasticsearch&lt;/a&gt;, but as it is not
relevant for the description of the service as a general solution I’ve decided
to ignore it in this post.&lt;/p&gt;&lt;/aside&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_history&#34;&gt;History&lt;/h2&gt;&lt;p&gt;The system was developed because we had a &lt;code&gt;NodeJS&lt;/code&gt; API with endpoints to upload
files and store them on S3 compatible services that were later accessed via
HTTPS, but the requirements changed and we needed to be able to publish folders
instead of individual files using their original names and apply access
restrictions using our API.&lt;/p&gt;
&lt;p&gt;Thinking about our requirements, the use of a regular filesystem to keep the
files and folders was a good option, as uploading and serving files is simple.&lt;/p&gt;
&lt;p&gt;For the upload I decided to use the &lt;strong&gt;sftp&lt;/strong&gt; protocol, mainly because I already
had an &lt;strong&gt;sftp&lt;/strong&gt; container image based on
&lt;a href=&#34;https://github.com/mysecureshell/mysecureshell&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;mysecureshell&lt;/a&gt; prepared; once
we settled on that we added &lt;strong&gt;sftp&lt;/strong&gt; support to the API server and configured it
to upload the files to our server instead of using &lt;em&gt;S3 buckets&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;To publish the files we added a &lt;a href=&#34;https://nginx.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;nginx&lt;/a&gt; container configured
to work as a reverse proxy that uses the
&lt;a href=&#34;https://nginx.org/en/docs/http/ngx_http_auth_request_module.html&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;ngx_http_auth_request_module&lt;/a&gt;
to validate access to the files (the sub request is configurable, in our
deployment we have configured it to call our API to check if the user can
access a given URL).&lt;/p&gt;
&lt;p&gt;Finally we added a third container when we needed to execute some tasks
directly on the filesystem (using &lt;code&gt;kubectl exec&lt;/code&gt; with the existing containers
did not seem a good idea, as that is not supported by &lt;code&gt;CronJobs&lt;/code&gt; objects, for
example).&lt;/p&gt;
&lt;p&gt;The solution we found, avoiding NIH Syndrome (i.e. writing our own tool), was
to use the &lt;a href=&#34;https://github.com/adnanh/webhook&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;webhook&lt;/a&gt; tool to provide the
endpoints to call the scripts; for now we have three:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;one to get the disk usage of a &lt;code&gt;PATH&lt;/code&gt;,&lt;/li&gt;&lt;li&gt;one to &lt;code&gt;hardlink&lt;/code&gt; all the files that are identical on the filesystem,&lt;/li&gt;&lt;li&gt;one to copy files and folders from S3 buckets to our filesystem.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>This post describes how I’ve put together a simple static content server for
kubernetes clusters using a Pod with a persistent volume and multiple
containers: an <strong>sftp</strong> server to manage contents, a <strong>web server</strong> to publish them
with optional access control and another one to run scripts which need access
to the volume filesystem.</p>
<p>The <strong>sftp</strong> server runs using
<a href="https://github.com/mysecureshell/mysecureshell" target="_blank" rel="noopener">MySecureShell</a>, the <strong>web
server</strong> is <a href="https://nginx.org/" target="_blank" rel="noopener">nginx</a> and the script runner uses the
<a href="https://github.com/adnanh/webhook" target="_blank" rel="noopener">webhook</a> tool to publish endpoints to call
them (the calls will come from other Pods that run backend servers or are
executed from <code>Jobs</code> or <code>CronJobs</code>).</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>This service has been developed for <a href="https://about.kyso.io/" target="_blank" rel="noopener">Kyso</a> and the
version used in our current
<a href="https://docs.kyso.io/getting-started/architecture-and-deployment" target="_blank" rel="noopener">architecture</a>
includes an additional container to index documents for
<a href="https://www.elastic.co/elasticsearch/" target="_blank" rel="noopener">Elasticsearch</a>, but as it is not
relevant for the description of the service as a general solution I’ve decided
to ignore it in this post.</p></aside>
<section class="doc-section level-1"><h2 id="_history">History</h2><p>The system was developed because we had a <code>NodeJS</code> API with endpoints to upload
files and store them on S3 compatible services that were later accessed via
HTTPS, but the requirements changed and we needed to be able to publish folders
instead of individual files using their original names and apply access
restrictions using our API.</p>
<p>Thinking about our requirements, the use of a regular filesystem to keep the
files and folders was a good option, as uploading and serving files is simple.</p>
<p>For the upload I decided to use the <strong>sftp</strong> protocol, mainly because I already
had an <strong>sftp</strong> container image based on
<a href="https://github.com/mysecureshell/mysecureshell" target="_blank" rel="noopener">mysecureshell</a> prepared; once
we settled on that we added <strong>sftp</strong> support to the API server and configured it
to upload the files to our server instead of using <em>S3 buckets</em>.</p>
<p>To publish the files we added a <a href="https://nginx.org/" target="_blank" rel="noopener">nginx</a> container configured
to work as a reverse proxy that uses the
<a href="https://nginx.org/en/docs/http/ngx_http_auth_request_module.html" target="_blank" rel="noopener">ngx_http_auth_request_module</a>
to validate access to the files (the sub request is configurable, in our
deployment we have configured it to call our API to check if the user can
access a given URL).</p>
<p>Finally we added a third container when we needed to execute some tasks
directly on the filesystem (using <code>kubectl exec</code> with the existing containers
did not seem a good idea, as that is not supported by <code>CronJobs</code> objects, for
example).</p>
<p>The solution we found, avoiding NIH Syndrome (i.e. writing our own tool), was
to use the <a href="https://github.com/adnanh/webhook" target="_blank" rel="noopener">webhook</a> tool to provide the
endpoints to call the scripts; for now we have three:</p>
<div class="ulist"><ul><li>one to get the disk usage of a <code>PATH</code>,</li><li>one to <code>hardlink</code> all the files that are identical on the filesystem,</li><li>one to copy files and folders from S3 buckets to our filesystem.</li></ul></div></section>
<section class="doc-section level-1"><h2 id="_container_definitions">Container definitions</h2><section class="doc-section level-2"><h3 id="_mysecureshell">mysecureshell</h3><p>The <code>mysecureshell</code> container can be used to provide an <strong>sftp</strong> service with
multiple users (although the files are owned by the same <code>UID</code> and <code>GID</code>) using
standalone containers (launched with <code>docker</code> or <code>podman</code>) or in an
orchestration system like kubernetes, as we are going to do here.</p>
<p>The image is generated using the following <code>Dockerfile</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="dockerfile"><span class="k">ARG</span><span class="s"> ALPINE_VERSION=3.16.2</span>

<span class="k">FROM</span><span class="w"> </span><span class="s">alpine:$ALPINE_VERSION</span><span class="w"> </span><span class="k">as</span><span class="w"> </span><span class="s">builder</span>
<span class="k">LABEL</span><span class="s"> maintainer=&#34;Sergio Talens-Oliag &lt;sto@mixinet.net&gt;&#34;</span>
<span class="k">RUN </span>apk update <span class="o">&amp;&amp;</span><span class="se">\
</span> apk add <span class="nt">--no-cache</span> alpine-sdk git musl-dev <span class="o">&amp;&amp;</span><span class="se">\
</span> git clone https://github.com/sto/mysecureshell.git <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">cd </span>mysecureshell <span class="o">&amp;&amp;</span><span class="se">\
</span> ./configure <span class="nt">--prefix</span><span class="o">=</span>/usr <span class="nt">--sysconfdir</span><span class="o">=</span>/etc <span class="nt">--mandir</span><span class="o">=</span>/usr/share/man<span class="se">\
</span> <span class="nt">--localstatedir</span><span class="o">=</span>/var <span class="nt">--with-shutfile</span><span class="o">=</span>/var/lib/misc/sftp.shut <span class="nt">--with-debug</span><span class="o">=</span>2 <span class="o">&amp;&amp;</span><span class="se">\
</span> make all <span class="o">&amp;&amp;</span> make <span class="nb">install</span> <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/cache/apk/<span class="k">*</span>

<span class="k">FROM</span><span class="s"> alpine:$ALPINE_VERSION</span>
<span class="k">LABEL</span><span class="s"> maintainer=&#34;Sergio Talens-Oliag &lt;sto@mixinet.net&gt;&#34;</span>
<span class="k">COPY</span><span class="s"> --from=builder /usr/bin/mysecureshell /usr/bin/mysecureshell</span>
<span class="k">COPY</span><span class="s"> --from=builder /usr/bin/sftp-* /usr/bin/</span>
<span class="k">RUN </span>apk update <span class="o">&amp;&amp;</span><span class="se">\
</span> apk add <span class="nt">--no-cache</span> openssh shadow pwgen <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">sed</span> <span class="nt">-i</span> <span class="nt">-e</span> <span class="s2">&#34;s|^.*</span><span class="se">\(</span><span class="s2">AuthorizedKeysFile</span><span class="se">\)</span><span class="s2">.*</span><span class="nv">$|</span><span class="se">\1</span><span class="s2"> /etc/ssh/auth_keys/%u|&#34;</span><span class="se">\
</span> /etc/ssh/sshd_config <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">mkdir</span> /etc/ssh/auth_keys <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">cat</span> /dev/null <span class="o">&gt;</span> /etc/motd <span class="o">&amp;&amp;</span><span class="se">\
</span> add-shell <span class="s1">&#39;/usr/bin/mysecureshell&#39;</span> <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/cache/apk/<span class="k">*</span>
<span class="k">COPY</span><span class="s"> bin/* /usr/local/bin/</span>
<span class="k">COPY</span><span class="s"> etc/sftp_config /etc/ssh/</span>
<span class="k">COPY</span><span class="s"> entrypoint.sh /</span>
<span class="k">EXPOSE</span><span class="s"> 22</span>
<span class="k">VOLUME</span><span class="s"> /sftp</span>
<span class="k">ENTRYPOINT</span><span class="s"> [&#34;/entrypoint.sh&#34;]</span>
<span class="k">CMD</span><span class="s"> [&#34;server&#34;]</span></code></pre></div>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>Initially the container used the <code>mysecureshell</code> package included in <code>alpine</code>,
but we wanted to be able to create hardlinks from the client and that support
is only available on the <code>master</code> branch of the source repository; that is why
we are compiling our own binary using a
<a href="https://docs.docker.com/build/building/multi-stage/">multi-stage</a> <code>Dockerfile</code>.</p>
<p>Note that we are cloning the source from a fork that includes this
<a href="https://github.com/mysecureshell/mysecureshell/pull/116" target="_blank" rel="noopener">pull request</a> because
we had to fix a couple of minor issues to make the <code>ln</code> command work as
expected.</p></aside>
<p>The <code>/etc/sftp_config</code> file is used to
<a href="https://mysecureshell.readthedocs.io/en/latest/configuration.html" target="_blank" rel="noopener">configure</a>
the <code>mysecureshell</code> server so that all the user homes are under <code>/sftp/data</code>,
users only see the files under their home directories as if they were at the
root of the server, and idle connections are closed after <code>5m</code> of inactivity:</p>
<figure class="listing-block"><figcaption>etc/sftp_config</figcaption>
<pre class="rouge highlight"><code data-lang="apache"><span class="c"># Default mysecureshell configuration</span>
<span class="p">&lt;</span><span class="nl">Default</span><span class="p">&gt;
</span>   <span class="c"># All users will have access to their home directory under /sftp/data</span>
   Home /sftp/data/$USER
   <span class="c"># Log to a file inside /sftp/logs/ (only works when the directory exists)</span>
   LogFile /sftp/logs/mysecureshell.log
   <span class="c"># Force users to stay in their home directory</span>
   StayAtHome true
   <span class="c"># Hide Home PATH, it will be shown as /</span>
   VirtualChroot true
   <span class="c"># Hide real file/directory owner (just change displayed permissions)</span>
   DirFakeUser true
   <span class="c"># Hide real file/directory group (just change displayed permissions)</span>
   DirFakeGroup true
   <span class="c"># We do not want users to keep forever their idle connection</span>
   IdleTimeOut 5m
<span class="p">&lt;/</span><span class="nl">Default</span><span class="p">&gt;
</span><span class="c"># vim: ts=2:sw=2:et</span></code></pre></figure>
<p>The <code>entrypoint.sh</code> script is responsible for preparing the container for
the users included in the <code>/secrets/user_pass.txt</code> file (it creates the users
with their <code>HOME</code> directories under <code>/sftp/data</code> and <code>/bin/false</code> as their
shell, and creates the key files from <code>/secrets/user_keys.txt</code> if available).</p>
<p>The script expects a couple of environment variables:</p>
<div class="ulist"><ul><li><code>SFTP_UID</code>: <code>UID</code> used to run the daemon and for all the files, it has to be
different from <code>0</code> (all the files managed by this daemon are going to be
owned by the same user and group, even if the remote users are different).</li><li><code>SFTP_GID</code>: <code>GID</code> used to run the daemon and for all the files, it has to be
different from <code>0</code>.</li></ul></div>
<p>It can also use the <code>SSH_PORT</code> and <code>SSH_PARAMS</code> values if they are present.</p>
<p>It also requires the following files (they can be mounted as secrets in
kubernetes):</p>
<div class="ulist"><ul><li><code>/secrets/host_keys.txt</code>: Text file containing the ssh server keys in mime
format; the file is processed using the <code>reformime</code> utility (the one included
in <a href="https://www.busybox.net/" target="_blank" rel="noopener">busybox</a>) and can be generated using the
<code>gen-host-keys</code> script included in the container (it uses <code>ssh-keygen</code> and
<code>makemime</code>).</li><li><code>/secrets/user_pass.txt</code>: Text file containing lines of the form
<code>username:password_in_clear_text</code> (only the users included in this file are
available on the <code>sftp</code> server; in fact, in our deployment we use only the
<code>scs</code> user for everything).</li></ul></div>
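<p>As a quick illustration of the expected format (a sketch, not part of the real <code>entrypoint.sh</code>), the usernames in such a file can be extracted like this:</p>

```shell
#!/bin/sh
# Illustrative sketch: list the usernames defined in a user_pass.txt
# file made of 'username:password_in_clear_text' lines.
list_users() {
  while IFS=: read -r user _pass; do
    # Skip lines without a username
    [ -n "$user" ] && echo "$user"
  done <"$1"
}
```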
<p>It can optionally use another one:</p>
<div class="ulist"><ul><li><code>/secrets/user_keys.txt</code>: Text file that contains lines of the form
<code>username:public_ssh_ed25519_or_rsa_key</code>; the public keys are installed on
the server and can be used to log into the <code>sftp</code> server if the <code>username</code>
exists in the <code>user_pass.txt</code> file.</li></ul></div>
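<p>In kubernetes those files can be provided by a <code>Secret</code> mounted on <code>/secrets</code>; a minimal sketch of such a manifest (the secret name and values here are hypothetical, not the ones used in our deployment) could be:</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: scs-secrets        # hypothetical name
type: Opaque
stringData:
  host_keys.txt: |
    # output of the 'gen-host-keys' script goes here
  user_pass.txt: |
    scs:change-me-please
```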
<p>The contents of the <code>entrypoint.sh</code> script are:</p>
<figure class="listing-block"><figcaption>entrypoint.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="nb">set</span> <span class="nt">-e</span>
<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>
<span class="c"># Expects SSH_UID &amp; SSH_GID on the environment and uses the value of the</span>
<span class="c"># SSH_PORT &amp; SSH_PARAMS variables if present</span>
<span class="c"># SSH_PARAMS</span>
<span class="nv">SSH_PARAMS</span><span class="o">=</span><span class="s2">&#34;-D -e -p </span><span class="k">${</span><span class="nv">SSH_PORT</span>:<span class="p">=22</span><span class="k">}</span><span class="s2"> </span><span class="k">${</span><span class="nv">SSH_PARAMS</span><span class="k">}</span><span class="s2">&#34;</span>
<span class="c"># Fixed values</span>
<span class="c"># DIRECTORIES</span>
<span class="nv">HOME_DIR</span><span class="o">=</span><span class="s2">&#34;/sftp/data&#34;</span>
<span class="nv">CONF_FILES_DIR</span><span class="o">=</span><span class="s2">&#34;/secrets&#34;</span>
<span class="nv">AUTH_KEYS_PATH</span><span class="o">=</span><span class="s2">&#34;/etc/ssh/auth_keys&#34;</span>
<span class="c"># FILES</span>
<span class="nv">HOST_KEYS</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$CONF_FILES_DIR</span><span class="s2">/host_keys.txt&#34;</span>
<span class="nv">USER_KEYS</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$CONF_FILES_DIR</span><span class="s2">/user_keys.txt&#34;</span>
<span class="nv">USER_PASS</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$CONF_FILES_DIR</span><span class="s2">/user_pass.txt&#34;</span>
<span class="nv">USER_SHELL_CMD</span><span class="o">=</span><span class="s2">&#34;/usr/bin/mysecureshell&#34;</span>
<span class="c"># TYPES</span>
<span class="nv">HOST_KEY_TYPES</span><span class="o">=</span><span class="s2">&#34;dsa ecdsa ed25519 rsa&#34;</span>
<span class="c"># ---------</span>
<span class="c"># FUNCTIONS</span>
<span class="c"># ---------</span>
<span class="c"># Validate HOST_KEYS, USER_PASS, SFTP_UID and SFTP_GID</span>
_check_environment<span class="o">()</span> <span class="o">{</span>
  <span class="c"># Check the ssh server keys ... we don&#39;t boot if we don&#39;t have them</span>
  <span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$HOST_KEYS</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">cat</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
We need the host keys on the &#39;</span><span class="nv">$HOST_KEYS</span><span class="sh">&#39; file to proceed.

Call the &#39;gen-host-keys&#39; script to create and export them on a mime file.
</span><span class="no">EOF
</span>    <span class="nb">exit </span>1
  <span class="k">fi</span>
  <span class="c"># Check that we have users ... if we don&#39;t we can&#39;t continue</span>
  <span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$USER_PASS</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">cat</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
We need at least the &#39;</span><span class="nv">$USER_PASS</span><span class="sh">&#39; file to provision users.

Call the &#39;gen-users-tar&#39; script (passing it the list of usernames) to create an
archive that contains public and private keys for the users, a &#39;user_keys.txt&#39;
file with their public keys and a &#39;user_pass.txt&#39; file with random passwords
for them.
</span><span class="no">EOF
</span>    <span class="nb">exit </span>1
  <span class="k">fi</span>
  <span class="c"># Check SFTP_UID</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$SFTP_UID</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;The &#39;SFTP_UID&#39; can&#39;t be empty, pass a &#39;GID&#39;.&#34;</span>
    <span class="nb">exit </span>1
  <span class="k">fi
  if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$SFTP_UID</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;The &#39;SFTP_UID&#39; can&#39;t be 0, use a different &#39;UID&#39;&#34;</span>
    <span class="nb">exit </span>1
  <span class="k">fi</span>
  <span class="c"># Check SFTP_GID</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$SFTP_GID</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;The &#39;SFTP_GID&#39; can&#39;t be empty, pass a &#39;GID&#39;.&#34;</span>
    <span class="nb">exit </span>1
  <span class="k">fi
  if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$SFTP_GID</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;The &#39;SFTP_GID&#39; can&#39;t be 0, use a different &#39;GID&#39;&#34;</span>
    <span class="nb">exit </span>1
  <span class="k">fi</span>
<span class="o">}</span>
<span class="c"># Adjust ssh host keys</span>
_setup_host_keys<span class="o">()</span> <span class="o">{</span>
  <span class="nv">opwd</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span><span class="s2">&#34;</span>
  <span class="nv">tmpdir</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">mktemp</span> <span class="nt">-d</span><span class="si">)</span><span class="s2">&#34;</span>
  <span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$tmpdir</span><span class="s2">&#34;</span>
  <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;0&#34;</span>
  reformime &lt;<span class="s2">&#34;</span><span class="nv">$HOST_KEYS</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;1&#34;</span>
  <span class="k">for </span>kt <span class="k">in</span> <span class="nv">$HOST_KEY_TYPES</span><span class="p">;</span> <span class="k">do
    </span><span class="nv">key</span><span class="o">=</span><span class="s2">&#34;ssh_host_</span><span class="k">${</span><span class="nv">kt</span><span class="k">}</span><span class="s2">_key&#34;</span>
    <span class="nv">pub</span><span class="o">=</span><span class="s2">&#34;ssh_host_</span><span class="k">${</span><span class="nv">kt</span><span class="k">}</span><span class="s2">_key.pub&#34;</span>
    <span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$key</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
      </span><span class="nb">echo</span> <span class="s2">&#34;Missing &#39;</span><span class="nv">$key</span><span class="s2">&#39; file&#34;</span>
      <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;1&#34;</span>
    <span class="k">fi
    if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$pub</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
      </span><span class="nb">echo</span> <span class="s2">&#34;Missing &#39;</span><span class="nv">$pub</span><span class="s2">&#39; file&#34;</span>
      <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;1&#34;</span>
    <span class="k">fi
    if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
      continue
    fi
    </span><span class="nb">cat</span> <span class="s2">&#34;</span><span class="nv">$key</span><span class="s2">&#34;</span> <span class="o">&gt;</span><span class="s2">&#34;/etc/ssh/</span><span class="nv">$key</span><span class="s2">&#34;</span>
    <span class="nb">chmod </span>0600 <span class="s2">&#34;/etc/ssh/</span><span class="nv">$key</span><span class="s2">&#34;</span>
    <span class="nb">chown </span>root:root <span class="s2">&#34;/etc/ssh/</span><span class="nv">$key</span><span class="s2">&#34;</span>
    <span class="nb">cat</span> <span class="s2">&#34;</span><span class="nv">$pub</span><span class="s2">&#34;</span> <span class="o">&gt;</span><span class="s2">&#34;/etc/ssh/</span><span class="nv">$pub</span><span class="s2">&#34;</span>
    <span class="nb">chmod </span>0600 <span class="s2">&#34;/etc/ssh/</span><span class="nv">$pub</span><span class="s2">&#34;</span>
    <span class="nb">chown </span>root:root <span class="s2">&#34;/etc/ssh/</span><span class="nv">$pub</span><span class="s2">&#34;</span>
  <span class="k">done
  </span><span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$opwd</span><span class="s2">&#34;</span>
  <span class="nb">rm</span> <span class="nt">-rf</span> <span class="s2">&#34;</span><span class="nv">$tmpdir</span><span class="s2">&#34;</span>
  <span class="k">return</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span>
<span class="o">}</span>
<span class="c"># Create users</span>
_setup_user_pass<span class="o">()</span> <span class="o">{</span>
  <span class="nv">opwd</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span><span class="s2">&#34;</span>
  <span class="nv">tmpdir</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">mktemp</span> <span class="nt">-d</span><span class="si">)</span><span class="s2">&#34;</span>
  <span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$tmpdir</span><span class="s2">&#34;</span>
  <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;0&#34;</span>
  <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$HOME_DIR</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$HOME_DIR</span><span class="s2">&#34;</span>
  <span class="c"># Make sure the data dir can be managed by the sftp user</span>
  <span class="nb">chown</span> <span class="s2">&#34;</span><span class="nv">$SFTP_UID</span><span class="s2">:</span><span class="nv">$SFTP_GID</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$HOME_DIR</span><span class="s2">&#34;</span>
  <span class="c"># Allow the user (and root) to create directories inside the $HOME_DIR, if</span>
  <span class="c"># we don&#39;t allow it the directory creation fails on EFS (AWS)</span>
  <span class="nb">chmod </span>0755 <span class="s2">&#34;</span><span class="nv">$HOME_DIR</span><span class="s2">&#34;</span>
  <span class="c"># Create users</span>
  <span class="nb">echo</span> <span class="s2">&#34;sftp:sftp:</span><span class="nv">$SFTP_UID</span><span class="s2">:</span><span class="nv">$SFTP_GID</span><span class="s2">:::/bin/false&#34;</span> <span class="o">&gt;</span><span class="s2">&#34;newusers.txt&#34;</span>
  <span class="nb">sed</span> <span class="nt">-n</span> <span class="s2">&#34;/^[^#]/ { s/:/ /p }&#34;</span> <span class="s2">&#34;</span><span class="nv">$USER_PASS</span><span class="s2">&#34;</span> | <span class="k">while </span><span class="nb">read</span> <span class="nt">-r</span> _u _p<span class="p">;</span> <span class="k">do
    </span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$_u</span><span class="s2">:</span><span class="nv">$_p</span><span class="s2">:</span><span class="nv">$SFTP_UID</span><span class="s2">:</span><span class="nv">$SFTP_GID</span><span class="s2">::</span><span class="nv">$HOME_DIR</span><span class="s2">/</span><span class="nv">$_u</span><span class="s2">:</span><span class="nv">$USER_SHELL_CMD</span><span class="s2">&#34;</span>
  <span class="k">done</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;newusers.txt&#34;</span>
  newusers <span class="nt">--badnames</span> newusers.txt
  <span class="c"># Disable write permission on the directory to forbid remote sftp users to</span>
  <span class="c"># remove their own root dir (they have already done it); we adjust that</span>
  <span class="c"># here to avoid issues with EFS (see before)</span>
  <span class="nb">chmod </span>0555 <span class="s2">&#34;</span><span class="nv">$HOME_DIR</span><span class="s2">&#34;</span>
  <span class="c"># Clean up the tmpdir</span>
  <span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$opwd</span><span class="s2">&#34;</span>
  <span class="nb">rm</span> <span class="nt">-rf</span> <span class="s2">&#34;</span><span class="nv">$tmpdir</span><span class="s2">&#34;</span>
  <span class="k">return</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span>
<span class="o">}</span>
<span class="c"># Adjust user keys</span>
_setup_user_keys<span class="o">()</span> <span class="o">{</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$USER_KEYS</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">sed</span> <span class="nt">-n</span> <span class="s2">&#34;/^[^#]/ { s/:/ /p }&#34;</span> <span class="s2">&#34;</span><span class="nv">$USER_KEYS</span><span class="s2">&#34;</span> | <span class="k">while </span><span class="nb">read</span> <span class="nt">-r</span> _u _k<span class="p">;</span> <span class="k">do
      </span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$_k</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$AUTH_KEYS_PATH</span><span class="s2">/</span><span class="nv">$_u</span><span class="s2">&#34;</span>
    <span class="k">done
  fi</span>
<span class="o">}</span>
<span class="c"># Main function</span>
exec_sshd<span class="o">()</span> <span class="o">{</span>
  _check_environment
  _setup_host_keys
  _setup_user_pass
  _setup_user_keys
  <span class="nb">echo</span> <span class="s2">&#34;Running: /usr/sbin/sshd </span><span class="nv">$SSH_PARAMS</span><span class="s2">&#34;</span>
  <span class="c"># shellcheck disable=SC2086</span>
  <span class="nb">exec</span> /usr/sbin/sshd <span class="nt">-D</span> <span class="nv">$SSH_PARAMS</span>
<span class="o">}</span>
<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>
<span class="k">case</span> <span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span> <span class="k">in</span>
<span class="s2">&#34;server&#34;</span><span class="p">)</span> exec_sshd <span class="p">;;</span>
<span class="k">*</span><span class="p">)</span> <span class="nb">exec</span> <span class="s2">&#34;</span><span class="nv">$@</span><span class="s2">&#34;</span> <span class="p">;;</span>
<span class="k">esac</span>
<span class="c"># vim: ts=2:sw=2:et</span></code></pre></figure>
<p>The container also includes a couple of auxiliary scripts; the first one can be
used to generate the <code>host_keys.txt</code> file as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>docker run <span class="nt">--rm</span> stodh/mysecureshell gen-host-keys <span class="o">&gt;</span> host_keys.txt</code></pre></div>
<p>Where the script is as simple as:</p>
<figure class="listing-block"><figcaption>bin/gen-host-keys</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="nb">set</span> <span class="nt">-e</span>
<span class="c"># Generate new host keys</span>
ssh-keygen <span class="nt">-A</span> <span class="o">&gt;</span>/dev/null
<span class="c"># Replace hostname</span>
<span class="nb">sed</span> <span class="nt">-i</span> <span class="nt">-e</span> <span class="s1">&#39;s/@.*$/@mysecureshell/&#39;</span> /etc/ssh/ssh_host_<span class="k">*</span>_key.pub
<span class="c"># Print in mime format (stdout)</span>
makemime /etc/ssh/ssh_host_<span class="k">*</span>
<span class="c"># vim: ts=2:sw=2:et</span></code></pre></figure>
<p>There is also another script that generates a <code>.tar</code> file containing auth
data for the list of usernames passed to it: a <code>user_pass.txt</code> file with a
random password for each user, public and private ssh keys for the users, and
a <code>user_keys.txt</code> file that matches the generated keys.</p>
<p>To generate a <code>tar</code> file for the user <code>scs</code> we can execute the following:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>docker run <span class="nt">--rm</span> stodh/mysecureshell gen-users-tar scs <span class="o">&gt;</span> /tmp/scs-users.tar</code></pre></div>
<p>To list the archive contents and see the text inside the <code>user_pass.txt</code> file we can do:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">tar </span>tvf /tmp/scs-users.tar
<span class="go">-rw-r--r-- root/root        21 2022-09-11 15:55 user_pass.txt
-rw-r--r-- root/root       822 2022-09-11 15:55 user_keys.txt
-rw------- root/root       387 2022-09-11 15:55 id_ed25519-scs
-rw-r--r-- root/root        85 2022-09-11 15:55 id_ed25519-scs.pub
-rw------- root/root      3357 2022-09-11 15:55 id_rsa-scs
-rw------- root/root      3243 2022-09-11 15:55 id_rsa-scs.pem
-rw-r--r-- root/root       729 2022-09-11 15:55 id_rsa-scs.pub
</span><span class="gp">$</span><span class="w"> </span><span class="nb">tar </span>xfO /tmp/scs-users.tar user_pass.txt
<span class="go">scs:20JertRSX2Eaar4x</span></code></pre></div>
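<p>As an aside, the <code>user:password</code> format shown above is what the
<code>_setup_user_pass</code> function of the entrypoint parses, using the same
<code>sed</code> expression seen earlier; a minimal standalone sketch of that parsing
(the sample line reuses the output above):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">#!/bin/sh
# Skip comment lines and split "user:password" on the first colon,
# mirroring the sed expression used by the container entrypoint.
printf '%s\n' '# a comment' 'scs:20JertRSX2Eaar4x' |
  sed -n "/^[^#]/ { s/:/ /p }" |
  while read -r _u _p; do
    echo "user=$_u pass=$_p"
  done</code></pre></div>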
<p>The source of the script is:</p>
<figure class="listing-block"><figcaption>bin/gen-users-tar</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="nb">set</span> <span class="nt">-e</span>
<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>
<span class="nv">USER_KEYS_FILE</span><span class="o">=</span><span class="s2">&#34;user_keys.txt&#34;</span>
<span class="nv">USER_PASS_FILE</span><span class="o">=</span><span class="s2">&#34;user_pass.txt&#34;</span>
<span class="c"># ---------</span>
<span class="c"># MAIN CODE</span>
<span class="c"># ---------</span>
<span class="c"># Generate user passwords and keys, exit with status 1 if no username is received</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;$#&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">exit </span>1
<span class="k">fi
</span><span class="nv">opwd</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">pwd</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">tmpdir</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">mktemp</span> <span class="nt">-d</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$tmpdir</span><span class="s2">&#34;</span>
<span class="k">for </span>u <span class="k">in</span> <span class="s2">&#34;</span><span class="nv">$@</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">do
  </span>ssh-keygen <span class="nt">-q</span> <span class="nt">-a</span> 100 <span class="nt">-t</span> ed25519 <span class="nt">-f</span> <span class="s2">&#34;id_ed25519-</span><span class="nv">$u</span><span class="s2">&#34;</span> <span class="nt">-C</span> <span class="s2">&#34;</span><span class="nv">$u</span><span class="s2">&#34;</span> <span class="nt">-N</span> <span class="s2">&#34;&#34;</span>
  ssh-keygen <span class="nt">-q</span> <span class="nt">-a</span> 100 <span class="nt">-b</span> 4096 <span class="nt">-t</span> rsa <span class="nt">-f</span> <span class="s2">&#34;id_rsa-</span><span class="nv">$u</span><span class="s2">&#34;</span> <span class="nt">-C</span> <span class="s2">&#34;</span><span class="nv">$u</span><span class="s2">&#34;</span> <span class="nt">-N</span> <span class="s2">&#34;&#34;</span>
  <span class="c"># Legacy RSA private key format</span>
  <span class="nb">cp</span> <span class="nt">-a</span> <span class="s2">&#34;id_rsa-</span><span class="nv">$u</span><span class="s2">&#34;</span> <span class="s2">&#34;id_rsa-</span><span class="nv">$u</span><span class="s2">.pem&#34;</span>
  ssh-keygen <span class="nt">-q</span> <span class="nt">-p</span> <span class="nt">-m</span> pem <span class="nt">-f</span> <span class="s2">&#34;id_rsa-</span><span class="nv">$u</span><span class="s2">.pem&#34;</span> <span class="nt">-N</span> <span class="s2">&#34;&#34;</span> <span class="nt">-P</span> <span class="s2">&#34;&#34;</span> <span class="o">&gt;</span>/dev/null
  <span class="nb">chmod </span>0600 <span class="s2">&#34;id_rsa-</span><span class="nv">$u</span><span class="s2">.pem&#34;</span>
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$u</span><span class="s2">:</span><span class="si">$(</span>pwgen <span class="nt">-s</span> 16 1<span class="si">)</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$USER_PASS_FILE</span><span class="s2">&#34;</span>
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$u</span><span class="s2">:</span><span class="si">$(</span><span class="nb">cat</span> <span class="s2">&#34;id_ed25519-</span><span class="nv">$u</span><span class="s2">.pub&#34;</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$USER_KEYS_FILE</span><span class="s2">&#34;</span>
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$u</span><span class="s2">:</span><span class="si">$(</span><span class="nb">cat</span> <span class="s2">&#34;id_rsa-</span><span class="nv">$u</span><span class="s2">.pub&#34;</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$USER_KEYS_FILE</span><span class="s2">&#34;</span>
<span class="k">done
</span><span class="nb">tar </span>cf - <span class="s2">&#34;</span><span class="nv">$USER_PASS_FILE</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$USER_KEYS_FILE</span><span class="s2">&#34;</span> id_<span class="k">*</span> 2&gt;/dev/null
<span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$opwd</span><span class="s2">&#34;</span>
<span class="nb">rm</span> <span class="nt">-rf</span> <span class="s2">&#34;</span><span class="nv">$tmpdir</span><span class="s2">&#34;</span>
<span class="c"># vim: ts=2:sw=2:et</span></code></pre></figure></section>
<section class="doc-section level-2"><h3 id="_nginx_scs">nginx-scs</h3><p>The <code>nginx-scs</code> container is generated using the following <code>Dockerfile</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="dockerfile"><span class="k">ARG</span><span class="s"> NGINX_VERSION=1.23.1</span>

<span class="k">FROM</span><span class="s"> nginx:$NGINX_VERSION</span>
<span class="k">LABEL</span><span class="s"> maintainer=&#34;Sergio Talens-Oliag &lt;sto@mixinet.net&gt;&#34;</span>
<span class="k">RUN </span><span class="nb">rm</span> <span class="nt">-f</span> /docker-entrypoint.d/<span class="k">*</span>
<span class="k">COPY</span><span class="s"> docker-entrypoint.d/* /docker-entrypoint.d/</span></code></pre></div>
<p>Basically we remove the existing <code>docker-entrypoint.d</code> scripts from the
standard image and add a new one that configures the web server as we want,
using a couple of environment variables:</p>
<div class="ulist"><ul><li><code>AUTH_REQUEST_URI</code>: URL to use for the <code>auth_request</code>; if the variable is
not found in the environment <code>auth_request</code> is not used.</li><li><code>HTML_ROOT</code>: base directory of the web server; if it is not passed the
default <code>/usr/share/nginx/html</code> is used.</li></ul></div>
<p>Note that if neither variable is passed everything works as if we were using
the original <code>nginx</code> image.</p>
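<p>The parameter defaulting done by the configuration script can be exercised in
isolation; a minimal sketch of the <code>HTML_ROOT</code> fallback (equivalent to the
<code>[ -z ... ]</code> check used by the script, with the same default value):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">#!/bin/sh
# If HTML_ROOT is unset or empty, fall back to the stock nginx docroot.
HTML_ROOT="${HTML_ROOT:-/usr/share/nginx/html}"
echo "HTML_ROOT=$HTML_ROOT"</code></pre></div>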
<p>The contents of the configuration script are:</p>
<figure class="listing-block"><figcaption>docker-entrypoint.d/10-update-default-conf.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="c"># Replace the default.conf nginx file by our own version.</span>
<span class="nb">set</span> <span class="nt">-e</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$HTML_ROOT</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nv">HTML_ROOT</span><span class="o">=</span><span class="s2">&#34;/usr/share/nginx/html&#34;</span>
<span class="k">fi
if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$AUTH_REQUEST_URI</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">cat</span> <span class="o">&gt;</span>/etc/nginx/conf.d/default.conf <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
server {
  listen       80;
  server_name  localhost;
  location / {
    auth_request /.auth;
    root  </span><span class="nv">$HTML_ROOT</span><span class="sh">;
    index index.html index.htm;
  }
  location /.auth {
    internal;
    proxy_pass </span><span class="nv">$AUTH_REQUEST_URI</span><span class="sh">;
    proxy_pass_request_body off;
    proxy_set_header Content-Length &#34;&#34;;
    proxy_set_header X-Original-URI </span><span class="se">\$</span><span class="sh">request_uri;
  }
  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
</span><span class="no">EOF
</span><span class="k">else
  </span><span class="nb">cat</span> <span class="o">&gt;</span>/etc/nginx/conf.d/default.conf <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
server {
  listen       80;
  server_name  localhost;
  location / {
    root  </span><span class="nv">$HTML_ROOT</span><span class="sh">;
    index index.html index.htm;
  }
  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
</span><span class="no">EOF
</span><span class="k">fi</span>
<span class="c"># vim: ts=2:sw=2:et</span></code></pre></figure>
<p>As we will see later, the idea is to use the <code>/sftp/data</code> or <code>/sftp/data/scs</code>
folder as the root of the web site published by this container and create an
<code>Ingress</code> object to provide access to it from outside our Kubernetes cluster.</p></section>
<section class="doc-section level-2"><h3 id="_webhook_scs">webhook-scs</h3><p>The <code>webhook-scs</code> container is generated using the following <code>Dockerfile</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="dockerfile"><span class="k">ARG</span><span class="s"> ALPINE_VERSION=3.16.2</span>
<span class="k">ARG</span><span class="s"> GOLANG_VERSION=alpine3.16</span>

<span class="k">FROM</span><span class="w"> </span><span class="s">golang:$GOLANG_VERSION</span><span class="w"> </span><span class="k">AS</span><span class="w"> </span><span class="s">builder</span>
<span class="k">LABEL</span><span class="s"> maintainer=&#34;Sergio Talens-Oliag &lt;sto@mixinet.net&gt;&#34;</span>
<span class="k">ENV</span><span class="s"> WEBHOOK_VERSION 2.8.0</span>
<span class="k">ENV</span><span class="s"> WEBHOOK_PR 549</span>
<span class="k">ENV</span><span class="s"> S3FS_VERSION v1.91</span>
<span class="k">WORKDIR</span><span class="s"> /go/src/github.com/adnanh/webhook</span>
<span class="k">RUN </span>apk update <span class="o">&amp;&amp;</span><span class="se">\
</span> apk add <span class="nt">--no-cache</span> <span class="nt">-t</span> build-deps curl libc-dev gcc libgcc patch
<span class="k">RUN </span>curl <span class="nt">-L</span> <span class="nt">--silent</span> <span class="nt">-o</span> webhook.tar.gz<span class="se">\
</span> https://github.com/adnanh/webhook/archive/<span class="k">${</span><span class="nv">WEBHOOK_VERSION</span><span class="k">}</span>.tar.gz <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">tar </span>xzf webhook.tar.gz <span class="nt">--strip</span> 1 <span class="o">&amp;&amp;</span><span class="se">\
</span> curl <span class="nt">-L</span> <span class="nt">--silent</span> <span class="nt">-o</span> <span class="k">${</span><span class="nv">WEBHOOK_PR</span><span class="k">}</span>.patch<span class="se">\
</span> https://patch-diff.githubusercontent.com/raw/adnanh/webhook/pull/<span class="k">${</span><span class="nv">WEBHOOK_PR</span><span class="k">}</span>.patch <span class="o">&amp;&amp;</span><span class="se">\
</span> patch <span class="nt">-p1</span> &lt; <span class="k">${</span><span class="nv">WEBHOOK_PR</span><span class="k">}</span>.patch <span class="o">&amp;&amp;</span><span class="se">\
</span> go get <span class="nt">-d</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span> go build <span class="nt">-o</span> /usr/local/bin/webhook
<span class="k">WORKDIR</span><span class="s"> /src/s3fs-fuse</span>
<span class="k">RUN </span>apk update <span class="o">&amp;&amp;</span><span class="se">\
</span> apk add ca-certificates build-base alpine-sdk libcurl automake autoconf<span class="se">\
</span> libxml2-dev libressl-dev mailcap fuse-dev curl-dev
<span class="k">RUN </span>curl <span class="nt">-L</span> <span class="nt">--silent</span> <span class="nt">-o</span> s3fs.tar.gz<span class="se">\
</span> https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/<span class="nv">$S3FS_VERSION</span>.tar.gz <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">tar </span>xzf s3fs.tar.gz <span class="nt">--strip</span> 1 <span class="o">&amp;&amp;</span><span class="se">\
</span> ./autogen.sh <span class="o">&amp;&amp;</span><span class="se">\
</span> ./configure <span class="nt">--prefix</span><span class="o">=</span>/usr/local <span class="o">&amp;&amp;</span><span class="se">\
</span> make <span class="nt">-j</span> <span class="o">&amp;&amp;</span> <span class="se">\
</span> make <span class="nb">install</span>

<span class="k">FROM</span><span class="s"> alpine:$ALPINE_VERSION</span>
<span class="k">LABEL</span><span class="s"> maintainer=&#34;Sergio Talens-Oliag &lt;sto@mixinet.net&gt;&#34;</span>
<span class="k">WORKDIR</span><span class="s"> /webhook</span>
<span class="k">RUN </span>apk update <span class="o">&amp;&amp;</span><span class="se">\
</span> apk add <span class="nt">--no-cache</span> ca-certificates mailcap fuse libxml2 libcurl libgcc<span class="se">\
</span> libstdc++ rsync util-linux-misc <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/cache/apk/<span class="k">*</span>
<span class="k">COPY</span><span class="s"> --from=builder /usr/local/bin/webhook /usr/local/bin/webhook</span>
<span class="k">COPY</span><span class="s"> --from=builder /usr/local/bin/s3fs /usr/local/bin/s3fs</span>
<span class="k">COPY</span><span class="s"> entrypoint.sh /</span>
<span class="k">COPY</span><span class="s"> hooks/* ./hooks/</span>
<span class="k">EXPOSE</span><span class="s"> 9000</span>
<span class="k">ENTRYPOINT</span><span class="s"> [&#34;/entrypoint.sh&#34;]</span>
<span class="k">CMD</span><span class="s"> [&#34;server&#34;]</span></code></pre></div>
<p>Again, we use a multi-stage build, because in production we wanted to support
functionality that is not yet available in the official releases (streaming the
command output as a response instead of waiting until the execution ends); this
time we build the image applying the patch included in this
<a href="https://github.com/adnanh/webhook/pull/549" target="_blank" rel="noopener">pull request</a> against a released
version of the source instead of creating a fork.</p>
<p>The <code>entrypoint.sh</code> script is used to generate the <code>webhook</code> configuration file
for the existing <code>hooks</code> using environment variables (basically the
<code>WEBHOOK_WORKDIR</code> and the <code>*_TOKEN</code> variables) and launch the <code>webhook</code>
service:</p>
<figure class="listing-block"><figcaption>entrypoint.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="nb">set</span> <span class="nt">-e</span>
<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>
<span class="nv">WEBHOOK_BIN</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">WEBHOOK_BIN</span><span class="k">:-</span><span class="p">/webhook/hooks</span><span class="k">}</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_YML</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">WEBHOOK_YML</span><span class="k">:-</span><span class="p">/webhook/scs.yml</span><span class="k">}</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_OPTS</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">WEBHOOK_OPTS</span><span class="k">:-</span><span class="p">-verbose</span><span class="k">}</span><span class="s2">&#34;</span>
<span class="c"># ---------</span>
<span class="c"># FUNCTIONS</span>
<span class="c"># ---------</span>
print_du_yml<span class="o">()</span> <span class="o">{</span>
  <span class="nb">cat</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
- id: du
  execute-command: &#39;</span><span class="nv">$WEBHOOK_BIN</span><span class="sh">/du.sh&#39;
  command-working-directory: &#39;</span><span class="nv">$WORKDIR</span><span class="sh">&#39;
  response-headers:
  - name: &#39;Content-Type&#39;
    value: &#39;application/json&#39;
  http-methods: [&#39;GET&#39;]
  include-command-output-in-response: true
  include-command-output-in-response-on-error: true
  pass-arguments-to-command:
  - source: &#39;url&#39;
    name: &#39;path&#39;
  pass-environment-to-command:
  - source: &#39;string&#39;
    envname: &#39;OUTPUT_FORMAT&#39;
    name: &#39;json&#39;
</span><span class="no">EOF
</span><span class="o">}</span>
print_hardlink_yml<span class="o">()</span> <span class="o">{</span>
  <span class="nb">cat</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
- id: hardlink
  execute-command: &#39;</span><span class="nv">$WEBHOOK_BIN</span><span class="sh">/hardlink.sh&#39;
  command-working-directory: &#39;</span><span class="nv">$WORKDIR</span><span class="sh">&#39;
  http-methods: [&#39;GET&#39;]
  include-command-output-in-response: true
  include-command-output-in-response-on-error: true
</span><span class="no">EOF
</span><span class="o">}</span>
print_s3sync_yml<span class="o">()</span> <span class="o">{</span>
  <span class="nb">cat</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
- id: s3sync
  execute-command: &#39;</span><span class="nv">$WEBHOOK_BIN</span><span class="sh">/s3sync.sh&#39;
  command-working-directory: &#39;</span><span class="nv">$WORKDIR</span><span class="sh">&#39;
  http-methods: [&#39;POST&#39;]
  include-command-output-in-response: true
  include-command-output-in-response-on-error: true
  pass-environment-to-command:
  - source: &#39;payload&#39;
    envname: &#39;AWS_KEY&#39;
    name: &#39;aws.key&#39;
  - source: &#39;payload&#39;
    envname: &#39;AWS_SECRET_KEY&#39;
    name: &#39;aws.secret_key&#39;
  - source: &#39;payload&#39;
    envname: &#39;S3_BUCKET&#39;
    name: &#39;s3.bucket&#39;
  - source: &#39;payload&#39;
    envname: &#39;S3_REGION&#39;
    name: &#39;s3.region&#39;
  - source: &#39;payload&#39;
    envname: &#39;S3_PATH&#39;
    name: &#39;s3.path&#39;
  - source: &#39;payload&#39;
    envname: &#39;SCS_PATH&#39;
    name: &#39;scs.path&#39;
  stream-command-output: true
</span><span class="no">EOF
</span><span class="o">}</span>
print_token_yml<span class="o">()</span> <span class="o">{</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">cat</span> <span class="o">&lt;&lt;</span> <span class="no">EOF</span><span class="sh">
  trigger-rule:
    match:
      type: &#39;value&#39;
      value: &#39;</span><span class="nv">$1</span><span class="sh">&#39;
      parameter:
        source: &#39;header&#39;
        name: &#39;X-Webhook-Token&#39;
</span><span class="no">EOF
</span>  <span class="k">fi</span>
<span class="o">}</span>
exec_webhook<span class="o">()</span> <span class="o">{</span>
  <span class="c"># Validate WORKDIR</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_WORKDIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;Must define the WEBHOOK_WORKDIR variable!&#34;</span> <span class="o">&gt;</span>&amp;2
    <span class="nb">exit </span>1
  <span class="k">fi
  </span><span class="nv">WORKDIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">realpath</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_WORKDIR</span><span class="s2">&#34;</span> 2&gt;/dev/null<span class="si">)</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">true
  </span><span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$WORKDIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;The WEBHOOK_WORKDIR &#39;</span><span class="nv">$WEBHOOK_WORKDIR</span><span class="s2">&#39; is not a directory!&#34;</span> <span class="o">&gt;</span>&amp;2
    <span class="nb">exit </span>1
  <span class="k">fi</span>
  <span class="c"># Get TOKENS: if a hook-specific token (DU_TOKEN, HARDLINK_TOKEN or</span>
  <span class="c"># S3_TOKEN) is defined it is used; if not, the COMMON_TOKEN is used; if</span>
  <span class="c"># neither is defined no token is checked (that is the default)</span>
  <span class="nv">DU_TOKEN</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">DU_TOKEN</span><span class="k">:-</span><span class="nv">$COMMON_TOKEN</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="nv">HARDLINK_TOKEN</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">HARDLINK_TOKEN</span><span class="k">:-</span><span class="nv">$COMMON_TOKEN</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="nv">S3_TOKEN</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">S3_TOKEN</span><span class="k">:-</span><span class="nv">$COMMON_TOKEN</span><span class="k">}</span><span class="s2">&#34;</span>
  <span class="c"># Create webhook configuration</span>
  <span class="o">{</span> 
    print_du_yml
    print_token_yml <span class="s2">&#34;</span><span class="nv">$DU_TOKEN</span><span class="s2">&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;&#34;</span>
    print_hardlink_yml
    print_token_yml <span class="s2">&#34;</span><span class="nv">$HARDLINK_TOKEN</span><span class="s2">&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;&#34;</span>
    print_s3sync_yml
    print_token_yml <span class="s2">&#34;</span><span class="nv">$S3_TOKEN</span><span class="s2">&#34;</span>
  <span class="o">}&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_YML</span><span class="s2">&#34;</span>
  <span class="c"># Run the webhook command</span>
  <span class="c"># shellcheck disable=SC2086</span>
  <span class="nb">exec </span>webhook <span class="nt">-hooks</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_YML</span><span class="s2">&#34;</span> <span class="nv">$WEBHOOK_OPTS</span>
<span class="o">}</span>
<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>
<span class="k">case</span> <span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span> <span class="k">in</span>
<span class="s2">&#34;server&#34;</span><span class="p">)</span> exec_webhook <span class="p">;;</span>
<span class="k">*</span><span class="p">)</span> <span class="nb">exec</span> <span class="s2">&#34;</span><span class="nv">$@</span><span class="s2">&#34;</span> <span class="p">;;</span>
<span class="k">esac</span></code></pre></figure>
<p>The <code>entrypoint.sh</code> script generates the configuration file for the <code>webhook</code>
server by calling functions that print a <code>yaml</code> section for each <code>hook</code> and
optionally add rules that validate access to them by comparing the value of an
<code>X-Webhook-Token</code> header against predefined values.</p>
<p>The expected token values are taken from environment variables: we can define
a token variable for each <code>hook</code> (<code>DU_TOKEN</code>, <code>HARDLINK_TOKEN</code> or <code>S3_TOKEN</code>)
and a fallback value (<code>COMMON_TOKEN</code>); if no token variable is defined for a
<code>hook</code>, no check is done and everybody can call it.</p>
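<p>The fallback relies on plain POSIX parameter expansion; a minimal sketch of the logic (with made-up token values) behaves like this:</p>

```shell
#!/bin/sh
# Sketch of the token fallback used by the entrypoint: a hook-specific token
# wins; otherwise the COMMON_TOKEN value is used (all values here are
# hypothetical examples).
COMMON_TOKEN="shared-secret"
DU_TOKEN=""                  # empty (or unset), so it falls back
HARDLINK_TOKEN="hardlink-only" # set, so it is kept as is
DU_TOKEN="${DU_TOKEN:-$COMMON_TOKEN}"
HARDLINK_TOKEN="${HARDLINK_TOKEN:-$COMMON_TOKEN}"
echo "DU_TOKEN=$DU_TOKEN"             # DU_TOKEN=shared-secret
echo "HARDLINK_TOKEN=$HARDLINK_TOKEN" # HARDLINK_TOKEN=hardlink-only
```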
<p>The <a href="https://github.com/adnanh/webhook/blob/master/docs/Hook-Definition.md">Hook
Definition</a> documentation explains the options you can use for each <code>hook</code>; the
ones we have right now do the following:</p>
<div class="ulist"><ul><li><code>du</code>: runs on the <code>$WORKDIR</code> directory, passes as first argument to the
script the value of the <code>path</code> query parameter and sets the variable
<code>OUTPUT_FORMAT</code> to the fixed value <code>json</code> (we use that to print the output of
the script in JSON format instead of text).</li><li><code>hardlink</code>: runs on the <code>$WORKDIR</code> directory and takes no parameters.</li><li><code>s3sync</code>: runs on the <code>$WORKDIR</code> directory and sets several environment
variables from values read from the JSON encoded payload sent by the caller
(all the values must be sent by the caller even if they are assigned an empty
value; if any of them is missing the <code>hook</code> fails without calling the script); we
also set the <code>stream-command-output</code> value to <code>true</code> to make the script show
its output as it is working (we patched the <code>webhook</code> source to be able to
use this option).</li></ul></div>
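<p>As a reference, the payload for the <code>s3sync</code> hook has to include every dotted name listed in its <code>pass-environment-to-command</code> section as nested JSON objects; a hypothetical example (all values are made up) would be:</p>

```shell
#!/bin/sh
# Hypothetical s3sync payload: every key referenced by the hook configuration
# (aws.key, aws.secret_key, s3.bucket, s3.region, s3.path and scs.path) must
# be present, even if its value is empty.
cat <<'EOF'
{
  "aws": {"key": "EXAMPLE-KEY", "secret_key": "EXAMPLE-SECRET"},
  "s3": {"bucket": "example-bucket", "region": "us-east-1", "path": "docs"},
  "scs": {"path": "documents/current"}
}
EOF
```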
<section class="doc-section level-3"><h4 id="_the_du_hook_script">The <code>du</code> hook script</h4><p>The <code>du</code> hook script checks that the argument passed to it is a directory,
computes its size using the <code>du</code> command and prints the result in text format
or as a JSON dictionary:</p>
<figure class="listing-block"><figcaption>hooks/du.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
<span class="nb">set</span> <span class="nt">-e</span>
<span class="c"># Script to print disk usage for a PATH inside the scs folder</span>
<span class="c"># ---------</span>
<span class="c"># FUNCTIONS</span>
<span class="c"># ---------</span>
print_error<span class="o">()</span> <span class="o">{</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$OUTPUT_FORMAT</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;json&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;{</span><span class="se">\&#34;</span><span class="s2">error</span><span class="se">\&#34;</span><span class="s2">:</span><span class="se">\&#34;</span><span class="nv">$*</span><span class="se">\&#34;</span><span class="s2">}&#34;</span>
  <span class="k">else
    </span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$*</span><span class="s2">&#34;</span> <span class="o">&gt;</span>&amp;2
  <span class="k">fi
  </span><span class="nb">exit </span>1
<span class="o">}</span>
usage<span class="o">()</span> <span class="o">{</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$OUTPUT_FORMAT</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;json&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;{</span><span class="se">\&#34;</span><span class="s2">error</span><span class="se">\&#34;</span><span class="s2">:</span><span class="se">\&#34;</span><span class="s2">Pass arguments as &#39;?path=XXX</span><span class="se">\&#34;</span><span class="s2">}&#34;</span>
  <span class="k">else
    </span><span class="nb">echo</span> <span class="s2">&#34;Usage: </span><span class="si">$(</span><span class="nb">basename</span> <span class="s2">&#34;</span><span class="nv">$0</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2"> PATH&#34;</span> <span class="o">&gt;</span>&amp;2
  <span class="k">fi
  </span><span class="nb">exit </span>1
<span class="o">}</span>
<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;$#&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>usage
<span class="k">fi
if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;.&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nv">DU_PATH</span><span class="o">=</span><span class="s2">&#34;./&#34;</span>
<span class="k">else
  </span><span class="nv">DU_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>find <span class="nb">.</span> <span class="nt">-mindepth</span> 1 <span class="nt">-maxdepth</span> 1 <span class="nt">-name</span> <span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">true
</span><span class="k">fi
if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$DU_PATH</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$DU_PATH</span><span class="s2">/.&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>print_error <span class="s2">&#34;The provided PATH (&#39;</span><span class="nv">$1</span><span class="s2">&#39;) is not a directory&#34;</span>
<span class="k">fi</span>
<span class="c"># Print disk usage in bytes for the given PATH</span>
<span class="nv">OUTPUT</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">du</span> <span class="nt">-b</span> <span class="nt">-s</span> <span class="s2">&#34;</span><span class="nv">$DU_PATH</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$OUTPUT_FORMAT</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;json&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
  <span class="c"># Format output as {&#34;path&#34;:&#34;PATH&#34;,&#34;bytes&#34;:&#34;BYTES&#34;}</span>
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$OUTPUT</span><span class="s2">&#34;</span> |
    <span class="nb">sed</span> <span class="nt">-e</span> <span class="s2">&#34;s%^</span><span class="se">\(</span><span class="s2">.*</span><span class="se">\)\t</span><span class="s2">.*/</span><span class="se">\(</span><span class="s2">.*</span><span class="se">\)</span><span class="nv">$%</span><span class="s2">{</span><span class="se">\&#34;</span><span class="s2">path</span><span class="se">\&#34;</span><span class="s2">:</span><span class="se">\&#34;\2\&#34;</span><span class="s2">,</span><span class="se">\&#34;</span><span class="s2">bytes</span><span class="se">\&#34;</span><span class="s2">:</span><span class="se">\&#34;\1\&#34;</span><span class="s2">}%&#34;</span> |
    <span class="nb">tr</span> <span class="nt">-d</span> <span class="s1">&#39;\n&#39;</span>
<span class="k">else</span>
  <span class="c"># Print du output as is</span>
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$OUTPUT</span><span class="s2">&#34;</span>
<span class="k">fi</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></figure></section>
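<p>The <code>sed</code> expression used for the JSON output can be checked in isolation; given a <code>du</code> line made of a byte count, a tab and a path, it keeps the last path component and the size:</p>

```shell
#!/bin/sh
# Stand-alone check of the sed expression used by the du hook script: it turns
# a "BYTES<TAB>PATH" line into a small JSON dictionary.
printf '4096\t./scs/mydir\n' |
  sed -e 's%^\(.*\)\t.*/\(.*\)$%{"path":"\2","bytes":"\1"}%'
# Prints: {"path":"mydir","bytes":"4096"}
```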
<section class="doc-section level-3"><h4 id="_the_hardlink_hook_script">The <code>hardlink</code> hook script</h4><p>The <code>hardlink</code> hook script is really simple: it just runs the
<a href="https://github.com/util-linux/util-linux/" target="_blank" rel="noopener">util-linux</a> version of the
<a href="https://github.com/util-linux/util-linux/blob/master/misc-utils/hardlink.1.adoc" target="_blank" rel="noopener">hardlink</a>
command on its working directory:</p>
<figure class="listing-block"><figcaption>hooks/hardlink.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>
hardlink <span class="nt">--ignore-time</span> <span class="nt">--maximize</span> .</code></pre></figure>
<p>We use it to reduce the size of the stored content: to manage versions of
files and folders we keep each version in a separate directory, and when one or
more files are unchanged between versions this script turns them into hardlinks
to the same file on disk, reducing the space used.</p></section>
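<p>The effect can be illustrated with plain coreutils, linking a single pair of files by hand (the <code>hardlink</code> tool automates this for whole trees):</p>

```shell
#!/bin/sh
# Manual illustration of what hardlinking achieves: after linking, both
# version directories share a single inode for the unchanged file.
set -e
tmp="$(mktemp -d)"
mkdir "$tmp/v1" "$tmp/v2"
echo "same content" >"$tmp/v1/doc.txt"
echo "same content" >"$tmp/v2/doc.txt"
ln -f "$tmp/v1/doc.txt" "$tmp/v2/doc.txt"
stat -c '%h' "$tmp/v1/doc.txt" # Prints: 2 (two names, one inode on disk)
rm -rf "$tmp"
```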
<section class="doc-section level-3"><h4 id="_the_s3sync_hook_script">The <code>s3sync</code> hook script</h4><p>The <code>s3sync</code> hook script uses the <a href="https://github.com/s3fs-fuse/s3fs-fuse">s3fs</a>
tool to mount a bucket and synchronise data between a folder inside the bucket
and a directory on the filesystem using <code>rsync</code>; all values needed to execute
the task are taken from environment variables:</p>
<figure class="listing-block"><figcaption>hooks/s3sync.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/ash</span>
<span class="nb">set</span> <span class="nt">-euo</span> pipefail
<span class="nb">set</span> <span class="nt">-o</span> errexit
<span class="nb">set</span> <span class="nt">-o</span> errtrace
<span class="c"># Functions</span>
finish<span class="o">()</span> <span class="o">{</span>
  <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span>
  <span class="nb">echo</span> <span class="s2">&#34;&#34;</span>
  <span class="nb">echo</span> <span class="s2">&#34;Script exit code: </span><span class="nv">$ret</span><span class="s2">&#34;</span>
  <span class="nb">exit</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span>
<span class="o">}</span>
<span class="c"># Check variables</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$AWS_KEY</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$AWS_SECRET_KEY</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$S3_BUCKET</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span>
  <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$S3_PATH</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$SCS_PATH</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$AWS_KEY</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">&#34;Set the AWS_KEY environment variable&#34;</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$AWS_SECRET_KEY</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">&#34;Set the AWS_SECRET_KEY environment variable&#34;</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$S3_BUCKET</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">&#34;Set the S3_BUCKET environment variable&#34;</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$S3_PATH</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">&#34;Set the S3_PATH environment variable&#34;</span>
  <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$SCS_PATH</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">&#34;Set the SCS_PATH environment variable&#34;</span>
  finish 1
<span class="k">fi
if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$S3_REGION</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$S3_REGION</span><span class="s2">&#34;</span> <span class="o">!=</span> <span class="s2">&#34;us-east-1&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nv">EP_URL</span><span class="o">=</span><span class="s2">&#34;endpoint=</span><span class="nv">$S3_REGION</span><span class="s2">,url=https://s3.</span><span class="nv">$S3_REGION</span><span class="s2">.amazonaws.com&#34;</span>
<span class="k">else
  </span><span class="nv">EP_URL</span><span class="o">=</span><span class="s2">&#34;endpoint=us-east-1&#34;</span>
<span class="k">fi</span>
<span class="c"># Prepare working directory</span>
<span class="nv">WORK_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">mktemp</span> <span class="nt">-p</span> <span class="s2">&#34;</span><span class="nv">$HOME</span><span class="s2">&#34;</span> <span class="nt">-d</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">MNT_POINT</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WORK_DIR</span><span class="s2">/s3data&#34;</span>
<span class="nv">PASSWD_S3FS</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WORK_DIR</span><span class="s2">/.passwd-s3fs&#34;</span>
<span class="c"># Check the mountpoint</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$MNT_POINT</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">mkdir</span> <span class="nt">-p</span> <span class="s2">&#34;</span><span class="nv">$MNT_POINT</span><span class="s2">&#34;</span>
<span class="k">elif </span>mountpoint <span class="s2">&#34;</span><span class="nv">$MNT_POINT</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">echo</span> <span class="s2">&#34;There is already something mounted on &#39;</span><span class="nv">$MNT_POINT</span><span class="s2">&#39;, aborting!&#34;</span>
  finish 1
<span class="k">fi</span>
<span class="c"># Create password file</span>
<span class="nb">touch</span> <span class="s2">&#34;</span><span class="nv">$PASSWD_S3FS</span><span class="s2">&#34;</span>
<span class="nb">chmod </span>0400 <span class="s2">&#34;</span><span class="nv">$PASSWD_S3FS</span><span class="s2">&#34;</span>
<span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$AWS_KEY</span><span class="s2">:</span><span class="nv">$AWS_SECRET_KEY</span><span class="s2">&#34;</span> <span class="o">&gt;</span><span class="s2">&#34;</span><span class="nv">$PASSWD_S3FS</span><span class="s2">&#34;</span>
<span class="c"># Mount s3 bucket as a filesystem</span>
s3fs <span class="nt">-o</span> <span class="nv">dbglevel</span><span class="o">=</span>info,retries<span class="o">=</span>5 <span class="nt">-o</span> <span class="s2">&#34;</span><span class="nv">$EP_URL</span><span class="s2">&#34;</span> <span class="nt">-o</span> <span class="s2">&#34;passwd_file=</span><span class="nv">$PASSWD_S3FS</span><span class="s2">&#34;</span> <span class="se">\</span>
  <span class="s2">&#34;</span><span class="nv">$S3_BUCKET</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$MNT_POINT</span><span class="s2">&#34;</span>
<span class="nb">echo</span> <span class="s2">&#34;Mounted bucket &#39;</span><span class="nv">$S3_BUCKET</span><span class="s2">&#39; on &#39;</span><span class="nv">$MNT_POINT</span><span class="s2">&#39;&#34;</span>
<span class="c"># Remove the password file, just in case</span>
<span class="nb">rm</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$PASSWD_S3FS</span><span class="s2">&#34;</span>
<span class="c"># Check source PATH</span>
<span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;0&#34;</span>
<span class="nv">SRC_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$MNT_POINT</span><span class="s2">/</span><span class="nv">$S3_PATH</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$SRC_PATH</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">echo</span> <span class="s2">&#34;The S3_PATH &#39;</span><span class="nv">$S3_PATH</span><span class="s2">&#39; can&#39;t be found!&#34;</span>
  <span class="nv">ret</span><span class="o">=</span>1
<span class="k">fi</span>
<span class="c"># Compute SCS_UID &amp; SCS_GID (by default based on the working directory owner)</span>
<span class="nv">SCS_UID</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">SCS_UID</span>:<span class="p">=</span><span class="si">$(</span><span class="nb">stat</span> <span class="nt">-c</span> <span class="s2">&#34;%u&#34;</span> <span class="s2">&#34;.&#34;</span> 2&gt;/dev/null<span class="si">)</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">true
</span><span class="nv">SCS_GID</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">SCS_GID</span>:<span class="p">=</span><span class="si">$(</span><span class="nb">stat</span> <span class="nt">-c</span> <span class="s2">&#34;%g&#34;</span> <span class="s2">&#34;.&#34;</span> 2&gt;/dev/null<span class="si">)</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">true</span>
<span class="c"># Check destination PATH</span>
<span class="nv">DST_PATH</span><span class="o">=</span><span class="s2">&#34;./</span><span class="nv">$SCS_PATH</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$DST_PATH</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">mkdir</span> <span class="nt">-p</span> <span class="s2">&#34;</span><span class="nv">$DST_PATH</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">fi</span>
<span class="c"># Copy using rsync</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>rsync <span class="nt">-rlptv</span> <span class="nt">--chown</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$SCS_UID</span><span class="s2">:</span><span class="nv">$SCS_GID</span><span class="s2">&#34;</span> <span class="nt">--delete</span> <span class="nt">--stats</span> <span class="se">\</span>
    <span class="s2">&#34;</span><span class="nv">$SRC_PATH</span><span class="s2">/&#34;</span> <span class="s2">&#34;</span><span class="nv">$DST_PATH</span><span class="s2">/&#34;</span> <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">fi</span>
<span class="c"># Unmount the S3 bucket</span>
umount <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$MNT_POINT</span><span class="s2">&#34;</span>
<span class="nb">echo</span> <span class="s2">&#34;Called umount for &#39;</span><span class="nv">$MNT_POINT</span><span class="s2">&#39;&#34;</span>
<span class="c"># Remove mount point dir</span>
<span class="nb">rmdir</span> <span class="s2">&#34;</span><span class="nv">$MNT_POINT</span><span class="s2">&#34;</span>
<span class="c"># Remove WORK_DIR</span>
<span class="nb">rmdir</span> <span class="s2">&#34;</span><span class="nv">$WORK_DIR</span><span class="s2">&#34;</span>
<span class="c"># We are done</span>
finish <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></figure></section></section></section>
<section class="doc-section level-1"><h2 id="_deployment_objects">Deployment objects</h2><p>The system is deployed as a <code>StatefulSet</code> with one replica.</p>
<p>Our production deployment is done on <a href="https://aws.amazon.com/" target="_blank" rel="noopener">AWS</a> and to be
able to scale we use <a href="https://aws.amazon.com/efs/" target="_blank" rel="noopener">EFS</a> for our
<code>PersistentVolume</code>; the idea is that the volume has no size limit, its
<code>AccessMode</code> can be set to <code>ReadWriteMany</code> and we can mount it from multiple
instances of the Pod without issues, even if they are in different availability
zones.</p>
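<p>As a sketch, a StorageClass for dynamic EFS provisioning with the aws-efs-csi-driver could look like the following (the class name and file system id are placeholders, and the exact parameters depend on the driver version in use):</p>

```yaml
# Hypothetical EFS StorageClass; fs-XXXXXXXXXXXXXXXXX is a placeholder for a
# real EFS file system id.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-XXXXXXXXXXXXXXXXX
  directoryPerms: "700"
```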
<p>For development we use <a href="https://k3d.io/">k3d</a> and we are also able to scale the
<code>StatefulSet</code> for testing: although we use a <code>ReadWriteOnce</code> PVC, it points
to a <code>hostPath</code> backed by a folder that is mounted on all the
compute nodes, so in reality Pods on different <code>k3d</code> nodes use the same folder
on the host.</p>
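<p>The shared folder can be declared when creating the cluster; with a k3d config file it would be something like the following (the host path is an example, and the exact <code>apiVersion</code> depends on the k3d version in use):</p>

```yaml
# Hypothetical k3d cluster config: mount the same host directory on the
# /volumes path of every node.
apiVersion: k3d.io/v1alpha5
kind: Simple
volumes:
  - volume: /data/k3d/volumes:/volumes
    nodeFilters:
      - all
```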
<section class="doc-section level-2"><h3 id="_secrets_yaml">secrets.yaml</h3><p>The secrets file contains the files used by the <code>mysecureshell</code> container;
they can be generated using Kubernetes pods as follows (we are only creating
the <code>scs</code> user):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>kubectl run <span class="s2">&#34;mysecureshell&#34;</span> <span class="nt">--restart</span><span class="o">=</span><span class="s1">&#39;Never&#39;</span> <span class="nt">--quiet</span> <span class="nt">--rm</span> <span class="nt">--stdin</span> <span class="se">\</span>
<span class="gp">  --image &#34;stodh/mysecureshell:latest&#34; -- gen-host-keys &gt;</span><span class="s2">&#34;./host_keys.txt&#34;</span>
<span class="gp">$</span><span class="w"> </span>kubectl run <span class="s2">&#34;mysecureshell&#34;</span> <span class="nt">--restart</span><span class="o">=</span><span class="s1">&#39;Never&#39;</span> <span class="nt">--quiet</span> <span class="nt">--rm</span> <span class="nt">--stdin</span> <span class="se">\</span>
<span class="gp">  --image &#34;stodh/mysecureshell:latest&#34; -- gen-users-tar scs &gt;</span><span class="s2">&#34;./users.tar&#34;</span></code></pre></div>
<p>Once we have the files we can generate the <code>secrets.yaml</code> file as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">tar </span>xf ./users.tar user_keys.txt user_pass.txt
<span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">--dry-run</span><span class="o">=</span>client <span class="nt">-o</span> yaml create secret generic <span class="s2">&#34;scs-secret&#34;</span> <span class="se">\</span>
<span class="go">  --from-file=&#34;host_keys.txt=host_keys.txt&#34; \
  --from-file=&#34;user_keys.txt=user_keys.txt&#34; \
</span><span class="gp">  --from-file=&#34;user_pass.txt=user_pass.txt&#34; &gt;</span><span class="w"> </span>./secrets.yaml</code></pre></div>
<p>The resulting <code>secrets.yaml</code> will look like the following file (the <code>base64</code>
values correspond to the content of the files, of course):</p>
<figure class="listing-block"><figcaption>secrets.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">data</span><span class="pi">:</span>
  <span class="na">host_keys.txt</span><span class="pi">:</span> <span class="s">TWlt...</span>
  <span class="na">user_keys.txt</span><span class="pi">:</span> <span class="s">c2Nz...</span>
  <span class="na">user_pass.txt</span><span class="pi">:</span> <span class="s">c2Nz...</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Secret</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">creationTimestamp</span><span class="pi">:</span> <span class="kc">null</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs-secret</span></code></pre></figure></section>
<section class="doc-section level-2"><h3 id="_pvc_yaml">pvc.yaml</h3><p>The persistent volume claim for a simple deployment (one with only one instance
of the <code>StatefulSet</code>) can be as simple as this:</p>
<figure class="listing-block"><figcaption>pvc.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolumeClaim</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs-pvc</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">scs</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">accessModes</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">ReadWriteOnce</span>
  <span class="na">resources</span><span class="pi">:</span>
    <span class="na">requests</span><span class="pi">:</span>
      <span class="na">storage</span><span class="pi">:</span> <span class="s">8Gi</span></code></pre></figure>
<p>In this definition we don’t set the <code>storageClassName</code>, so the default storage class is used.</p>
<section class="doc-section level-3"><h4 id="_volumes_in_our_development_environment_k3d">Volumes in our development environment (k3d)</h4><p>In our development deployment we create the following <code>PersistentVolume</code>, as
required by the
<a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner">Local
Persistence Volume Static Provisioner</a>. Note that the <code>/volumes/scs-pv</code> directory
has to be created by hand; in our <code>k3d</code> system we mount the same host directory
on the <code>/volumes</code> path of all the nodes and create the <code>scs-pv</code> directory
manually before deploying the persistent volume:</p>
<figure class="listing-block"><figcaption>k3d-pv.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolume</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs-pv</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">scs</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">capacity</span><span class="pi">:</span>
    <span class="na">storage</span><span class="pi">:</span> <span class="s">8Gi</span>
  <span class="na">volumeMode</span><span class="pi">:</span> <span class="s">Filesystem</span>
  <span class="na">accessModes</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">ReadWriteOnce</span>
  <span class="na">persistentVolumeReclaimPolicy</span><span class="pi">:</span> <span class="s">Delete</span>
  <span class="na">claimRef</span><span class="pi">:</span>
    <span class="na">name</span><span class="pi">:</span> <span class="s">scs-pvc</span>
  <span class="na">storageClassName</span><span class="pi">:</span> <span class="s">local-storage</span>
  <span class="na">local</span><span class="pi">:</span>
    <span class="na">path</span><span class="pi">:</span> <span class="s">/volumes/scs-pv</span>
  <span class="na">nodeAffinity</span><span class="pi">:</span>
    <span class="na">required</span><span class="pi">:</span>
      <span class="na">nodeSelectorTerms</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">matchExpressions</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">key</span><span class="pi">:</span> <span class="s">node.kubernetes.io/instance-type</span>
          <span class="na">operator</span><span class="pi">:</span> <span class="s">In</span>
          <span class="na">values</span><span class="pi">:</span>
          <span class="pi">-</span> <span class="s">k3s</span></code></pre></figure>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>The <code>nodeAffinity</code> section is required, but in practice the current definition
selects all <code>k3d</code> nodes.</p></aside>
<p>And to make sure that everything works as expected we update the PVC definition
to add the right <code>storageClassName</code>:</p>
<figure class="listing-block"><figcaption>k3d-pvc.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolumeClaim</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs-pvc</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">scs</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">accessModes</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">ReadWriteOnce</span>
  <span class="na">resources</span><span class="pi">:</span>
    <span class="na">requests</span><span class="pi">:</span>
      <span class="na">storage</span><span class="pi">:</span> <span class="s">8Gi</span>
  <span class="na">storageClassName</span><span class="pi">:</span> <span class="s">local-storage</span></code></pre></figure></section>
<section class="doc-section level-3"><h4 id="_volumes_in_our_production_environment_aws">Volumes in our production environment (aws)</h4><p>In the production deployment we don’t create the <code>PersistentVolume</code> (we use the
<a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver">aws-efs-csi-driver</a>, which
supports <em>Dynamic Provisioning</em>), but we add the <code>storageClassName</code> (set to
the one mapped to the <code>EFS</code> driver, i.e. <code>efs-sc</code>) and use <code>ReadWriteMany</code>
as the <code>accessMode</code>:</p>
<figure class="listing-block"><figcaption>efs-pvc.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">PersistentVolumeClaim</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs-pvc</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">scs</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">accessModes</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="s">ReadWriteMany</span>
  <span class="na">resources</span><span class="pi">:</span>
    <span class="na">requests</span><span class="pi">:</span>
      <span class="na">storage</span><span class="pi">:</span> <span class="s">8Gi</span>
  <span class="na">storageClassName</span><span class="pi">:</span> <span class="s">efs-sc</span></code></pre></figure></section></section>
<section class="doc-section level-2"><h3 id="_statefulset_yaml">statefulset.yaml</h3><p>The definition of the <code>statefulSet</code> is as follows:</p>
<figure class="listing-block"><figcaption>statefulset.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">apps/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">StatefulSet</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">scs</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">serviceName</span><span class="pi">:</span> <span class="s">scs</span>
  <span class="na">replicas</span><span class="pi">:</span> <span class="m">1</span>
  <span class="na">selector</span><span class="pi">:</span>
    <span class="na">matchLabels</span><span class="pi">:</span>
      <span class="na">app</span><span class="pi">:</span> <span class="s">scs</span>
  <span class="na">template</span><span class="pi">:</span>
    <span class="na">metadata</span><span class="pi">:</span>
      <span class="na">labels</span><span class="pi">:</span>
        <span class="na">app</span><span class="pi">:</span> <span class="s">scs</span>
    <span class="na">spec</span><span class="pi">:</span>
      <span class="na">containers</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">nginx</span>
        <span class="na">image</span><span class="pi">:</span> <span class="s">stodh/nginx-scs:latest</span>
        <span class="na">ports</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="m">80</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">http</span>
        <span class="na">env</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">AUTH_REQUEST_URI</span>
          <span class="na">value</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">&#34;</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">HTML_ROOT</span>
          <span class="na">value</span><span class="pi">:</span> <span class="s">/sftp/data</span>
        <span class="na">volumeMounts</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">mountPath</span><span class="pi">:</span> <span class="s">/sftp</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">scs-datadir</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">mysecureshell</span>
        <span class="na">image</span><span class="pi">:</span> <span class="s">stodh/mysecureshell:latest</span>
        <span class="na">ports</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="m">22</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">ssh</span>
        <span class="na">securityContext</span><span class="pi">:</span>
          <span class="na">capabilities</span><span class="pi">:</span>
            <span class="na">add</span><span class="pi">:</span>
            <span class="pi">-</span> <span class="s">IPC_OWNER</span>
        <span class="na">env</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">SFTP_UID</span>
          <span class="na">value</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">2020&#39;</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">SFTP_GID</span>
          <span class="na">value</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">2020&#39;</span>
        <span class="na">volumeMounts</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">mountPath</span><span class="pi">:</span> <span class="s">/secrets</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">scs-file-secrets</span>
          <span class="na">readOnly</span><span class="pi">:</span> <span class="kc">true</span>
        <span class="pi">-</span> <span class="na">mountPath</span><span class="pi">:</span> <span class="s">/sftp</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">scs-datadir</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">webhook</span>
        <span class="na">image</span><span class="pi">:</span> <span class="s">stodh/webhook-scs:latest</span>
        <span class="na">securityContext</span><span class="pi">:</span>
          <span class="na">privileged</span><span class="pi">:</span> <span class="kc">true</span>
        <span class="na">ports</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">containerPort</span><span class="pi">:</span> <span class="m">9000</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">webhook-http</span>
        <span class="na">env</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">WEBHOOK_WORKDIR</span>
          <span class="na">value</span><span class="pi">:</span> <span class="s">/sftp/data/scs</span>
        <span class="na">volumeMounts</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">devfuse</span>
          <span class="na">mountPath</span><span class="pi">:</span> <span class="s">/dev/fuse</span>
        <span class="pi">-</span> <span class="na">mountPath</span><span class="pi">:</span> <span class="s">/sftp</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">scs-datadir</span>
      <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">devfuse</span>
        <span class="na">hostPath</span><span class="pi">:</span>
          <span class="na">path</span><span class="pi">:</span> <span class="s">/dev/fuse</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">scs-file-secrets</span>
        <span class="na">secret</span><span class="pi">:</span>
          <span class="na">secretName</span><span class="pi">:</span> <span class="s">scs-secrets</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">scs-datadir</span>
        <span class="na">persistentVolumeClaim</span><span class="pi">:</span>
          <span class="na">claimName</span><span class="pi">:</span> <span class="s">scs-pvc</span></code></pre></figure>
<p>Notes about the containers:</p>
<div class="ulist"><ul><li><code>nginx</code>: As this is an example, the web server does not use an
<code>AUTH_REQUEST_URI</code> and serves the <code>/sftp/data</code> directory as the web root
(to reach the files uploaded by the <code>scs</code> user we will need to use <code>/scs/</code>
as a prefix on the URLs).</li><li><code>mysecureshell</code>: We add the <code>IPC_OWNER</code> capability to the container to
be able to use some of the <code>sftp-*</code> commands inside it, but they are
not really needed, so adding the capability is optional.</li><li><code>webhook</code>: We launch this container in privileged mode to be able to
use <code>s3fs-fuse</code>, as it will not work otherwise for now (see this
<a href="https://github.com/kubernetes/kubernetes/issues/7890">kubernetes issue</a>); if
that functionality is not needed the container can run with regular
privileges. Besides, as we are not enabling public access to this service, we
don’t define the <code>*_TOKEN</code> variables (if required, the values should be read from a
<code>Secret</code> object).</li></ul></div>
<p>Notes about the volumes:</p>
<div class="ulist"><ul><li>the <code>devfuse</code> volume is only needed if we plan to use the <code>s3fs</code> command on
the <code>webhook</code> container; if not, we can remove the volume definition and its
mounts.</li></ul></div></section>
<section class="doc-section level-2"><h3 id="_service_yaml">service.yaml</h3><p>To be able to access the different services on the statefulset we publish the
relevant ports using the following <code>Service</code> object:</p>
<figure class="listing-block"><figcaption>service.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Service</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs-svc</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">scs</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">ports</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">ssh</span>
    <span class="na">port</span><span class="pi">:</span> <span class="m">22</span>
    <span class="na">protocol</span><span class="pi">:</span> <span class="s">TCP</span>
    <span class="na">targetPort</span><span class="pi">:</span> <span class="m">22</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">http</span>
    <span class="na">port</span><span class="pi">:</span> <span class="m">80</span>
    <span class="na">protocol</span><span class="pi">:</span> <span class="s">TCP</span>
    <span class="na">targetPort</span><span class="pi">:</span> <span class="m">80</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">webhook-http</span>
    <span class="na">port</span><span class="pi">:</span> <span class="m">9000</span>
    <span class="na">protocol</span><span class="pi">:</span> <span class="s">TCP</span>
    <span class="na">targetPort</span><span class="pi">:</span> <span class="m">9000</span>
  <span class="na">selector</span><span class="pi">:</span>
    <span class="na">app</span><span class="pi">:</span> <span class="s">scs</span></code></pre></figure></section>
<section class="doc-section level-2"><h3 id="_ingress_yaml">ingress.yaml</h3><p>To download the <code>scs</code> files from the outside we can add an ingress object like
the following (the definition is for testing using the <code>localhost</code> name):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">networking.k8s.io/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Ingress</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">scs-ingress</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">app.kubernetes.io/name</span><span class="pi">:</span> <span class="s">scs</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">ingressClassName</span><span class="pi">:</span> <span class="s">nginx</span>
  <span class="na">rules</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">host</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">localhost&#39;</span>
    <span class="na">http</span><span class="pi">:</span>
      <span class="na">paths</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">path</span><span class="pi">:</span> <span class="s">/scs</span>
        <span class="na">pathType</span><span class="pi">:</span> <span class="s">Prefix</span>
        <span class="na">backend</span><span class="pi">:</span>
          <span class="na">service</span><span class="pi">:</span>
            <span class="na">name</span><span class="pi">:</span> <span class="s">scs-svc</span>
            <span class="na">port</span><span class="pi">:</span>
              <span class="na">number</span><span class="pi">:</span> <span class="s">80</span></code></pre></div></section></section>
<section class="doc-section level-1"><h2 id="_deployment">Deployment</h2><p>To deploy the <code>statefulSet</code> we create a namespace and apply the object
definitions shown before:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>kubectl create namespace scs-demo
<span class="go">namespace/scs-demo created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo apply <span class="nt">-f</span> secrets.yaml
<span class="go">secret/scs-secrets created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo apply <span class="nt">-f</span> pvc.yaml
<span class="go">persistentvolumeclaim/scs-pvc created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo apply <span class="nt">-f</span> statefulset.yaml
<span class="go">statefulset.apps/scs created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo apply <span class="nt">-f</span> service.yaml
<span class="go">service/scs-svc created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo apply <span class="nt">-f</span> ingress.yaml
<span class="go">ingress.networking.k8s.io/scs-ingress created</span></code></pre></div>
<p>Once the objects are deployed we can check that all is working using <code>kubectl</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>kubectl  <span class="nt">-n</span> scs-demo get all,secrets,ingress
<span class="go">NAME        READY   STATUS    RESTARTS   AGE
pod/scs-0   3/3     Running   0          24s

NAME            TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)                  AGE
</span><span class="gp">service/scs-svc ClusterIP  10.43.0.47  &lt;none&gt;</span><span class="w">       </span>22/TCP,80/TCP,9000/TCP   21s
<span class="go">
NAME                   READY   AGE
statefulset.apps/scs   1/1     24s

NAME                         TYPE                                  DATA   AGE
secret/default-token-mwcd7   kubernetes.io/service-account-token   3      53s
secret/scs-secrets           Opaque                                3      39s

NAME                                   CLASS  HOSTS      ADDRESS     PORTS   AGE
ingress.networking.k8s.io/scs-ingress  nginx  localhost  172.21.0.5  80      17s</span></code></pre></div>
<p>At this point we are ready to use the system.</p></section>
<section class="doc-section level-1"><h2 id="_usage_examples">Usage examples</h2><section class="doc-section level-2"><h3 id="_file_uploads">File uploads</h3><p>As previously mentioned, in our system the idea is to use the <code>sftp</code> server from
other Pods, but to test the system we are going to do a <code>kubectl port-forward</code>
and connect to the server using our host client and the password we generated
earlier (it is in the <code>user_pass.txt</code> file, inside the <code>users.tar</code> archive):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo port-forward service/scs-svc 2020:22 &amp;
<span class="gp">Forwarding from 127.0.0.1:2020 -&gt;</span><span class="w"> </span>22
<span class="gp">Forwarding from [::1]:2020 -&gt;</span><span class="w"> </span>22
<span class="gp">$</span><span class="w"> </span><span class="nv">PF_PID</span><span class="o">=</span><span class="nv">$!</span>
<span class="gp">$</span><span class="w"> </span>sftp <span class="nt">-P</span> 2020 scs@127.0.0.1                                                 <b class="conum">1</b>
<span class="go">Handling connection for 2020
The authenticity of host &#39;[127.0.0.1]:2020 ([127.0.0.1]:2020)&#39; can&#39;t be \
  established.
ED25519 key fingerprint is SHA256:eHNwCnyLcSSuVXXiLKeGraw0FT/4Bb/yjfqTstt+088.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added &#39;[127.0.0.1]:2020&#39; (ED25519) to the list of known \
  hosts.
scs@127.0.0.1&#39;s password: **********
Connected to 127.0.0.1.
</span><span class="gp">sftp&gt;</span><span class="w"> </span><span class="nb">ls</span> <span class="nt">-la</span>
<span class="go">drwxr-xr-x    2 sftp     sftp         4096 Sep 25 14:47 .
dr-xr-xr-x    3 sftp     sftp         4096 Sep 25 14:36 ..
</span><span class="gp">sftp&gt;</span><span class="w"> </span><span class="o">!</span><span class="nb">date</span> <span class="nt">-R</span> <span class="o">&gt;</span> /tmp/date.txt                                               <b class="conum">2</b>
<span class="gp">sftp&gt;</span><span class="w"> </span>put /tmp/date.txt <span class="nb">.</span>
<span class="go">Uploading /tmp/date.txt to /date.txt
date.txt                                      100%   32    27.8KB/s   00:00
</span><span class="gp">sftp&gt;</span><span class="w"> </span><span class="nb">ls</span> <span class="nt">-l</span>
<span class="go">-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt
</span><span class="gp">sftp&gt;</span><span class="w"> </span><span class="nb">ln </span>date.txt date.txt.1                                                 <b class="conum">3</b>
<span class="gp">sftp&gt;</span><span class="w"> </span><span class="nb">ls</span> <span class="nt">-l</span>
<span class="go">-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
</span><span class="gp">sftp&gt;</span><span class="w"> </span>put /tmp/date.txt date.txt.2                                           <b class="conum">4</b>
<span class="go">Uploading /tmp/date.txt to /date.txt.2
date.txt                                      100%   32    27.8KB/s   00:00
</span><span class="gp">sftp&gt;</span><span class="w"> </span><span class="nb">ls</span> <span class="nt">-l</span>                                                                  <b class="conum">5</b>
<span class="go">-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt
-rw-r--r--    2 sftp     sftp           32 Sep 25 15:21 date.txt.1
-rw-r--r--    1 sftp     sftp           32 Sep 25 15:21 date.txt.2
</span><span class="gp">sftp&gt;</span><span class="w"> </span><span class="nb">exit</span>
<span class="gp">$</span><span class="w"> </span><span class="nb">kill</span> <span class="s2">&#34;</span><span class="nv">$PF_PID</span><span class="s2">&#34;</span>
<span class="go">[1]  + terminated  kubectl -n scs-demo port-forward service/scs-svc 2020:22</span></code></pre><ol class="callout-list arabic"><li>We connect to the <code>sftp</code> service on the forwarded port as the <code>scs</code> user.</li><li>We create a file on the host and upload it to the remote directory.</li><li>We create a hard link of the uploaded file.</li><li>We upload a second copy of the file we created locally.</li><li>On the file listing we can see that the first two files now have two hard links.</li></ol></div>
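<p>The hard link behaviour seen in the listing can be reproduced locally with standard tools (a minimal sketch assuming GNU coreutils, run outside the cluster):</p>

```shell
# Create a file and a hard link to it; both names point at the same inode
tmpdir="$(mktemp -d)"
date -R > "$tmpdir/date.txt"
ln "$tmpdir/date.txt" "$tmpdir/date.txt.1"
# stat reports a link count of 2 for both names, as in the sftp listing
stat -c '%h %n' "$tmpdir"/date.txt*
rm -rf "$tmpdir"
```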
</section>
<section class="doc-section level-2"><h3 id="_file_retrievals">File retrievals</h3><p>If our ingress is configured correctly we can download the <code>date.txt</code> file from
the URL <a class="bare" href="http://localhost/scs/date.txt">http://localhost/scs/date.txt</a>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>curl <span class="nt">-s</span> http://localhost/scs/date.txt
<span class="go">Sun, 25 Sep 2022 17:21:51 +0200</span></code></pre></div></section>
<section class="doc-section level-2"><h3 id="_use_of_the_webhook_container">Use of the webhook container</h3><p>To finish this post we are going to show how we can call the <code>hooks</code> directly,
from a <code>CronJob</code> and from a <code>Job</code>.</p>
<section class="doc-section level-3"><h4 id="_direct_script_call_du">Direct script call (<code>du</code>)</h4><p>In our deployment the direct calls are made from other Pods; to simulate that we
do a <code>port-forward</code> and call the script with an existing <code>PATH</code> (the root
directory) and an invalid one:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo port-forward service/scs-svc 9000:9000 <span class="o">&gt;</span>/dev/null &amp;
<span class="gp">$</span><span class="w"> </span><span class="nv">PF_PID</span><span class="o">=</span><span class="nv">$!</span>
<span class="gp">$</span><span class="w"> </span><span class="nv">JSON</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>curl <span class="nt">-s</span> <span class="s2">&#34;http://localhost:9000/hooks/du?path=.&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="gp">$</span><span class="w"> </span><span class="nb">echo</span> <span class="nv">$JSON</span>
<span class="go">{&#34;path&#34;:&#34;&#34;,&#34;bytes&#34;:&#34;4160&#34;}
</span><span class="gp">$</span><span class="w"> </span><span class="nv">JSON</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>curl <span class="nt">-s</span> <span class="s2">&#34;http://localhost:9000/hooks/du?path=foo&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="gp">$</span><span class="w"> </span><span class="nb">echo</span> <span class="nv">$JSON</span>
<span class="go">{&#34;error&#34;:&#34;The provided PATH (&#39;foo&#39;) is not a directory&#34;}
</span><span class="gp">$</span><span class="w"> </span><span class="nb">kill</span> <span class="nv">$PF_PID</span></code></pre></div>
<p>As we only have files in the base directory we print the disk usage of the <code>.</code>
<code>PATH</code>; the output is in <code>json</code> format because we export <code>OUTPUT_FORMAT</code> with
the value <code>json</code> in the <code>webhook</code> configuration.</p></section>
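<p>When calling the hook from shell scripts on other Pods the JSON response has to be parsed; if <code>jq</code> is not available, a plain <code>sed</code> extraction is enough for this simple payload (a sketch using the response shown above):</p>

```shell
# Extract the "bytes" field from the du hook response without jq
JSON='{"path":"","bytes":"4160"}'
bytes="$(echo "$JSON" | sed -n 's/.*"bytes":"\([0-9]*\)".*/\1/p')"
echo "$bytes"  # prints 4160
```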
<section class="doc-section level-3"><h4 id="_cronjobs_hardlink">Cronjobs (<code>hardlink</code>)</h4><p>As explained before, the <code>webhook</code> container can be used to run <code>cronjobs</code>; the
following one uses an <code>alpine</code> container to call the <code>hardlink</code> script each
minute (that setup is for testing, obviously):</p>
<figure class="listing-block"><figcaption>webhook-cronjob.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">batch/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">CronJob</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">hardlink</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">cronjob</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">hardlink&#39;</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">schedule</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">*</span><span class="nv"> </span><span class="s">*/1</span><span class="nv"> </span><span class="s">*</span><span class="nv"> </span><span class="s">*</span><span class="nv"> </span><span class="s">*&#34;</span>
  <span class="na">concurrencyPolicy</span><span class="pi">:</span> <span class="s">Replace</span>
  <span class="na">jobTemplate</span><span class="pi">:</span>
    <span class="na">spec</span><span class="pi">:</span>
      <span class="na">template</span><span class="pi">:</span>
        <span class="na">metadata</span><span class="pi">:</span>
          <span class="na">labels</span><span class="pi">:</span>
            <span class="na">cronjob</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">hardlink&#39;</span>
        <span class="na">spec</span><span class="pi">:</span>
          <span class="na">containers</span><span class="pi">:</span>
          <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">hardlink-cronjob</span>
            <span class="na">image</span><span class="pi">:</span> <span class="s">alpine:latest</span>
            <span class="na">command</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">&#34;</span><span class="s">wget&#34;</span><span class="pi">,</span> <span class="s2">&#34;</span><span class="s">-q&#34;</span><span class="pi">,</span> <span class="s2">&#34;</span><span class="s">-O-&#34;</span><span class="pi">,</span> <span class="s2">&#34;</span><span class="s">http://scs-svc:9000/hooks/hardlink&#34;</span><span class="pi">]</span>
          <span class="na">restartPolicy</span><span class="pi">:</span> <span class="s">Never</span></code></pre></figure>
<p>The following console session shows how we create the object, allow a couple of
executions and remove it (in production we keep it running, but once a day
instead of every minute):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo apply <span class="nt">-f</span> webhook-cronjob.yaml                          <b class="conum">1</b>
<span class="go">cronjob.batch/hardlink created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo get pods <span class="nt">-l</span> <span class="s2">&#34;cronjob=hardlink&#34;</span> <span class="nt">-w</span>                      <b class="conum">2</b>
<span class="go">NAME                      READY   STATUS    RESTARTS   AGE
hardlink-27735351-zvpnb   0/1     Pending   0          0s
hardlink-27735351-zvpnb   0/1     ContainerCreating   0          0s
hardlink-27735351-zvpnb   0/1     Completed           0          2s
^C
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo logs pod/hardlink-27735351-zvpnb                       <b class="conum">3</b>
<span class="go">Mode:                     real
Method:                   sha256
Files:                    3
Linked:                   1 files
Compared:                 0 xattrs
Compared:                 1 files
Saved:                    32 B
Duration:                 0.000220 seconds
</span><span class="gp">$</span><span class="w"> </span><span class="nb">sleep </span>60
<span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo get pods <span class="nt">-l</span> <span class="s2">&#34;cronjob=hardlink&#34;</span>                         <b class="conum">4</b>
<span class="go">NAME                      READY   STATUS      RESTARTS   AGE
hardlink-27735351-zvpnb   0/1     Completed   0          83s
hardlink-27735352-br5rn   0/1     Completed   0          23s
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo logs pod/hardlink-27735352-br5rn                       <b class="conum">5</b>
<span class="go">Mode:                     real
Method:                   sha256
Files:                    3
Linked:                   0 files
Compared:                 0 xattrs
Compared:                 0 files
Saved:                    0 B
Duration:                 0.000070 seconds
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo delete <span class="nt">-f</span> webhook-cronjob.yaml                         <b class="conum">6</b>
<span class="go">cronjob.batch &#34;hardlink&#34; deleted</span></code></pre><ol class="callout-list arabic"><li>This command creates the cronjob object.</li><li>This checks the pods with our <code>cronjob</code> label, we interrupt it once we see
that the first run has been completed.</li><li>With this command we see the output of the execution, as this is the fist
execution we see that <code>date.txt.2</code> has been replaced by a hardlink (the
summary does not name the file, but it is the only option knowing the
contents from the original upload).</li><li>After waiting a little bit we check the pods executed again to get the name
of the latest one.</li><li>The log now shows that nothing was done.</li><li>As this is a demo, we delete the cronjob.</li></ol></div>
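<p>As an aside, what the <code>hardlink</code> run above does to the files can be reproduced
locally; the following sketch is not part of the demo (the file names only mirror the
ones used there, and GNU <code>stat</code> is assumed) and shows how two identical files end up
sharing a single inode:</p>

```shell
# Sketch: deduplicate two identical files the way `hardlink` does, by
# replacing the copy with a hard link to the original.
tmpdir="$(mktemp -d)"
printf '2022-08-15\n' > "$tmpdir/date.txt"
cp "$tmpdir/date.txt" "$tmpdir/date.txt.2"
# This is what the hardlink pass does for files with identical contents:
ln -f "$tmpdir/date.txt" "$tmpdir/date.txt.2"
# Both names now reference the same inode, so the link count is 2
stat -c '%n -> inode %i (links %h)' "$tmpdir/date.txt" "$tmpdir/date.txt.2"
```

<p>This also explains the second run's log: once the files share an inode there is
nothing left to link, so the tool reports zero files linked.</p>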
</section></section>
<section class="doc-section level-2"><h3 id="_jobs_s3sync">Jobs (<code>s3sync</code>)</h3><p>The following job can be used to synchronise the contents of a directory in an
S3 bucket with the SCS Filesystem:</p>
<figure class="listing-block"><figcaption>job.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">batch/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Job</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">s3sync</span>
  <span class="na">labels</span><span class="pi">:</span>
    <span class="na">cronjob</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">s3sync&#39;</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">template</span><span class="pi">:</span>
    <span class="na">metadata</span><span class="pi">:</span>
      <span class="na">labels</span><span class="pi">:</span>
        <span class="na">cronjob</span><span class="pi">:</span> <span class="s1">&#39;</span><span class="s">s3sync&#39;</span>
    <span class="na">spec</span><span class="pi">:</span>
      <span class="na">containers</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">s3sync-job</span>
        <span class="na">image</span><span class="pi">:</span> <span class="s">alpine:latest</span>
        <span class="na">command</span><span class="pi">:</span> 
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">wget&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">-q&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">--header&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">Content-Type:</span><span class="nv"> </span><span class="s">application/json&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">--post-file&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">/secrets/s3sync.json&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">-O-&#34;</span>
        <span class="pi">-</span> <span class="s2">&#34;</span><span class="s">http://scs-svc:9000/hooks/s3sync&#34;</span>
        <span class="na">volumeMounts</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">mountPath</span><span class="pi">:</span> <span class="s">/secrets</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">job-secrets</span>
          <span class="na">readOnly</span><span class="pi">:</span> <span class="kc">true</span>
      <span class="na">restartPolicy</span><span class="pi">:</span> <span class="s">Never</span>
      <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">job-secrets</span>
        <span class="na">secret</span><span class="pi">:</span>
          <span class="na">secretName</span><span class="pi">:</span> <span class="s">webhook-job-secrets</span></code></pre></figure>
<p>The file with the parameters for the script must look something like this:</p>
<figure class="listing-block"><figcaption>s3sync.json</figcaption>
<pre class="rouge highlight"><code data-lang="json"><span class="p">{</span><span class="w">
  </span><span class="nl">&#34;aws&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">&#34;key&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;********************&#34;</span><span class="p">,</span><span class="w">
    </span><span class="nl">&#34;secret_key&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;****************************************&#34;</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">&#34;s3&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">&#34;region&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;eu-north-1&#34;</span><span class="p">,</span><span class="w">
    </span><span class="nl">&#34;bucket&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;blogops-test&#34;</span><span class="p">,</span><span class="w">
    </span><span class="nl">&#34;path&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;test&#34;</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">&#34;scs&#34;</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">&#34;path&#34;</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;test&#34;</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span></code></pre></figure>
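<p>Before storing the file in a secret it is worth checking that it is valid JSON, as a
malformed file would only fail later, inside the webhook container; a quick local
check (this uses <code>python3</code>, assumed to be available on the workstation, and
placeholder values instead of real credentials):</p>

```shell
# Write a throwaway parameters file (placeholder values) and validate it
# before creating the webhook-job-secrets secret from it.
cat > s3sync.json <<'EOF'
{
  "aws": {"key": "PLACEHOLDER_KEY", "secret_key": "PLACEHOLDER_SECRET"},
  "s3": {"region": "eu-north-1", "bucket": "blogops-test", "path": "test"},
  "scs": {"path": "test"}
}
EOF
python3 -m json.tool s3sync.json > /dev/null && echo "s3sync.json: valid JSON"
```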
<p>Once we have both files we can run the Job as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo create secret generic webhook-job-secrets <span class="se">\ </span>           <b class="conum">1</b>
<span class="go">  --from-file=&#34;s3sync.json=s3sync.json&#34;
secret/webhook-job-secrets created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo apply <span class="nt">-f</span> webhook-job.yaml                              <b class="conum">2</b>
<span class="go">job.batch/s3sync created
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo get pods <span class="nt">-l</span> <span class="s2">&#34;cronjob=s3sync&#34;</span>                           <b class="conum">3</b>
<span class="go">NAME           READY   STATUS      RESTARTS   AGE
s3sync-zx2cj   0/1     Completed   0          12s
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo logs s3sync-zx2cj                                      <b class="conum">4</b>
<span class="go">Mounted bucket &#39;s3fs-test&#39; on &#39;/root/tmp.jiOjaF/s3data&#39;
sending incremental file list
created directory ./test
./
kyso.png

Number of files: 2 (reg: 1, dir: 1)
Number of created files: 2 (reg: 1, dir: 1)
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 15,075 bytes
Total transferred file size: 15,075 bytes
Literal data: 15,075 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.147 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 15,183
Total bytes received: 74

sent 15,183 bytes  received 74 bytes  30,514.00 bytes/sec
total size is 15,075  speedup is 0.99
Called umount for &#39;/root/tmp.jiOjaF/s3data&#39;

Script exit code: 0
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo delete <span class="nt">-f</span> webhook-job.yaml                             <b class="conum">5</b>
<span class="go">job.batch &#34;s3sync&#34; deleted
</span><span class="gp">$</span><span class="w"> </span>kubectl <span class="nt">-n</span> scs-demo delete secrets webhook-job-secrets                     <b class="conum">6</b>
<span class="go">secret &#34;webhook-job-secrets&#34; deleted</span></code></pre><ol class="callout-list arabic"><li>Here we create the <code>webhook-job-secrets</code> secret that contains the
<code>s3sync.json</code> file.</li><li>This command runs the job.</li><li>Filtering by the label <code>cronjob=s3sync</code> we get the Pods created by the job.</li><li>Here we print the logs of the completed job.</li><li>Once we are finished we remove the Job.</li><li>And also the secret.</li></ol></div>
</section></section>
<section class="doc-section level-1"><h2 id="_final_remarks">Final remarks</h2><p>This post turned out longer than I expected, but I believe it can be useful to
someone; in any case, next time I’ll try to cover something shorter or will
split it into multiple entries.</p></section>
]]></content:encoded>
    </item>
    
    <item>
      <title>Using Git Server Hooks on GitLab CE to Validate Tags</title>
      <link>https://blogops.mixinet.net/posts/using_git_server_hooks/</link>
      <pubDate>Mon, 01 Aug 2022 13:00:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/using_git_server_hooks/</guid>
<description>&lt;p&gt;I’ve been a
&lt;a href=&#34;https://about.gitlab.com/install/?version=ce&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;gitlab-ce&lt;/a&gt; user for a long time; in fact, I’ve set
it up at three of the last four companies I’ve worked for (initially I installed
it using the &lt;a href=&#34;https://docs.gitlab.com/omnibus/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;omnibus&lt;/a&gt; packages on a
&lt;a href=&#34;https://www.debian.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;debian&lt;/a&gt; server, but at the last two places I moved to
the &lt;a href=&#34;https://docs.gitlab.com/ee/install/docker.html&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;docker&lt;/a&gt; based installation,
as it is easy to maintain and we don’t need a big installation since the teams
using it are small).&lt;/p&gt;
&lt;p&gt;At the company I work for now (&lt;a href=&#34;https://about.kyso.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;kyso&lt;/a&gt;) we are using it
to host all our internal repositories and to do all the CI/CD work (the
automatic deployments are triggered by web hooks in some cases, but the rest is
all done using &lt;a href=&#34;https://about.gitlab.com/gitlab-ci/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;gitlab-ci&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;The majority of our projects use &lt;a href=&#34;https://nodejs.dev/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;nodejs&lt;/a&gt; as their programming
language, and we have automated the publication of &lt;code&gt;npm&lt;/code&gt; packages to our gitlab
instance’s &lt;a href=&#34;https://docs.gitlab.com/ee/user/packages/npm_registry/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;npm registry&lt;/a&gt;
and even to the &lt;a href=&#34;https://www.npmjs.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;npmjs&lt;/a&gt; registry.&lt;/p&gt;
&lt;p&gt;To publish the packages we added rules to the &lt;code&gt;gitlab-ci&lt;/code&gt; configuration of
the relevant repositories so that they are published when a &lt;code&gt;tag&lt;/code&gt; is created.&lt;/p&gt;
&lt;p&gt;As we are &lt;em&gt;lazy&lt;/em&gt; by definition, I configured the system to use the &lt;code&gt;tag&lt;/code&gt; as
the package version; I checked whether the contents of the &lt;code&gt;package.json&lt;/code&gt; were in
sync with the expected version and, if they were not, updated the file and did a
force push of the &lt;code&gt;tag&lt;/code&gt; with the updated file using the following code in the
script that publishes the package:&lt;/p&gt;
&lt;div class=&#34;listing-block&#34;&gt;&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;sh&#34;&gt;&lt;span class=&#34;c&#34;&gt;# Update package version &amp;amp; add it to the .build-args&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;INITIAL_PACKAGE_VERSION&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;$(&lt;/span&gt;npm pkg get version|tr &lt;span class=&#34;nt&#34;&gt;-d&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;&amp;#34;&amp;#39;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
npm version &lt;span class=&#34;nt&#34;&gt;--allow-same&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;--no-commit-hooks&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;--no-git-tag-version&lt;/span&gt; &lt;span class=&#34;se&#34;&gt;\&lt;/span&gt;
  &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$CI_COMMIT_TAG&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;nv&#34;&gt;UPDATED_PACKAGE_VERSION&lt;/span&gt;&lt;span class=&#34;o&#34;&gt;=&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;$(&lt;/span&gt;npm pkg get version|tr &lt;span class=&#34;nt&#34;&gt;-d&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;&amp;#34;&amp;#39;&lt;/span&gt;&lt;span class=&#34;si&#34;&gt;)&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;UPDATED_PACKAGE_VERSION=&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$UPDATED_PACKAGE_VERSION&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;&amp;gt;&amp;gt;&lt;/span&gt; .build-args
&lt;span class=&#34;c&#34;&gt;# Update tag if the version was updated or abort&lt;/span&gt;
&lt;span class=&#34;k&#34;&gt;if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$INITIAL_PACKAGE_VERSION&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;!=&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$UPDATED_PACKAGE_VERSION&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
  if&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-n&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$CI_GIT_USER&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;[&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-n&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$CI_GIT_TOKEN&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;o&#34;&gt;]&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;;&lt;/span&gt; &lt;span class=&#34;k&#34;&gt;then
    &lt;/span&gt;git commit &lt;span class=&#34;nt&#34;&gt;-m&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;Updated version from tag &lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$CI_COMMIT_TAG&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; package.json
    git tag &lt;span class=&#34;nt&#34;&gt;-f&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$CI_COMMIT_TAG&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-m&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;Updated version from tag&amp;#34;&lt;/span&gt;
    git push &lt;span class=&#34;nt&#34;&gt;-f&lt;/span&gt; &lt;span class=&#34;nt&#34;&gt;-o&lt;/span&gt; ci.skip origin &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt;$CI_COMMIT_TAG&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;k&#34;&gt;else
    &lt;/span&gt;&lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;!!! ERROR !!!&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;The updated tag could not be uploaded.&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;Set CI_GIT_USER and CI_GIT_TOKEN or fix the &amp;#39;package.json&amp;#39; file&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;nb&#34;&gt;echo&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;!!! ERROR !!!&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;nb&#34;&gt;exit &lt;/span&gt;1
  &lt;span class=&#34;k&#34;&gt;fi
fi&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</description>
<content:encoded><![CDATA[<p>I’ve been a
<a href="https://about.gitlab.com/install/?version=ce" target="_blank" rel="noopener">gitlab-ce</a> user for a long time; in fact, I’ve set
it up at three of the last four companies I’ve worked for (initially I installed
it using the <a href="https://docs.gitlab.com/omnibus/" target="_blank" rel="noopener">omnibus</a> packages on a
<a href="https://www.debian.org/" target="_blank" rel="noopener">debian</a> server, but at the last two places I moved to
the <a href="https://docs.gitlab.com/ee/install/docker.html" target="_blank" rel="noopener">docker</a> based installation,
as it is easy to maintain and we don’t need a big installation since the teams
using it are small).</p>
<p>At the company I work for now (<a href="https://about.kyso.io/" target="_blank" rel="noopener">kyso</a>) we are using it
to host all our internal repositories and to do all the CI/CD work (the
automatic deployments are triggered by web hooks in some cases, but the rest is
all done using <a href="https://about.gitlab.com/gitlab-ci/" target="_blank" rel="noopener">gitlab-ci</a>).</p>
<p>The majority of our projects use <a href="https://nodejs.dev/" target="_blank" rel="noopener">nodejs</a> as their programming
language, and we have automated the publication of <code>npm</code> packages to our gitlab
instance’s <a href="https://docs.gitlab.com/ee/user/packages/npm_registry/" target="_blank" rel="noopener">npm registry</a>
and even to the <a href="https://www.npmjs.com/" target="_blank" rel="noopener">npmjs</a> registry.</p>
<p>To publish the packages we added rules to the <code>gitlab-ci</code> configuration of
the relevant repositories so that they are published when a <code>tag</code> is created.</p>
<p>As we are <em>lazy</em> by definition, I configured the system to use the <code>tag</code> as
the package version; I checked whether the contents of the <code>package.json</code> were in
sync with the expected version and, if they were not, updated the file and did a
force push of the <code>tag</code> with the updated file using the following code in the
script that publishes the package:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="sh"><span class="c"># Update package version &amp; add it to the .build-args</span>
<span class="nv">INITIAL_PACKAGE_VERSION</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>npm pkg get version|tr <span class="nt">-d</span> <span class="s1">&#39;&#34;&#39;</span><span class="si">)</span><span class="s2">&#34;</span>
npm version <span class="nt">--allow-same</span> <span class="nt">--no-commit-hooks</span> <span class="nt">--no-git-tag-version</span> <span class="se">\</span>
  <span class="s2">&#34;</span><span class="nv">$CI_COMMIT_TAG</span><span class="s2">&#34;</span>
<span class="nv">UPDATED_PACKAGE_VERSION</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>npm pkg get version|tr <span class="nt">-d</span> <span class="s1">&#39;&#34;&#39;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nb">echo</span> <span class="s2">&#34;UPDATED_PACKAGE_VERSION=</span><span class="nv">$UPDATED_PACKAGE_VERSION</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span> .build-args
<span class="c"># Update tag if the version was updated or abort</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$INITIAL_PACKAGE_VERSION</span><span class="s2">&#34;</span> <span class="o">!=</span> <span class="s2">&#34;</span><span class="nv">$UPDATED_PACKAGE_VERSION</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">&#34;</span><span class="nv">$CI_GIT_USER</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">&#34;</span><span class="nv">$CI_GIT_TOKEN</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span>git commit <span class="nt">-m</span> <span class="s2">&#34;Updated version from tag </span><span class="nv">$CI_COMMIT_TAG</span><span class="s2">&#34;</span> package.json
    git tag <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$CI_COMMIT_TAG</span><span class="s2">&#34;</span> <span class="nt">-m</span> <span class="s2">&#34;Updated version from tag&#34;</span>
    git push <span class="nt">-f</span> <span class="nt">-o</span> ci.skip origin <span class="s2">&#34;</span><span class="nv">$CI_COMMIT_TAG</span><span class="s2">&#34;</span>
  <span class="k">else
    </span><span class="nb">echo</span> <span class="s2">&#34;!!! ERROR !!!&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;The updated tag could not be uploaded.&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;Set CI_GIT_USER and CI_GIT_TOKEN or fix the &#39;package.json&#39; file&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;!!! ERROR !!!&#34;</span>
    <span class="nb">exit </span>1
  <span class="k">fi
fi</span></code></pre></div>
<p>This feels a little dirty (we are leaving commits on the tag without updating
the original branch); I thought about trying to find the <code>branch</code> from the
<code>tag</code> and updating it, but I dropped the idea pretty soon, as there were
multiple issues to consider (e.g. we can have tags pointing to commits present
in multiple branches, and even if a tag points to only one branch it does not
have to be the <code>HEAD</code> of that branch, making the inclusion difficult).</p>
<p>In any case this system was working, so we left it alone until we started to
publish to the NPM Registry; as we are using a <em>token</em> to <em>push</em> the packages
that we don’t want all developers to have access to (right now it would not
matter, but it will when the team grows), I started to use gitlab
<a href="https://docs.gitlab.com/ee/user/project/protected_branches.html" target="_blank" rel="noopener">protected
branches</a> on the projects that need it, adjusting the <code>.npmrc</code> file using
<a href="https://docs.gitlab.com/ee/ci/variables/#protected-cicd-variables" target="_blank" rel="noopener">protected
variables</a>.</p>
<p>The problem then was that we could no longer do a standard force push to a
branch (that is the main point of the <em>protected branches</em> feature) unless we
used the gitlab api, so the <em>tags</em> with the wrong version started to fail.</p>
<p>As the way things were being done seemed dirty anyway, I thought that the best
way to fix things was to forbid users from pushing a <code>tag</code> whose version does
not match the <code>package.json</code> version.</p>
<p>After thinking about it we decided to use
<a href="https://git-scm.com/docs/githooks" target="_blank" rel="noopener">githooks</a> on the
<a href="https://docs.gitlab.com/ee/administration/server_hooks.html" target="_blank" rel="noopener">gitlab</a> server for
the repositories that need it; as we are only interested in <code>tags</code> we are going
to use the <a href="https://git-scm.com/docs/githooks#update" target="_blank" rel="noopener">update</a> <em>hook</em>, which is
executed once for each ref to be updated and takes three parameters:</p>
<div class="ulist"><ul><li>the name of the ref being updated,</li><li>the old object name stored in the ref,</li><li>and the new object name to be stored in the ref.</li></ul></div>
<p>To install our <em>hook</em> we found the <code>gitaly</code> relative path of each repo and
located it on the server filesystem (as I said, we are using <code>docker</code> and
gitlab’s <code>data</code> directory is on <code>/srv/gitlab/data</code>, so the path to the repo has
the form <code>/srv/gitlab/data/git-data/repositories/@hashed/xx/yy/hash.git</code>).</p>
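<p>As a side note, the <code>@hashed</code> components can be derived without searching the
filesystem: with hashed storage the directory name is the SHA256 of the numeric
project ID. A small sketch (the project ID <code>42</code> is just an example; use the ID
shown on the project page):</p>

```shell
# Sketch: compute the hashed-storage repository path from the project ID
# (hashed storage names the directory after the SHA256 of the numeric ID).
project_id="42"  # example value; take the real ID from the project page
h="$(printf '%s' "$project_id" | sha256sum | awk '{print $1}')"
repo_path="@hashed/$(printf '%s' "$h" | cut -c1-2)/$(printf '%s' "$h" | cut -c3-4)/$h.git"
echo "/srv/gitlab/data/git-data/repositories/$repo_path"
```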
<p>Once we have the directory we need to:</p>
<div class="ulist"><ul><li>create a <code>custom_hooks</code> subdirectory inside it,</li><li>add the <code>update</code> script (as we only need one script we used that instead of
creating an <code>update.d</code> directory; the good thing is that this will also work
with a standard git server if we rename the base directory to <code>hooks</code> instead
of <code>custom_hooks</code>),</li><li>make it executable, and</li><li>change the directory and file ownership to make sure it can be read and
executed from the gitlab container.</li></ul></div>
<p>On a console session:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">cd</span> /srv/gitlab/data/git-data/repositories/@hashed/xx/yy/hash.git
<span class="gp">$</span><span class="w"> </span><span class="nb">mkdir </span>custom_hooks
<span class="gp">$</span><span class="w"> </span>edit_or_copy custom_hooks/update
<span class="gp">$</span><span class="w"> </span><span class="nb">chmod </span>0755 custom_hooks/update
<span class="gp">$</span><span class="w"> </span><span class="nb">chown</span> <span class="nt">--reference</span><span class="o">=</span><span class="nb">.</span> <span class="nt">-R</span> custom_hooks</code></pre></div>
<p>The <code>update</code> script we are using is as follows:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="sh"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># kyso update hook</span>
<span class="c">#</span>
<span class="c"># Right now it checks version.txt or package.json versions against the tag name</span>
<span class="c"># (it supports a &#39;v&#39; prefix on the tag)</span>

<span class="c"># Arguments</span>
<span class="nv">ref_name</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span>
<span class="nv">old_rev</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$2</span><span class="s2">&#34;</span>
<span class="nv">new_rev</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$3</span><span class="s2">&#34;</span>

<span class="c"># Initial test</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$ref_name</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span>  <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$old_rev</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="nv">$new_rev</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">echo</span> <span class="s2">&#34;usage: </span><span class="nv">$0</span><span class="s2"> &lt;ref&gt; &lt;oldrev&gt; &lt;newrev&gt;&#34;</span> <span class="o">&gt;</span>&amp;2
  <span class="nb">exit </span>1
<span class="k">fi</span>

<span class="c"># Get the tag short name</span>
<span class="nv">tag_name</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">ref_name</span><span class="p">##refs/tags/</span><span class="k">}</span><span class="s2">&#34;</span>

<span class="c"># Exit if the update is not for a tag</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$tag_name</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;</span><span class="nv">$ref_name</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">exit </span>0
<span class="k">fi</span>

<span class="c"># Get the null rev value (string of zeros)</span>
<span class="nv">zero</span><span class="o">=</span><span class="si">$(</span>git hash-object <span class="nt">--stdin</span> &lt;/dev/null | <span class="nb">tr</span> <span class="s1">&#39;0-9a-f&#39;</span> <span class="s1">&#39;0&#39;</span><span class="si">)</span>

<span class="c"># Get if the tag is new or not</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$old_rev</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;</span><span class="nv">$zero</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nv">new_tag</span><span class="o">=</span><span class="s2">&#34;true&#34;</span>
<span class="k">else
  </span><span class="nv">new_tag</span><span class="o">=</span><span class="s2">&#34;false&#34;</span>
<span class="k">fi</span>

<span class="c"># Get the type of revision:</span>
<span class="c"># - delete: if the new_rev is zero</span>
<span class="c"># - commit: annotated tag</span>
<span class="c"># - tag: un-annotated tag</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$new_rev</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;</span><span class="nv">$zero</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nv">new_rev_type</span><span class="o">=</span><span class="s2">&#34;delete&#34;</span>
<span class="k">else
  </span><span class="nv">new_rev_type</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>git cat-file <span class="nt">-t</span> <span class="s2">&#34;</span><span class="nv">$new_rev</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="k">fi</span>

<span class="c"># Exit if we are deleting a tag (nothing to check here)</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$new_rev_type</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;delete&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">exit </span>0
<span class="k">fi</span>

<span class="c"># Check the version against the tag (supports version.txt &amp; package.json)</span>
<span class="k">if </span>git cat-file <span class="nt">-e</span> <span class="s2">&#34;</span><span class="nv">$new_rev</span><span class="s2">:version.txt&#34;</span> <span class="o">&gt;</span>/dev/null 2&gt;&amp;1<span class="p">;</span> <span class="k">then
  </span><span class="nv">version</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>git cat-file <span class="nt">-p</span> <span class="s2">&#34;</span><span class="nv">$new_rev</span><span class="s2">:version.txt&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$version</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;</span><span class="nv">$tag_name</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$version</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">tag_name</span><span class="p">#v</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">exit </span>0
  <span class="k">else
    </span><span class="nv">EMSG</span><span class="o">=</span><span class="s2">&#34;tag &#39;</span><span class="nv">$tag_name</span><span class="s2">&#39; and &#39;version.txt&#39; contents &#39;</span><span class="nv">$version</span><span class="s2">&#39; don&#39;t match&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;GL-HOOK-ERR: </span><span class="nv">$EMSG</span><span class="s2">&#34;</span>
    <span class="nb">exit </span>1
  <span class="k">fi
elif </span>git cat-file <span class="nt">-e</span> <span class="s2">&#34;</span><span class="nv">$new_rev</span><span class="s2">:package.json&#34;</span> <span class="o">&gt;</span>/dev/null 2&gt;&amp;1<span class="p">;</span> <span class="k">then
  </span><span class="nv">version</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>
    git cat-file <span class="nt">-p</span> <span class="s2">&#34;</span><span class="nv">$new_rev</span><span class="s2">:package.json&#34;</span> | jsonpath version | <span class="nb">tr</span> <span class="nt">-d</span> <span class="s1">&#39;\[\]&#34;&#39;</span>
  <span class="si">)</span><span class="s2">&#34;</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$version</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;</span><span class="nv">$tag_name</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$version</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">tag_name</span><span class="p">#v</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">exit </span>0
  <span class="k">else
    </span><span class="nv">EMSG</span><span class="o">=</span><span class="s2">&#34;tag &#39;</span><span class="nv">$tag_name</span><span class="s2">&#39; and &#39;package.json&#39; version &#39;</span><span class="nv">$version</span><span class="s2">&#39; don&#39;t match&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;GL-HOOK-ERR: </span><span class="nv">$EMSG</span><span class="s2">&#34;</span>
    <span class="nb">exit </span>1
  <span class="k">fi
else</span>
  <span class="c"># No version.txt or package.json file found</span>
  <span class="nb">exit </span>0
<span class="k">fi</span></code></pre></div>
<p>Some comments about the script:</p>
<div class="ulist"><ul><li>It only handles <code>tags</code>: if the <code>ref_name</code> does not have the
<code>refs/tags/</code> prefix the script does an <code>exit 0</code>.</li><li>Although it checks whether the <code>tag</code> is <em>new</em>, the value is not used
(in <code>gitlab</code> that is handled by the <em>protected tag</em> feature).</li><li>If a <code>tag</code> is being deleted the script does an <code>exit 0</code>; there is
nothing to check in that case.</li><li>Whether the <code>tag</code> is annotated or not is ignored (the <code>new_rev_type</code> is
set to <code>tag</code> or <code>commit</code>, but the value is not used).</li><li>The <code>version.txt</code> file is tested first; if it does not exist the
<code>package.json</code> file is checked, and if neither exists the script does an
<code>exit 0</code>, as there is no version to check against and such tags are
allowed.</li><li>The <code>GL-HOOK-ERR:</code> prefix makes the messages visible on the <code>gitlab</code>
web interface (this can be tested by creating a tag from it).</li><li>To get the <code>version</code> from the <code>package.json</code> file the <code>jsonpath</code> binary
is used (it is installed by the <a href="https://rubygems.org/gems/jsonpath" target="_blank" rel="noopener">jsonpath ruby
gem</a>) because it is available on the <code>gitlab</code> container (initially I used
<code>sed</code> to get the value, but a real JSON parser is always a better option).</li></ul></div>
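<p>As an illustration of the last point, a <code>sed</code> based extraction (a hypothetical sketch,
not the exact command I used originally) only copes with a simple single-line
<code>"version": "x.y.z"</code> pair, which is precisely why a real JSON parser is the better
option:</p>

```shell
#!/bin/sh
# Sketch of a sed-based version extraction (hypothetical example, not the
# original command): it only handles a plain `"version": "x.y.z"` pair on
# a single line, unlike a real JSON parser.
printf '{"name": "demo", "version": "1.2.3"}\n' > package.json
version="$(sed -n 's/.*"version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' package.json)"
echo "$version"   # -> 1.2.3
```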
<p>Once the <em>hook</em> is installed, a <code>push</code> fails whenever a user tries to push a
<code>tag</code> to a repository that has a <code>version.txt</code> or <code>package.json</code> file and the
tag does not match the version (if <code>version.txt</code> is present it takes
precedence).</p>
<p>If the <code>tag</code> matches or neither file is present, the <code>tag</code> is created as long
as the user has permission to do it in <code>gitlab</code> (our <em>hook</em> is only executed
if the user is allowed to create or update the <code>tag</code>).</p>
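<p>Two details of the script can be checked from any shell before installing it:
the <code>refs/tags/</code> prefix stripping used to detect tags, and the null revision
(a string with as many zeros as hex digits in the hash, so it should also work
on SHA-256 repositories):</p>

```shell
#!/bin/sh
# Tag detection: if removing the refs/tags/ prefix changes nothing, the
# ref is not a tag.
ref_name="refs/heads/main"
tag_name="${ref_name##refs/tags/}"
[ "$tag_name" = "$ref_name" ] && echo "not a tag"

ref_name="refs/tags/v1.2.3"
echo "${ref_name##refs/tags/}"   # -> v1.2.3

# Null revision: hash the empty input and turn every hex digit into a
# zero; the result has the right length for the repository hash format.
zero="$(git hash-object --stdin </dev/null | tr '0-9a-f' '0')"
echo "$zero"
```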
]]></content:encoded>
    </item>
    
    <item>
      <title>New Blog Config</title>
      <link>https://blogops.mixinet.net/posts/new_blog_config/</link>
      <pubDate>Fri, 27 May 2022 00:00:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/new_blog_config/</guid>
      <description>&lt;p&gt;As promised, in this post I’m going to explain how I’ve configured this blog
using &lt;code&gt;hugo&lt;/code&gt;, &lt;code&gt;asciidoctor&lt;/code&gt; and the &lt;code&gt;papermod&lt;/code&gt; theme, how I publish it using
&lt;code&gt;nginx&lt;/code&gt;, how I’ve integrated the &lt;code&gt;remark42&lt;/code&gt; comment system and how I’ve
automated its publication using &lt;code&gt;forgejo&lt;/code&gt; and &lt;code&gt;json2file-go&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It is a long post, but I hope that at least parts of it are interesting for
some of you; feel free to skip it if that is not your case …​ :wink:&lt;/p&gt;
&lt;section class=&#34;doc-section level-1&#34;&gt;&lt;h2 id=&#34;_hugo_configuration&#34;&gt;Hugo Configuration&lt;/h2&gt;&lt;section class=&#34;doc-section level-2&#34;&gt;&lt;h3 id=&#34;_theme_settings&#34;&gt;Theme settings&lt;/h3&gt;&lt;p&gt;The site is using the &lt;a href=&#34;https://git.io/hugopapermod&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;PaperMod&lt;/a&gt; theme and as I’m
using &lt;a href=&#34;https://asciidoctor.org&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;asciidoctor&lt;/a&gt; to publish my content I’ve adjusted
the settings to improve how things are shown with it.&lt;/p&gt;
&lt;p&gt;The current &lt;code&gt;config.yml&lt;/code&gt; file is shown below (some of the settings
are probably not required or used right now, but I’m including the whole file,
so this post will always have its latest version):&lt;/p&gt;
&lt;figure class=&#34;listing-block&#34;&gt;&lt;figcaption&gt;config.yml&lt;/figcaption&gt;
&lt;pre class=&#34;rouge highlight&#34;&gt;&lt;code data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;na&#34;&gt;baseURL&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;https://blogops.mixinet.net/&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;title&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Mixinet BlogOps&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;Copyright&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;pi&#34;&gt;|&lt;/span&gt;
  &lt;span class=&#34;s&#34;&gt;&amp;amp;copy; 2022-2025 Sergio Talens-Oliag.&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;pagination&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;pagerSize&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;5&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;theme&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;PaperMod&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;destination&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;public/&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;enableInlineShortcodes&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;enableRobotsTXT&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;buildDrafts&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;buildFuture&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;buildExpired&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;enableEmoji&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;pygmentsUseClasses&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;minify&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;disableXML&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;minifyOutput&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;languages&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;en&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;languageName&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;English&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;description&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;Mixinet&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s&#34;&gt;BlogOps&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s&#34;&gt;-&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s&#34;&gt;https://blogops.mixinet.net/&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;author&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;Sergio&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s&#34;&gt;Talens-Oliag&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;weight&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;1&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;title&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Mixinet BlogOps&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;params&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;homeInfoParams&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
        &lt;span class=&#34;na&#34;&gt;Title&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;Sergio&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s&#34;&gt;Talens-Oliag&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s&#34;&gt;Technical&lt;/span&gt;&lt;span class=&#34;nv&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s&#34;&gt;Blog&amp;#34;&lt;/span&gt;
        &lt;span class=&#34;na&#34;&gt;Content&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;pi&#34;&gt;&amp;gt;&lt;/span&gt;
          &lt;span class=&#34;s&#34;&gt;![Mixinet BlogOps](/images/mixinet-blogops.png)&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;taxonomies&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;category&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;categories&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;tag&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;tags&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;series&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;series&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;menu&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;main&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
        &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Archive&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;archives&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;weight&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;5&lt;/span&gt;
        &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Categories&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;categories/&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;weight&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;10&lt;/span&gt;
        &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Tags&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;tags/&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;weight&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;10&lt;/span&gt;
        &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Search&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;search/&lt;/span&gt;
          &lt;span class=&#34;na&#34;&gt;weight&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;15&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;outputs&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;home&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;HTML&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;RSS&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;JSON&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;params&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;author&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Sergio Talens-Oliag&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;env&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;production&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;defaultTheme&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;light&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;disableThemeToggle&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowShareButtons&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowReadingTime&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;disableSpecial1stPost&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;disableHLJS&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;displayFullLangName&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowPostNavLinks&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowBreadCrumbs&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowCodeCopyButtons&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowRssButtonInSectionTermList&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowFullTextinRSS&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;ShowToc&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;TocOpen&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;comments&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;remark42SiteID&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;blogops&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;remark42Url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;https://blogops.mixinet.net/remark42&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;profileMode&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;enabled&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;title&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Sergio Talens-Oliag Technical Blog&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;imageUrl&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;/images/mixinet-blogops.png&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;imageTitle&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Mixinet BlogOps&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;buttons&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Archives&lt;/span&gt;
        &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;archives&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Categories&lt;/span&gt;
        &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;categories&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Tags&lt;/span&gt;
        &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;tags&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;social&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;socialIcons&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;CV&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;https://www.uv.es/~sto/cv/&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Debian&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;https://people.debian.org/~sto/&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;GitHub&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;https://github.com/sto/&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;GitLab&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;https://gitlab.com/stalens/&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;Linkedin&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;https://www.linkedin.com/in/sergio-talens-oliag/&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;na&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;RSS&lt;/span&gt;
      &lt;span class=&#34;na&#34;&gt;url&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;index.xml&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;assets&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;disableHLJS&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;favicon&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;/favicon.ico&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;favicon16x16&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;  &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;/favicon-16x16.png&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;favicon32x32&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;  &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;/favicon-32x32.png&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;apple_touch_icon&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;  &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;/apple-touch-icon.png&amp;#34;&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;safari_pinned_tab&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;  &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;/safari-pinned-tab.svg&amp;#34;&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;fuseOpts&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;isCaseSensitive&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;shouldSort&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;location&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;0&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;distance&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;1000&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;threshold&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;0.4&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;minMatchCharLength&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;m&#34;&gt;0&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;keys&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;pi&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;title&amp;#34;&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;permalink&amp;#34;&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;summary&amp;#34;&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;,&lt;/span&gt; &lt;span class=&#34;s2&#34;&gt;&amp;#34;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;content&amp;#34;&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;]&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;markup&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;asciidocExt&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;attributes&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;pi&#34;&gt;{&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;allow-uri-read&amp;#39;&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;}&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;backend&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;html5s&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;extensions&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;pi&#34;&gt;[&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;asciidoctor-html5s&amp;#39;&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;,&lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;asciidoctor-diagram&amp;#39;&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;]&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;failureLevel&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;fatal&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;noHeaderOrFooter&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;preserveTOC&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;safeMode&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;s&#34;&gt;unsafe&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;sectionNumbers&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;trace&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;verbose&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;workingFolderCurrent&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;privacy&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;vimeo&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;disabled&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;simple&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;disabled&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;enableDNT&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;simple&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;instagram&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;disabled&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;simple&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;youtube&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;disabled&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;false&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;privacyEnhanced&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;services&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;instagram&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;disableInlineCSS&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;disableInlineCSS&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt; &lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;
&lt;span class=&#34;na&#34;&gt;security&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
  &lt;span class=&#34;na&#34;&gt;exec&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
    &lt;span class=&#34;na&#34;&gt;allow&lt;/span&gt;&lt;span class=&#34;pi&#34;&gt;:&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;^asciidoctor$&amp;#39;&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;^dart-sass-embedded$&amp;#39;&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;^go$&amp;#39;&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;^npx$&amp;#39;&lt;/span&gt;
      &lt;span class=&#34;pi&#34;&gt;-&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;&lt;/span&gt;&lt;span class=&#34;s&#34;&gt;^postcss$&amp;#39;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
&lt;p&gt;Some notes about the settings:&lt;/p&gt;
&lt;div class=&#34;ulist&#34;&gt;&lt;ul&gt;&lt;li&gt;&lt;code&gt;disableHLJS&lt;/code&gt; and &lt;code&gt;assets.disableHLJS&lt;/code&gt; are set to &lt;code&gt;true&lt;/code&gt;; we plan to use
&lt;code&gt;rouge&lt;/code&gt; for &lt;code&gt;adoc&lt;/code&gt; highlighting, and including the &lt;code&gt;hljs&lt;/code&gt; assets adds &lt;em&gt;styles&lt;/em&gt; that
collide with the ones used by &lt;code&gt;rouge&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;&lt;code&gt;ShowToc&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt; and &lt;code&gt;TocOpen&lt;/code&gt; is set to &lt;code&gt;false&lt;/code&gt; so
the ToC appears collapsed initially. My plan was to use the &lt;code&gt;asciidoctor&lt;/code&gt;
ToC, but after trying both I believe the theme’s version looks nice and needs
no style adjustments. It does have one issue with the &lt;code&gt;html5s&lt;/code&gt;
processor: the admonition titles use &lt;code&gt;&amp;lt;h6&amp;gt;&lt;/code&gt; and show up in the ToC,
which is weird. To fix it I’ve copied &lt;code&gt;layouts/partial/toc.html&lt;/code&gt; into my
site repository and changed the range of headings to end at &lt;code&gt;5&lt;/code&gt; instead of
&lt;code&gt;6&lt;/code&gt; (in fact &lt;code&gt;5&lt;/code&gt; still seems like a lot, but as I don’t expect to use that heading
level in posts it doesn’t really matter).&lt;/li&gt;&lt;li&gt;&lt;code&gt;params.profileMode&lt;/code&gt; values are adjusted, but for now the mode is
disabled (&lt;code&gt;params.profileMode.enabled&lt;/code&gt; is set to &lt;code&gt;false&lt;/code&gt;); instead,
&lt;code&gt;homeInfoParams&lt;/code&gt; shows more or less the same content with the latest posts
under it (I’ve added some styles to my &lt;code&gt;custom.css&lt;/code&gt; style sheet to center the
text and image of the first post so it matches the look and feel of the profile).&lt;/li&gt;&lt;li&gt;In the &lt;code&gt;asciidocExt&lt;/code&gt; section I’ve set the &lt;code&gt;backend&lt;/code&gt; to &lt;code&gt;html5s&lt;/code&gt;,
added the &lt;code&gt;asciidoctor-html5s&lt;/code&gt; and &lt;code&gt;asciidoctor-diagram&lt;/code&gt; extensions to
&lt;code&gt;asciidoctor&lt;/code&gt;, and set &lt;code&gt;workingFolderCurrent&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; so that
&lt;code&gt;asciidoctor-diagram&lt;/code&gt; works correctly (I haven’t tested it yet).&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description>
      <content:encoded><![CDATA[<p>As promised, in this post I’m going to explain how I’ve configured this blog
using <code>hugo</code>, <code>asciidoctor</code> and the <code>papermod</code> theme, how I publish it using
<code>nginx</code>, how I’ve integrated the <code>remark42</code> comment system and how I’ve
automated its publication using <code>forgejo</code> and <code>json2file-go</code>.</p>
<p>It is a long post, but I hope that at least parts of it will be interesting
to some readers; feel free to skip it if that is not your case …​ :wink:</p>
<section class="doc-section level-1"><h2 id="_hugo_configuration">Hugo Configuration</h2><section class="doc-section level-2"><h3 id="_theme_settings">Theme settings</h3><p>The site uses the <a href="https://git.io/hugopapermod" target="_blank" rel="noopener">PaperMod</a> theme and, as I’m
using <a href="https://asciidoctor.org" target="_blank" rel="noopener">asciidoctor</a> to publish my content, I’ve adjusted
the settings to improve how things are rendered with it.</p>
<p>The current <code>config.yml</code> file is shown below (some of the settings are
probably not required or not in use right now, but I’m including the complete
current file so this post always reflects its latest version):</p>
<figure class="listing-block"><figcaption>config.yml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">baseURL</span><span class="pi">:</span> <span class="s">https://blogops.mixinet.net/</span>
<span class="na">title</span><span class="pi">:</span> <span class="s">Mixinet BlogOps</span>
<span class="na">Copyright</span><span class="pi">:</span> <span class="pi">|</span>
  <span class="s">&amp;copy; 2022-2025 Sergio Talens-Oliag.</span>
<span class="na">pagination</span><span class="pi">:</span>
  <span class="na">pagerSize</span><span class="pi">:</span> <span class="m">5</span>
<span class="na">theme</span><span class="pi">:</span> <span class="s">PaperMod</span>
<span class="na">destination</span><span class="pi">:</span> <span class="s">public/</span>
<span class="na">enableInlineShortcodes</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">enableRobotsTXT</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">buildDrafts</span><span class="pi">:</span> <span class="kc">false</span>
<span class="na">buildFuture</span><span class="pi">:</span> <span class="kc">false</span>
<span class="na">buildExpired</span><span class="pi">:</span> <span class="kc">false</span>
<span class="na">enableEmoji</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">pygmentsUseClasses</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">minify</span><span class="pi">:</span>
  <span class="na">disableXML</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">minifyOutput</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">languages</span><span class="pi">:</span>
  <span class="na">en</span><span class="pi">:</span>
    <span class="na">languageName</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">English&#34;</span>
    <span class="na">description</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">Mixinet</span><span class="nv"> </span><span class="s">BlogOps</span><span class="nv"> </span><span class="s">-</span><span class="nv"> </span><span class="s">https://blogops.mixinet.net/&#34;</span>
    <span class="na">author</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">Sergio</span><span class="nv"> </span><span class="s">Talens-Oliag&#34;</span>
    <span class="na">weight</span><span class="pi">:</span> <span class="m">1</span>
    <span class="na">title</span><span class="pi">:</span> <span class="s">Mixinet BlogOps</span>
    <span class="na">params</span><span class="pi">:</span>
      <span class="na">homeInfoParams</span><span class="pi">:</span>
        <span class="na">Title</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">Sergio</span><span class="nv"> </span><span class="s">Talens-Oliag</span><span class="nv"> </span><span class="s">Technical</span><span class="nv"> </span><span class="s">Blog&#34;</span>
        <span class="na">Content</span><span class="pi">:</span> <span class="pi">&gt;</span>
          <span class="s">![Mixinet BlogOps](/images/mixinet-blogops.png)</span>
    <span class="na">taxonomies</span><span class="pi">:</span>
      <span class="na">category</span><span class="pi">:</span> <span class="s">categories</span>
      <span class="na">tag</span><span class="pi">:</span> <span class="s">tags</span>
      <span class="na">series</span><span class="pi">:</span> <span class="s">series</span>
    <span class="na">menu</span><span class="pi">:</span>
      <span class="na">main</span><span class="pi">:</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Archive</span>
          <span class="na">url</span><span class="pi">:</span> <span class="s">archives</span>
          <span class="na">weight</span><span class="pi">:</span> <span class="m">5</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Categories</span>
          <span class="na">url</span><span class="pi">:</span> <span class="s">categories/</span>
          <span class="na">weight</span><span class="pi">:</span> <span class="m">10</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Tags</span>
          <span class="na">url</span><span class="pi">:</span> <span class="s">tags/</span>
          <span class="na">weight</span><span class="pi">:</span> <span class="m">10</span>
        <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Search</span>
          <span class="na">url</span><span class="pi">:</span> <span class="s">search/</span>
          <span class="na">weight</span><span class="pi">:</span> <span class="m">15</span>
<span class="na">outputs</span><span class="pi">:</span>
  <span class="na">home</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s">HTML</span>
    <span class="pi">-</span> <span class="s">RSS</span>
    <span class="pi">-</span> <span class="s">JSON</span>
<span class="na">params</span><span class="pi">:</span>
  <span class="na">author</span><span class="pi">:</span> <span class="s">Sergio Talens-Oliag</span>
  <span class="na">env</span><span class="pi">:</span> <span class="s">production</span>
  <span class="na">defaultTheme</span><span class="pi">:</span> <span class="s">light</span>
  <span class="na">disableThemeToggle</span><span class="pi">:</span> <span class="kc">false</span>
  <span class="na">ShowShareButtons</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">ShowReadingTime</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">disableSpecial1stPost</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">disableHLJS</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">displayFullLangName</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">ShowPostNavLinks</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">ShowBreadCrumbs</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">ShowCodeCopyButtons</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">ShowRssButtonInSectionTermList</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">ShowFullTextinRSS</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">ShowToc</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">TocOpen</span><span class="pi">:</span> <span class="kc">false</span>
  <span class="na">comments</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">remark42SiteID</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">blogops&#34;</span>
  <span class="na">remark42Url</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">https://blogops.mixinet.net/remark42&#34;</span>
  <span class="na">profileMode</span><span class="pi">:</span>
    <span class="na">enabled</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">title</span><span class="pi">:</span> <span class="s">Sergio Talens-Oliag Technical Blog</span>
    <span class="na">imageUrl</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">/images/mixinet-blogops.png&#34;</span>
    <span class="na">imageTitle</span><span class="pi">:</span> <span class="s">Mixinet BlogOps</span>
    <span class="na">buttons</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Archives</span>
        <span class="na">url</span><span class="pi">:</span> <span class="s">archives</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Categories</span>
        <span class="na">url</span><span class="pi">:</span> <span class="s">categories</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Tags</span>
        <span class="na">url</span><span class="pi">:</span> <span class="s">tags</span>
  <span class="na">social</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">socialIcons</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">CV</span>
      <span class="na">url</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">https://www.uv.es/~sto/cv/&#34;</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Debian</span>
      <span class="na">url</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">https://people.debian.org/~sto/&#34;</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">GitHub</span>
      <span class="na">url</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">https://github.com/sto/&#34;</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">GitLab</span>
      <span class="na">url</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">https://gitlab.com/stalens/&#34;</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Linkedin</span>
      <span class="na">url</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">https://www.linkedin.com/in/sergio-talens-oliag/&#34;</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">RSS</span>
      <span class="na">url</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">index.xml&#34;</span>
  <span class="na">assets</span><span class="pi">:</span>
    <span class="na">disableHLJS</span><span class="pi">:</span> <span class="kc">true</span>
    <span class="na">favicon</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">/favicon.ico&#34;</span>
    <span class="na">favicon16x16</span><span class="pi">:</span>  <span class="s2">&#34;</span><span class="s">/favicon-16x16.png&#34;</span>
    <span class="na">favicon32x32</span><span class="pi">:</span>  <span class="s2">&#34;</span><span class="s">/favicon-32x32.png&#34;</span>
    <span class="na">apple_touch_icon</span><span class="pi">:</span>  <span class="s2">&#34;</span><span class="s">/apple-touch-icon.png&#34;</span>
    <span class="na">safari_pinned_tab</span><span class="pi">:</span>  <span class="s2">&#34;</span><span class="s">/safari-pinned-tab.svg&#34;</span>
  <span class="na">fuseOpts</span><span class="pi">:</span>
    <span class="na">isCaseSensitive</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">shouldSort</span><span class="pi">:</span> <span class="kc">true</span>
    <span class="na">location</span><span class="pi">:</span> <span class="m">0</span>
    <span class="na">distance</span><span class="pi">:</span> <span class="m">1000</span>
    <span class="na">threshold</span><span class="pi">:</span> <span class="m">0.4</span>
    <span class="na">minMatchCharLength</span><span class="pi">:</span> <span class="m">0</span>
    <span class="na">keys</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">&#34;</span><span class="s">title&#34;</span><span class="pi">,</span> <span class="s2">&#34;</span><span class="s">permalink&#34;</span><span class="pi">,</span> <span class="s2">&#34;</span><span class="s">summary&#34;</span><span class="pi">,</span> <span class="s2">&#34;</span><span class="s">content&#34;</span><span class="pi">]</span>
<span class="na">markup</span><span class="pi">:</span>
  <span class="na">asciidocExt</span><span class="pi">:</span>
    <span class="na">attributes</span><span class="pi">:</span> <span class="pi">{</span><span class="s1">&#39;</span><span class="s">allow-uri-read&#39;</span><span class="pi">:</span> <span class="nv">true</span><span class="pi">}</span>
    <span class="na">backend</span><span class="pi">:</span> <span class="s">html5s</span>
    <span class="na">extensions</span><span class="pi">:</span> <span class="pi">[</span><span class="s1">&#39;</span><span class="s">asciidoctor-html5s&#39;</span><span class="pi">,</span><span class="s1">&#39;</span><span class="s">asciidoctor-diagram&#39;</span><span class="pi">]</span>
    <span class="na">failureLevel</span><span class="pi">:</span> <span class="s">fatal</span>
    <span class="na">noHeaderOrFooter</span><span class="pi">:</span> <span class="kc">true</span>
    <span class="na">preserveTOC</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">safeMode</span><span class="pi">:</span> <span class="s">unsafe</span>
    <span class="na">sectionNumbers</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">trace</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">verbose</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">workingFolderCurrent</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">privacy</span><span class="pi">:</span>
  <span class="na">vimeo</span><span class="pi">:</span>
    <span class="na">disabled</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">simple</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">x</span><span class="pi">:</span>
    <span class="na">disabled</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">enableDNT</span><span class="pi">:</span> <span class="kc">true</span>
    <span class="na">simple</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">instagram</span><span class="pi">:</span>
    <span class="na">disabled</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">simple</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">youtube</span><span class="pi">:</span>
    <span class="na">disabled</span><span class="pi">:</span> <span class="kc">false</span>
    <span class="na">privacyEnhanced</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">services</span><span class="pi">:</span>
  <span class="na">instagram</span><span class="pi">:</span>
    <span class="na">disableInlineCSS</span><span class="pi">:</span> <span class="kc">true</span>
  <span class="na">x</span><span class="pi">:</span>
    <span class="na">disableInlineCSS</span><span class="pi">:</span> <span class="kc">true</span>
<span class="na">security</span><span class="pi">:</span>
  <span class="na">exec</span><span class="pi">:</span>
    <span class="na">allow</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">^asciidoctor$&#39;</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">^dart-sass-embedded$&#39;</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">^go$&#39;</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">^npx$&#39;</span>
      <span class="pi">-</span> <span class="s1">&#39;</span><span class="s">^postcss$&#39;</span></code></pre></figure>
<p>Some notes about the settings:</p>
<div class="ulist"><ul><li><code>disableHLJS</code> and <code>assets.disableHLJS</code> are set to <code>true</code>; we plan to use
<code>rouge</code> for <code>adoc</code> highlighting, and including the <code>hljs</code> assets adds <em>styles</em> that
collide with the ones used by <code>rouge</code>.</li><li><code>ShowToc</code> is set to <code>true</code> and <code>TocOpen</code> is set to <code>false</code> so
the ToC appears collapsed initially. My plan was to use the <code>asciidoctor</code>
ToC, but after trying both I believe the theme’s version looks nice and needs
no style adjustments. It does have one issue with the <code>html5s</code>
processor: the admonition titles use <code>&lt;h6&gt;</code> and show up in the ToC,
which is weird. To fix it I’ve copied <code>layouts/partial/toc.html</code> into my
site repository and changed the range of headings to end at <code>5</code> instead of
<code>6</code> (in fact <code>5</code> still seems like a lot, but as I don’t expect to use that heading
level in posts it doesn’t really matter).</li><li><code>params.profileMode</code> values are adjusted, but for now the mode is
disabled (<code>params.profileMode.enabled</code> is set to <code>false</code>); instead,
<code>homeInfoParams</code> shows more or less the same content with the latest posts
under it (I’ve added some styles to my <code>custom.css</code> style sheet to center the
text and image of the first post so it matches the look and feel of the profile).</li><li>In the <code>asciidocExt</code> section I’ve set the <code>backend</code> to <code>html5s</code>,
added the <code>asciidoctor-html5s</code> and <code>asciidoctor-diagram</code> extensions to
<code>asciidoctor</code>, and set <code>workingFolderCurrent</code> to <code>true</code> so that
<code>asciidoctor-diagram</code> works correctly (I haven’t tested it yet).</li></ul></div></section>
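<p>For reference, the heading-range tweak mentioned above is a small change in the
copied partial. The exact contents of PaperMod’s <code>toc.html</code> vary between
theme versions, so the following is only an illustrative sketch of the idea
(lowering the upper bound of the heading regex from <code>6</code> to <code>5</code>):</p>
<figure class="listing-block"><figcaption>layouts/partials/toc.html (sketch)</figcaption>
<pre class="rouge highlight"><code data-lang="html">&lt;!-- Before: collect &lt;h1&gt; to &lt;h6&gt; headings from the rendered content --&gt;
{{- $headers := findRE "&lt;h[1-6].*?&gt;(.|\n)+?&lt;/h[1-6]&gt;" .Content -}}
&lt;!-- After: stop at &lt;h5&gt; so the &lt;h6&gt; admonition titles emitted by the
     html5s backend no longer appear in the ToC --&gt;
{{- $headers := findRE "&lt;h[1-5].*?&gt;(.|\n)+?&lt;/h[1-5]&gt;" .Content -}}</code></pre></figure>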
<section class="doc-section level-2"><h3 id="_theme_customisations">Theme customisations</h3><p>To write in <code>asciidoctor</code> using the <code>html5s</code> processor I’ve added some files to
the <code>assets/css/extended</code> directory:</p>
<div class="olist arabic"><ol class="arabic"><li><p>As mentioned before, I’ve added the file <code>assets/css/extended/custom.css</code> to
make the <code>homeInfoParams</code> look like the profile page, and I’ve also slightly
changed some theme styles so things look better with the <code>html5s</code>
output:</p><figure class="listing-block"><figcaption>custom.css</figcaption>
<pre class="rouge highlight"><code data-lang="css"><span class="c">/* Fix first entry alignment to make it look like the profile */</span>
<span class="nc">.first-entry</span> <span class="p">{</span> <span class="nl">text-align</span><span class="p">:</span> <span class="nb">center</span><span class="p">;</span> <span class="p">}</span>
<span class="nc">.first-entry</span> <span class="nt">img</span> <span class="p">{</span> <span class="nl">display</span><span class="p">:</span> <span class="nb">inline</span><span class="p">;</span> <span class="p">}</span>
<span class="c">/**
 * Remove margin for .post-content code and reduce padding to make it look
 * better with the asciidoctor html5s output.
 **/</span>
<span class="nc">.post-content</span> <span class="nt">code</span> <span class="p">{</span> <span class="nl">margin</span><span class="p">:</span> <span class="nb">auto</span> <span class="m">0</span><span class="p">;</span> <span class="nl">padding</span><span class="p">:</span> <span class="m">4px</span><span class="p">;</span> <span class="p">}</span></code></pre></figure></li><li><p>I’ve also added the file <code>assets/css/extended/adoc.css</code> with some styles
taken from the <code>asciidoctor-default.css</code> (see this
<a href="https://rgielen.net/posts/2019/creating-a-blog-with-hugo-and-asciidoctor/" target="_blank" rel="noopener">blog
post</a> about the original file); mine is the same after formatting it with
<a href="https://beautifier.io/" target="_blank" rel="noopener">css-beautify</a> and editing it to use variables for
the colors to support light and dark themes:</p><figure class="listing-block"><figcaption>adoc.css</figcaption>
<pre class="rouge highlight"><code data-lang="css"><span class="c">/* AsciiDoctor*/</span>
<span class="nt">table</span> <span class="p">{</span>
    <span class="nl">border-collapse</span><span class="p">:</span> <span class="nb">collapse</span><span class="p">;</span>
    <span class="nl">border-spacing</span><span class="p">:</span> <span class="m">0</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span><span class="o">&gt;</span><span class="nt">table</span> <span class="p">{</span>
    <span class="nl">border-collapse</span><span class="p">:</span> <span class="nb">separate</span><span class="p">;</span>
    <span class="nl">border</span><span class="p">:</span> <span class="m">0</span><span class="p">;</span>
    <span class="nl">background</span><span class="p">:</span> <span class="nb">none</span><span class="p">;</span>
    <span class="nl">width</span><span class="p">:</span> <span class="m">100%</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span><span class="o">&gt;</span><span class="nt">table</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="p">{</span>
    <span class="nl">text-align</span><span class="p">:</span> <span class="nb">center</span><span class="p">;</span>
    <span class="nl">width</span><span class="p">:</span> <span class="m">80px</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span><span class="o">&gt;</span><span class="nt">table</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="nt">img</span> <span class="p">{</span>
    <span class="nl">max-width</span><span class="p">:</span> <span class="nb">none</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span><span class="o">&gt;</span><span class="nt">table</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="nc">.title</span> <span class="p">{</span>
    <span class="nl">font-weight</span><span class="p">:</span> <span class="nb">bold</span><span class="p">;</span>
    <span class="nl">font-family</span><span class="p">:</span> <span class="s1">&#34;Open Sans&#34;</span><span class="p">,</span> <span class="s1">&#34;DejaVu Sans&#34;</span><span class="p">,</span> <span class="nb">sans-serif</span><span class="p">;</span>
    <span class="nl">text-transform</span><span class="p">:</span> <span class="nb">uppercase</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span><span class="o">&gt;</span><span class="nt">table</span> <span class="nt">td</span><span class="nc">.content</span> <span class="p">{</span>
    <span class="nl">padding-left</span><span class="p">:</span> <span class="m">1.125em</span><span class="p">;</span>
    <span class="nl">padding-right</span><span class="p">:</span> <span class="m">1.25em</span><span class="p">;</span>
    <span class="nl">border-left</span><span class="p">:</span> <span class="m">1px</span> <span class="nb">solid</span> <span class="m">#ddddd8</span><span class="p">;</span>
    <span class="nl">color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--primary</span><span class="p">)</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span><span class="o">&gt;</span><span class="nt">table</span> <span class="nt">td</span><span class="nc">.content</span><span class="o">&gt;</span><span class="nd">:last-child</span><span class="o">&gt;</span><span class="nd">:last-child</span> <span class="p">{</span>
    <span class="nl">margin-bottom</span><span class="p">:</span> <span class="m">0</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="o">[</span><span class="nt">class</span><span class="o">^=</span><span class="s1">&#34;fa icon-&#34;</span><span class="o">]</span> <span class="p">{</span>
    <span class="nl">font-size</span><span class="p">:</span> <span class="m">2.5em</span><span class="p">;</span>
    <span class="nl">text-shadow</span><span class="p">:</span> <span class="m">1px</span> <span class="m">1px</span> <span class="m">2px</span> <span class="n">var</span><span class="p">(</span><span class="n">--secondary</span><span class="p">);</span>
    <span class="nl">cursor</span><span class="p">:</span> <span class="nb">default</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="nc">.icon-note</span><span class="nd">::before</span> <span class="p">{</span>
    <span class="nl">content</span><span class="p">:</span> <span class="s1">&#34;\f05a&#34;</span><span class="p">;</span>
    <span class="nl">color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--icon-note-color</span><span class="p">)</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="nc">.icon-tip</span><span class="nd">::before</span> <span class="p">{</span>
    <span class="nl">content</span><span class="p">:</span> <span class="s1">&#34;\f0eb&#34;</span><span class="p">;</span>
    <span class="nl">color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--icon-tip-color</span><span class="p">)</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="nc">.icon-warning</span><span class="nd">::before</span> <span class="p">{</span>
    <span class="nl">content</span><span class="p">:</span> <span class="s1">&#34;\f071&#34;</span><span class="p">;</span>
    <span class="nl">color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--icon-warning-color</span><span class="p">)</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="nc">.icon-caution</span><span class="nd">::before</span> <span class="p">{</span>
    <span class="nl">content</span><span class="p">:</span> <span class="s1">&#34;\f06d&#34;</span><span class="p">;</span>
    <span class="nl">color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--icon-caution-color</span><span class="p">)</span>
<span class="p">}</span>

<span class="nc">.admonitionblock</span> <span class="nt">td</span><span class="nc">.icon</span> <span class="nc">.icon-important</span><span class="nd">::before</span> <span class="p">{</span>
    <span class="nl">content</span><span class="p">:</span> <span class="s1">&#34;\f06a&#34;</span><span class="p">;</span>
    <span class="nl">color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--icon-important-color</span><span class="p">)</span>
<span class="p">}</span>

<span class="nc">.conum</span><span class="o">[</span><span class="nt">data-value</span><span class="o">]</span> <span class="p">{</span>
    <span class="nl">display</span><span class="p">:</span> <span class="n">inline-block</span><span class="p">;</span>
    <span class="nl">color</span><span class="p">:</span> <span class="m">#fff</span> <span class="cp">!important</span><span class="p">;</span>
    <span class="nl">background-color</span><span class="p">:</span> <span class="n">rgba</span><span class="p">(</span><span class="m">100</span><span class="p">,</span> <span class="m">100</span><span class="p">,</span> <span class="m">0</span><span class="p">,</span> <span class="m">.8</span><span class="p">);</span>
    <span class="nl">-webkit-border-radius</span><span class="p">:</span> <span class="m">100px</span><span class="p">;</span>
    <span class="nl">border-radius</span><span class="p">:</span> <span class="m">100px</span><span class="p">;</span>
    <span class="nl">text-align</span><span class="p">:</span> <span class="nb">center</span><span class="p">;</span>
    <span class="nl">font-size</span><span class="p">:</span> <span class="m">.75em</span><span class="p">;</span>
    <span class="nl">width</span><span class="p">:</span> <span class="m">1.67em</span><span class="p">;</span>
    <span class="nl">height</span><span class="p">:</span> <span class="m">1.67em</span><span class="p">;</span>
    <span class="nl">line-height</span><span class="p">:</span> <span class="m">1.67em</span><span class="p">;</span>
    <span class="nl">font-family</span><span class="p">:</span> <span class="s1">&#34;Open Sans&#34;</span><span class="p">,</span> <span class="s1">&#34;DejaVu Sans&#34;</span><span class="p">,</span> <span class="nb">sans-serif</span><span class="p">;</span>
    <span class="nl">font-style</span><span class="p">:</span> <span class="nb">normal</span><span class="p">;</span>
    <span class="nl">font-weight</span><span class="p">:</span> <span class="nb">bold</span>
<span class="p">}</span>

<span class="nc">.conum</span><span class="o">[</span><span class="nt">data-value</span><span class="o">]</span> <span class="o">*</span> <span class="p">{</span>
    <span class="nl">color</span><span class="p">:</span> <span class="m">#fff</span> <span class="cp">!important</span>
<span class="p">}</span>

<span class="nc">.conum</span><span class="o">[</span><span class="nt">data-value</span><span class="o">]+</span><span class="nt">b</span> <span class="p">{</span>
    <span class="nl">display</span><span class="p">:</span> <span class="nb">none</span>
<span class="p">}</span>

<span class="nc">.conum</span><span class="o">[</span><span class="nt">data-value</span><span class="o">]</span><span class="nd">::after</span> <span class="p">{</span>
    <span class="nl">content</span><span class="p">:</span> <span class="n">attr</span><span class="p">(</span><span class="n">data-value</span><span class="p">)</span>
<span class="p">}</span>

<span class="nt">pre</span> <span class="nc">.conum</span><span class="o">[</span><span class="nt">data-value</span><span class="o">]</span> <span class="p">{</span>
    <span class="nl">position</span><span class="p">:</span> <span class="nb">relative</span><span class="p">;</span>
    <span class="nl">top</span><span class="p">:</span> <span class="m">-.125em</span>
<span class="p">}</span>

<span class="nt">b</span><span class="nc">.conum</span> <span class="o">*</span> <span class="p">{</span>
    <span class="nl">color</span><span class="p">:</span> <span class="nb">inherit</span> <span class="cp">!important</span>
<span class="p">}</span>

<span class="nc">.conum</span><span class="nd">:not</span><span class="o">([</span><span class="nt">data-value</span><span class="o">])</span><span class="nd">:empty</span> <span class="p">{</span>
    <span class="nl">display</span><span class="p">:</span> <span class="nb">none</span>
<span class="p">}</span></code></pre></figure></li><li><p>The previous file uses variables defined in a partial copy of the <code>theme-vars.css</code>
file, which changes the highlighted code background color and adds the color
definitions used by the admonitions:</p><figure class="listing-block"><figcaption>theme-vars.css</figcaption>
<pre class="rouge highlight"><code data-lang="css"><span class="nd">:root</span> <span class="p">{</span>
    <span class="c">/* Solarized base2 */</span>
    <span class="c">/* --hljs-bg: rgb(238, 232, 213); */</span>
    <span class="c">/* Solarized base3 */</span>
    <span class="c">/* --hljs-bg: rgb(253, 246, 227); */</span>
    <span class="c">/* Solarized base02 */</span>
    <span class="py">--hljs-bg</span><span class="p">:</span> <span class="nb">rgb</span><span class="p">(</span><span class="m">7</span><span class="p">,</span> <span class="m">54</span><span class="p">,</span> <span class="m">66</span><span class="p">);</span>
    <span class="c">/* Solarized base03 */</span>
    <span class="c">/* --hljs-bg: rgb(0, 43, 54); */</span>
    <span class="c">/* Default asciidoctor theme colors */</span>
    <span class="py">--icon-note-color</span><span class="p">:</span> <span class="m">#19407c</span><span class="p">;</span>
    <span class="py">--icon-tip-color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--primary</span><span class="p">);</span>
    <span class="py">--icon-warning-color</span><span class="p">:</span> <span class="m">#bf6900</span><span class="p">;</span>
    <span class="py">--icon-caution-color</span><span class="p">:</span> <span class="m">#bf3400</span><span class="p">;</span>
    <span class="py">--icon-important-color</span><span class="p">:</span> <span class="m">#bf0000</span>
<span class="p">}</span>

<span class="nc">.dark</span> <span class="p">{</span>
    <span class="py">--hljs-bg</span><span class="p">:</span> <span class="nb">rgb</span><span class="p">(</span><span class="m">7</span><span class="p">,</span> <span class="m">54</span><span class="p">,</span> <span class="m">66</span><span class="p">);</span>
    <span class="c">/* Asciidoctor theme colors with tint for dark background */</span>
    <span class="py">--icon-note-color</span><span class="p">:</span> <span class="m">#3e7bd7</span><span class="p">;</span>
    <span class="py">--icon-tip-color</span><span class="p">:</span> <span class="n">var</span><span class="p">(</span><span class="n">--primary</span><span class="p">);</span>
    <span class="py">--icon-warning-color</span><span class="p">:</span> <span class="m">#ff8d03</span><span class="p">;</span>
    <span class="py">--icon-caution-color</span><span class="p">:</span> <span class="m">#ff7847</span><span class="p">;</span>
    <span class="py">--icon-important-color</span><span class="p">:</span> <span class="m">#ff3030</span>
<span class="p">}</span></code></pre></figure></li><li><p>The previous styles use <code>font-awesome</code>, so I’ve downloaded its resources for
version <code>4.7.0</code> (the one used by <code>asciidoctor</code>), storing the
<code>font-awesome.css</code> file in the <code>assets/css/extended</code> dir (that way it is
merged with the rest of the <code>.css</code> files) and copying the fonts to the
<code>static/assets/fonts/</code> dir (they will be served directly):</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nv">FA_BASE_URL</span><span class="o">=</span><span class="s2">&#34;https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0&#34;</span>
curl <span class="s2">&#34;</span><span class="nv">$FA_BASE_URL</span><span class="s2">/css/font-awesome.css&#34;</span> <span class="se">\</span>
  <span class="o">&gt;</span> assets/css/extended/font-awesome.css
<span class="k">for </span>f <span class="k">in </span>FontAwesome.otf fontawesome-webfont.eot <span class="se">\</span>
  fontawesome-webfont.svg fontawesome-webfont.ttf <span class="se">\</span>
  fontawesome-webfont.woff fontawesome-webfont.woff2<span class="p">;</span> <span class="k">do
    </span>curl <span class="s2">&#34;</span><span class="nv">$FA_BASE_URL</span><span class="s2">/fonts/</span><span class="nv">$f</span><span class="s2">&#34;</span> <span class="o">&gt;</span> <span class="s2">&#34;static/assets/fonts/</span><span class="nv">$f</span><span class="s2">&#34;</span>
<span class="k">done</span></code></pre></div></li><li><p>As already said, the default highlighter is disabled (the markup produced is
compatible with <code>rouge</code>), so we need a <code>css</code> file to do the highlight styling; as
<code>rouge</code> provides a way to export its themes, I’ve created the
<code>assets/css/extended/rouge.css</code> file with the <code>thankful_eyes</code> theme:</p><div class="listing-block"><pre class="rouge highlight"><code data-lang="shell">rougify style thankful_eyes <span class="o">&gt;</span> assets/css/extended/rouge.css</code></pre></div></li><li><p>To support the use of the <code>html5s</code> backend with admonitions I’ve added a
variation of the example found on this
<a href="https://blog.anoff.io/2019-02-17-hugo-render-asciidoc/#_manipulating_the_dom_for_custom_css" target="_blank" rel="noopener">blog
post</a> to <code>assets/js/adoc-admonitions.js</code>:</p><figure class="listing-block"><figcaption>adoc-admonitions.js</figcaption>
<pre class="rouge highlight"><code data-lang="js"><span class="c1">// replace the default admonitions block with a table that uses a format</span>
<span class="c1">// similar to the standard asciidoctor ... as we are using fa-icons here there</span>
<span class="c1">// is no need to add the icons: font entry on the document.</span>
<span class="nb">window</span><span class="p">.</span><span class="nf">addEventListener</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">load</span><span class="dl">&#39;</span><span class="p">,</span> <span class="nf">function </span><span class="p">()</span> <span class="p">{</span>
  <span class="kd">const</span> <span class="nx">admonitions</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nf">getElementsByClassName</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">admonition-block</span><span class="dl">&#39;</span><span class="p">)</span>
  <span class="k">for </span><span class="p">(</span><span class="kd">let</span> <span class="nx">i</span> <span class="o">=</span> <span class="nx">admonitions</span><span class="p">.</span><span class="nx">length</span> <span class="o">-</span> <span class="mi">1</span><span class="p">;</span> <span class="nx">i</span> <span class="o">&gt;=</span> <span class="mi">0</span><span class="p">;</span> <span class="nx">i</span><span class="o">--</span><span class="p">)</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">elm</span> <span class="o">=</span> <span class="nx">admonitions</span><span class="p">[</span><span class="nx">i</span><span class="p">]</span>
    <span class="kd">const</span> <span class="nx">type</span> <span class="o">=</span> <span class="nx">elm</span><span class="p">.</span><span class="nx">classList</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
    <span class="kd">const</span> <span class="nx">title</span> <span class="o">=</span> <span class="nx">elm</span><span class="p">.</span><span class="nf">getElementsByClassName</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">block-title</span><span class="dl">&#39;</span><span class="p">)[</span><span class="mi">0</span><span class="p">];</span>
	<span class="kd">const</span> <span class="nx">label</span> <span class="o">=</span> <span class="nx">title</span><span class="p">.</span><span class="nf">getElementsByClassName</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">title-label</span><span class="dl">&#39;</span><span class="p">)[</span><span class="mi">0</span><span class="p">]</span>
		<span class="p">.</span><span class="nx">innerHTML</span><span class="p">.</span><span class="nf">slice</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">);</span>
    <span class="nx">elm</span><span class="p">.</span><span class="nf">removeChild</span><span class="p">(</span><span class="nx">elm</span><span class="p">.</span><span class="nf">getElementsByClassName</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">block-title</span><span class="dl">&#39;</span><span class="p">)[</span><span class="mi">0</span><span class="p">]);</span>
    <span class="kd">const</span> <span class="nx">text</span> <span class="o">=</span> <span class="nx">elm</span><span class="p">.</span><span class="nx">innerHTML</span>
    <span class="kd">const</span> <span class="nx">parent</span> <span class="o">=</span> <span class="nx">elm</span><span class="p">.</span><span class="nx">parentNode</span>
    <span class="kd">const</span> <span class="nx">tempDiv</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nf">createElement</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">div</span><span class="dl">&#39;</span><span class="p">)</span>
    <span class="nx">tempDiv</span><span class="p">.</span><span class="nx">innerHTML</span> <span class="o">=</span> <span class="s2">`&lt;div class=&#34;admonitionblock </span><span class="p">${</span><span class="nx">type</span><span class="p">}</span><span class="s2">&#34;&gt;
    &lt;table&gt;
      &lt;tbody&gt;
        &lt;tr&gt;
          &lt;td class=&#34;icon&#34;&gt;
            &lt;i class=&#34;fa icon-</span><span class="p">${</span><span class="nx">type</span><span class="p">}</span><span class="s2">&#34; title=&#34;</span><span class="p">${</span><span class="nx">label</span><span class="p">}</span><span class="s2">&#34;&gt;&lt;/i&gt;
          &lt;/td&gt;
          &lt;td class=&#34;content&#34;&gt;
            </span><span class="p">${</span><span class="nx">text</span><span class="p">}</span><span class="s2">
          &lt;/td&gt;
        &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/div&gt;`</span>
    <span class="kd">const</span> <span class="nx">input</span> <span class="o">=</span> <span class="nx">tempDiv</span><span class="p">.</span><span class="nx">childNodes</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
    <span class="nx">parent</span><span class="p">.</span><span class="nf">replaceChild</span><span class="p">(</span><span class="nx">input</span><span class="p">,</span> <span class="nx">elm</span><span class="p">)</span>
  <span class="p">}</span>
<span class="p">})</span></code></pre></figure>
<p>and enabled its minified use on the <code>layouts/partials/extend_footer.html</code> file
by adding the following lines to it:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="html">{{- $admonitions := slice (resources.Get &#34;js/adoc-admonitions.js&#34;)
  | resources.Concat &#34;assets/js/adoc-admonitions.js&#34; | minify | fingerprint }}
<span class="nt">&lt;script </span><span class="na">defer</span> <span class="na">crossorigin=</span><span class="s">&#34;anonymous&#34;</span> <span class="na">src=</span><span class="s">&#34;{{ $admonitions.RelPermalink }}&#34;</span>
  <span class="na">integrity=</span><span class="s">&#34;{{ $admonitions.Data.Integrity }}&#34;</span><span class="nt">&gt;&lt;/script&gt;</span></code></pre></div>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>As I’ve added the font-awesome resources and the standard styling for the
asciidoctor <em>admonitions</em>, there is no need to add anything else to make things
work.</p></aside></li></ol></div></section></section>
<section class="doc-section level-1"><h2 id="_remark42_configuration">Remark42 configuration</h2><p>To integrate <a href="https://remark42.com/" target="_blank" rel="noopener">Remark42</a> with the <strong>PaperMod</strong> theme I’ve
created the file <code>layouts/partials/comments.html</code> with the following content
based on the <a href="https://remark42.com/docs/configuration/frontend/" target="_blank" rel="noopener">remark42
documentation</a>, including extra code to sync the dark/light setting with the
one set on the site:</p>
<figure class="listing-block"><figcaption>comments.html</figcaption>
<pre class="rouge highlight"><code data-lang="html"><span class="nt">&lt;div</span> <span class="na">id=</span><span class="s">&#34;remark42&#34;</span><span class="nt">&gt;&lt;/div&gt;</span>
<span class="nt">&lt;script&gt;</span>
  <span class="kd">var</span> <span class="nx">remark_config</span> <span class="o">=</span> <span class="p">{</span>
    <span class="na">host</span><span class="p">:</span> <span class="p">{{</span> <span class="p">.</span><span class="nx">Site</span><span class="p">.</span><span class="nx">Params</span><span class="p">.</span><span class="nx">remark42Url</span> <span class="p">}},</span>
    <span class="na">site_id</span><span class="p">:</span> <span class="p">{{</span> <span class="p">.</span><span class="nx">Site</span><span class="p">.</span><span class="nx">Params</span><span class="p">.</span><span class="nx">remark42SiteID</span> <span class="p">}},</span>
    <span class="na">url</span><span class="p">:</span> <span class="p">{{</span> <span class="p">.</span><span class="nx">Permalink</span> <span class="p">}},</span>
    <span class="na">locale</span><span class="p">:</span> <span class="p">{{</span> <span class="p">.</span><span class="nx">Site</span><span class="p">.</span><span class="nx">Language</span><span class="p">.</span><span class="nx">Lang</span> <span class="p">}}</span>
  <span class="p">};</span>
  <span class="p">(</span><span class="kd">function</span><span class="p">(</span><span class="nx">c</span><span class="p">)</span> <span class="p">{</span>
    <span class="cm">/* Adjust the theme using the local-storage pref-theme if set */</span>
    <span class="k">if </span><span class="p">(</span><span class="nx">localStorage</span><span class="p">.</span><span class="nf">getItem</span><span class="p">(</span><span class="dl">&#34;</span><span class="s2">pref-theme</span><span class="dl">&#34;</span><span class="p">)</span> <span class="o">===</span> <span class="dl">&#34;</span><span class="s2">dark</span><span class="dl">&#34;</span><span class="p">)</span> <span class="p">{</span>
      <span class="nx">remark_config</span><span class="p">.</span><span class="nx">theme</span> <span class="o">=</span> <span class="dl">&#34;</span><span class="s2">dark</span><span class="dl">&#34;</span><span class="p">;</span>
    <span class="p">}</span> <span class="k">else</span> <span class="k">if </span><span class="p">(</span><span class="nx">localStorage</span><span class="p">.</span><span class="nf">getItem</span><span class="p">(</span><span class="dl">&#34;</span><span class="s2">pref-theme</span><span class="dl">&#34;</span><span class="p">)</span> <span class="o">===</span> <span class="dl">&#34;</span><span class="s2">light</span><span class="dl">&#34;</span><span class="p">)</span> <span class="p">{</span>
      <span class="nx">remark_config</span><span class="p">.</span><span class="nx">theme</span> <span class="o">=</span> <span class="dl">&#34;</span><span class="s2">light</span><span class="dl">&#34;</span><span class="p">;</span>
    <span class="p">}</span>
    <span class="cm">/* Add remark42 widget */</span>
    <span class="k">for</span><span class="p">(</span><span class="kd">var</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="nx">i</span> <span class="o">&lt;</span> <span class="nx">c</span><span class="p">.</span><span class="nx">length</span><span class="p">;</span> <span class="nx">i</span><span class="o">++</span><span class="p">){</span>
      <span class="kd">var</span> <span class="nx">d</span> <span class="o">=</span> <span class="nb">document</span><span class="p">,</span> <span class="nx">s</span> <span class="o">=</span> <span class="nx">d</span><span class="p">.</span><span class="nf">createElement</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">script</span><span class="dl">&#39;</span><span class="p">);</span>
      <span class="nx">s</span><span class="p">.</span><span class="nx">src</span> <span class="o">=</span> <span class="nx">remark_config</span><span class="p">.</span><span class="nx">host</span> <span class="o">+</span> <span class="dl">&#39;</span><span class="s1">/web/</span><span class="dl">&#39;</span> <span class="o">+</span> <span class="nx">c</span><span class="p">[</span><span class="nx">i</span><span class="p">]</span> <span class="o">+</span><span class="dl">&#39;</span><span class="s1">.js</span><span class="dl">&#39;</span><span class="p">;</span>
      <span class="nx">s</span><span class="p">.</span><span class="nx">defer</span> <span class="o">=</span> <span class="kc">true</span><span class="p">;</span>
      <span class="p">(</span><span class="nx">d</span><span class="p">.</span><span class="nx">head</span> <span class="o">||</span> <span class="nx">d</span><span class="p">.</span><span class="nx">body</span><span class="p">).</span><span class="nf">appendChild</span><span class="p">(</span><span class="nx">s</span><span class="p">);</span>
    <span class="p">}</span>
  <span class="p">})(</span><span class="nx">remark_config</span><span class="p">.</span><span class="nx">components</span> <span class="o">||</span> <span class="p">[</span><span class="dl">&#39;</span><span class="s1">embed</span><span class="dl">&#39;</span><span class="p">]);</span>
<span class="nt">&lt;/script&gt;</span></code></pre></figure>
<p>In development I use it with anonymous comments enabled, but to avoid spam
the production site uses social logins (for now I’ve only enabled <strong>GitHub</strong>
and <strong>Google</strong>; if someone requests additional providers I’ll look into them, but those
were the easiest ones for me to set up initially).</p>
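<p>The login providers are configured through environment variables of the
<code>remark42</code> backend; as a minimal sketch, the production env file could look
something like the following (the variable names come from the <code>remark42</code>
backend configuration documentation, the values are placeholders):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"># Disable anonymous comments in production
AUTH_ANON=false
# OAuth credentials created on each provider&#39;s developer console
AUTH_GITHUB_CID=your-github-client-id
AUTH_GITHUB_CSEC=your-github-client-secret
AUTH_GOOGLE_CID=your-google-client-id
AUTH_GOOGLE_CSEC=your-google-client-secret</code></pre></div>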
<p>To support theme switching with <code>remark42</code> I’ve also added the following inside
the <code>layouts/partials/extend_footer.html</code> file:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="html">{{- if (not site.Params.disableThemeToggle) }}
<span class="nt">&lt;script&gt;</span>
<span class="cm">/* Function to change theme when the toggle button is pressed */</span>
<span class="nb">document</span><span class="p">.</span><span class="nf">getElementById</span><span class="p">(</span><span class="dl">&#34;</span><span class="s2">theme-toggle</span><span class="dl">&#34;</span><span class="p">).</span><span class="nf">addEventListener</span><span class="p">(</span><span class="dl">&#34;</span><span class="s2">click</span><span class="dl">&#34;</span><span class="p">,</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span>
  <span class="k">if </span><span class="p">(</span><span class="k">typeof</span> <span class="nb">window</span><span class="p">.</span><span class="nx">REMARK42</span> <span class="o">!=</span> <span class="dl">&#34;</span><span class="s2">undefined</span><span class="dl">&#34;</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">if </span><span class="p">(</span><span class="nb">document</span><span class="p">.</span><span class="nx">body</span><span class="p">.</span><span class="nx">className</span><span class="p">.</span><span class="nf">includes</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">dark</span><span class="dl">&#39;</span><span class="p">))</span> <span class="p">{</span>
      <span class="nb">window</span><span class="p">.</span><span class="nx">REMARK42</span><span class="p">.</span><span class="nf">changeTheme</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">light</span><span class="dl">&#39;</span><span class="p">);</span>
    <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
      <span class="nb">window</span><span class="p">.</span><span class="nx">REMARK42</span><span class="p">.</span><span class="nf">changeTheme</span><span class="p">(</span><span class="dl">&#39;</span><span class="s1">dark</span><span class="dl">&#39;</span><span class="p">);</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">});</span>
<span class="nt">&lt;/script&gt;</span>
{{- end }}</code></pre></div>
<p>With this code, when the <code>theme-toggle</code> button is pressed we change the <code>remark42</code>
theme before the <code>PaperMod</code> one (that is only needed here; on page load the
<code>remark42</code> theme is synced with the main one by the code from
<code>layouts/partials/comments.html</code> shown earlier).</p></section>
<section class="doc-section level-1"><h2 id="_development_setup">Development setup</h2><p>To preview the site on my laptop I’m using <code>docker-compose</code> with the following
configuration:</p>
<figure class="listing-block"><figcaption>docker-compose.yaml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">services</span><span class="pi">:</span>
  <span class="na">hugo</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/hugo-adoc:latest</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">hugo-adoc-blogops</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">.:/documents</span>
    <span class="na">command</span><span class="pi">:</span> <span class="s">server --bind 0.0.0.0 -D -F</span>
    <span class="na">user</span><span class="pi">:</span> <span class="s">${APP_UID}:${APP_GID}</span>
  <span class="na">nginx</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">nginx:latest</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">nginx-blogops</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">./nginx/default.conf:/etc/nginx/conf.d/default.conf</span>
    <span class="na">ports</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">1313:1313</span>
  <span class="na">remark42</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/remark42:latest</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">remark42-blogops</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="na">env_file</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">./.env</span>
      <span class="pi">-</span> <span class="s">./remark42/env.dev</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">./remark42/var.dev:/srv/var</span></code></pre></figure>
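<p>The <code>nginx/default.conf</code> file mounted above is not shown here; as a minimal
sketch under stated assumptions, it could route requests to the other two
containers by service name, with the path used for the comments backend being an
assumption on my part (<code>8080</code> is the default <code>remark42</code> port):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="nginx">server {
    listen 1313;
    # Hypothetical routing: comments backend under /remark42/
    location /remark42/ {
        proxy_pass http://remark42:8080/;
        proxy_set_header Host $host;
    }
    # Everything else goes to the hugo live server
    location / {
        proxy_pass http://hugo:1313;
        proxy_set_header Host $host;
    }
}</code></pre></div>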
<p>To run it properly we have to create a <code>.env</code> file that sets the current user and
group IDs in the <code>APP_UID</code> and <code>APP_GID</code> variables (if we don’t, the generated files
can end up being owned by a user other than the one running the
services):</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">printf</span> <span class="s1">&#39;APP_UID=%s\nAPP_GID=%s\n&#39;</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-u</span><span class="si">)</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-g</span><span class="si">)</span><span class="s2">&#34;</span> <span class="o">&gt;</span> .env</code></pre></div>
<p>The <code>Dockerfile</code> used to generate the <code>hugo-adoc</code> image is:</p>
<figure class="listing-block"><figcaption>Dockerfile</figcaption>
<pre class="rouge highlight"><code data-lang="Dockerfile"><span class="k">FROM</span><span class="w"> </span><span class="s">golang:1.24.1-alpine3.21</span><span class="w"> </span><span class="k">AS</span><span class="w"> </span><span class="s">build</span>
<span class="k">ARG</span><span class="s"> HUGO_BUILD_TAGS=extended</span>
<span class="k">ARG</span><span class="s"> CGO=1</span>
<span class="k">ENV</span><span class="s"> CGO_ENABLED=${CGO}</span>
<span class="k">ENV</span><span class="s"> GOOS=linux</span>
<span class="k">ENV</span><span class="s"> GO111MODULE=on</span>
<span class="k">ENV</span><span class="s"> HUGO_VERSION=v0.145.0</span>
<span class="k">ENV</span><span class="s"> MAGE_VERSION=v1.15.0</span>
<span class="k">WORKDIR</span><span class="s"> /go/src/github.com/gohugoio/hugo</span>
<span class="k">RUN </span>apk update <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="c"># gcc/g++ are required to build SASS libraries for extended version</span>
 apk add --no-cache curl gcc g++ musl-dev git &amp;&amp;\
 download_url=&#34;https://api.github.com/repos/gohugoio/hugo/tarball/${HUGO_VERSION}&#34; &amp;&amp;\
 echo &#34;$download_url&#34; &amp;&amp;\
 curl -sL &#34;$download_url&#34; -o /tmp/hugo.tgz &amp;&amp;\
 tar xf /tmp/hugo.tgz -C . --strip-components=1 &amp;&amp;\
 go install github.com/magefile/mage@${MAGE_VERSION} &amp;&amp;\
 mage hugo &amp;&amp;\
 mage install &amp;&amp;\
 cd / &amp;&amp;\
 rm -rf /tmp/hugo.tgz /go/src/github.com/gohugoio/hugo/*

<span class="k">FROM</span><span class="s"> asciidoctor/docker-asciidoctor:1.82</span>
<span class="k">COPY</span><span class="s"> --from=build /go/bin/hugo /usr/bin/hugo</span>
<span class="k">RUN </span>gem <span class="nb">install</span> <span class="nt">--no-document</span> asciidoctor-html5s <span class="o">&amp;&amp;</span><span class="se">\
</span> apk update <span class="o">&amp;&amp;</span><span class="se">\
</span> apk add <span class="nt">--no-cache</span> ca-certificates libc6-compat libstdc++ git <span class="o">&amp;&amp;</span><span class="se">\
</span> /usr/bin/hugo version <span class="o">&amp;&amp;</span><span class="se">\
</span> <span class="nb">rm</span> <span class="nt">-rf</span> /var/cache/apk/<span class="k">*</span>
<span class="c"># Expose port for live server</span>
<span class="k">EXPOSE</span><span class="s"> 1313</span>
<span class="k">ENTRYPOINT</span><span class="s"> [&#34;/usr/bin/hugo&#34;]</span>
<span class="k">CMD</span><span class="s"> [&#34;&#34;]</span></code></pre></figure>
<p>If you review it you will see that I’m using the
<a href="https://github.com/asciidoctor/docker-asciidoctor" target="_blank" rel="noopener">docker-asciidoctor</a> image as
the base; it already has everything I need to work with <code>asciidoctor</code>, and to
add <code>hugo</code> I download its source and compile it in a builder container.</p>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>Originally I downloaded the binary from their latest release page on
<a href="https://github.com/gohugoio/hugo/releases/latest" target="_blank" rel="noopener">GitHub</a> but it stopped
working with <a href="https://www.alpinelinux.org/" target="_blank" rel="noopener">alpine</a> even after installing the
<code>libc6-compat</code> package, so I decided to compile it myself.</p></aside>
<p>The image does not launch the server by default because I don’t want it to; in
fact I use the same <code>docker-compose.yml</code> file to publish the site in production,
simply calling the container without the arguments defined in the
<code>docker-compose.yml</code> file (see later).</p>
<p>When running the containers with <code>docker-compose up</code> (or <code>docker compose up</code> if
you have the <code>docker-compose-plugin</code> package installed) we also launch an <code>nginx</code>
container and the <code>remark42</code> service so we can test everything together.</p>
<p>The <code>Dockerfile</code> for the <code>remark42</code> image is the original one with an updated
version of the <code>init.sh</code> script:</p>
<figure class="listing-block"><figcaption>Dockerfile</figcaption>
<pre class="rouge highlight"><code data-lang="Dockerfile"><span class="k">FROM</span><span class="s"> umputun/remark42:latest</span>
<span class="k">COPY</span><span class="s"> init.sh /init.sh</span></code></pre></figure>
<p>The updated <code>init.sh</code> is similar to the original, but allows us to use an
<code>APP_GID</code> variable and updates the <code>/etc/group</code> file of the container so the
files get the right user and group (with the original script the group is
always <code>1001</code>):</p>
<figure class="listing-block"><figcaption>init.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/sbin/dinit /bin/sh</span>

<span class="nv">uid</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-u</span><span class="si">)</span><span class="s2">&#34;</span>

<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">uid</span><span class="k">}</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">echo</span> <span class="s2">&#34;init container&#34;</span>

  <span class="c"># set container&#39;s time zone</span>
  <span class="nb">cp</span> <span class="s2">&#34;/usr/share/zoneinfo/</span><span class="k">${</span><span class="nv">TIME_ZONE</span><span class="k">}</span><span class="s2">&#34;</span> /etc/localtime
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">TIME_ZONE</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">&gt;</span>/etc/timezone
  <span class="nb">echo</span> <span class="s2">&#34;set timezone </span><span class="k">${</span><span class="nv">TIME_ZONE</span><span class="k">}</span><span class="s2"> (</span><span class="si">$(</span><span class="nb">date</span><span class="si">)</span><span class="s2">)&#34;</span>

  <span class="c"># set UID &amp; GID for the app</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">APP_UID</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">APP_GID</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
    <span class="o">[</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">APP_UID</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nv">APP_UID</span><span class="o">=</span><span class="s2">&#34;1001&#34;</span>
    <span class="o">[</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">APP_GID</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nv">APP_GID</span><span class="o">=</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">APP_UID</span><span class="k">}</span><span class="s2">&#34;</span>
    <span class="nb">echo</span> <span class="s2">&#34;set custom APP_UID=</span><span class="k">${</span><span class="nv">APP_UID</span><span class="k">}</span><span class="s2"> &amp; APP_GID=</span><span class="k">${</span><span class="nv">APP_GID</span><span class="k">}</span><span class="s2">&#34;</span>
    <span class="nb">sed</span> <span class="nt">-i</span> <span class="s2">&#34;s/^app:x:1001:1001:/app:x:</span><span class="k">${</span><span class="nv">APP_UID</span><span class="k">}</span><span class="s2">:</span><span class="k">${</span><span class="nv">APP_GID</span><span class="k">}</span><span class="s2">:/&#34;</span> /etc/passwd
    <span class="nb">sed</span> <span class="nt">-i</span> <span class="s2">&#34;s/^app:x:1001:/app:x:</span><span class="k">${</span><span class="nv">APP_GID</span><span class="k">}</span><span class="s2">:/&#34;</span> /etc/group
  <span class="k">else
    </span><span class="nb">echo</span> <span class="s2">&#34;custom APP_UID and/or APP_GID not defined, using 1001:1001&#34;</span>
  <span class="k">fi
  </span><span class="nb">chown</span> <span class="nt">-R</span> app:app /srv /home/app
<span class="k">fi

</span><span class="nb">echo</span> <span class="s2">&#34;prepare environment&#34;</span>

<span class="c"># replace {% REMARK_URL %} by content of REMARK_URL variable</span>
find /srv <span class="nt">-regex</span> <span class="s1">&#39;.*\.\(html\|js\|mjs\)$&#39;</span> <span class="nt">-print</span> <span class="se">\</span>
  <span class="nt">-exec</span> <span class="nb">sed</span> <span class="nt">-i</span> <span class="s2">&#34;s|{% REMARK_URL %}|</span><span class="k">${</span><span class="nv">REMARK_URL</span><span class="k">}</span><span class="s2">|g&#34;</span> <span class="o">{}</span> <span class="se">\;</span>

<span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">SITE_ID</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
  <span class="c">#replace &#34;site_id: &#39;remark&#39;&#34; by SITE_ID</span>
  <span class="nb">sed</span> <span class="nt">-i</span> <span class="s2">&#34;s|&#39;remark&#39;|&#39;</span><span class="k">${</span><span class="nv">SITE_ID</span><span class="k">}</span><span class="s2">&#39;|g&#34;</span> /srv/web/<span class="k">*</span>.html
<span class="k">fi

</span><span class="nb">echo</span> <span class="s2">&#34;execute </span><span class="se">\&#34;</span><span class="nv">$*</span><span class="se">\&#34;</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">uid</span><span class="k">}</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">exec </span>su-exec app <span class="s2">&#34;</span><span class="nv">$@</span><span class="s2">&#34;</span>
<span class="k">else
  </span><span class="nb">exec</span> <span class="s2">&#34;</span><span class="nv">$@</span><span class="s2">&#34;</span>
<span class="k">fi</span></code></pre></figure>
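<p>To see what that group rewrite does, here is a minimal sketch that applies the
same <code>sed</code> substitution from <code>init.sh</code> to a sample <code>/etc/group</code> entry (the
<code>APP_GID</code> value is just an example):</p>

```shell
#!/bin/sh
# Apply the group rewrite from init.sh to a sample /etc/group line.
# APP_GID=1000 is an example value, not the one used in production.
APP_GID=1000
LINE="app:x:1001:"
echo "$LINE" | sed "s/^app:x:1001:/app:x:${APP_GID}:/"
# prints: app:x:1000:
```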
<p>The environment file used with <code>remark42</code> for development is quite minimal:</p>
<figure class="listing-block"><figcaption>env.dev</figcaption>
<pre class="rouge highlight"><code data-lang="ini"><span class="py">TIME_ZONE</span><span class="p">=</span><span class="s">Europe/Madrid</span>
<span class="py">REMARK_URL</span><span class="p">=</span><span class="s">http://localhost:1313/remark42</span>
<span class="py">SITE</span><span class="p">=</span><span class="s">blogops</span>
<span class="py">SECRET</span><span class="p">=</span><span class="s">123456</span>
<span class="py">ADMIN_SHARED_ID</span><span class="p">=</span><span class="s">sto</span>
<span class="py">AUTH_ANON</span><span class="p">=</span><span class="s">true</span>
<span class="py">EMOJI</span><span class="p">=</span><span class="s">true</span></code></pre></figure>
<p>And the <code>nginx/default.conf</code> file used to publish the service locally is simple
too:</p>
<figure class="listing-block"><figcaption>default.conf</figcaption>
<pre class="rouge highlight"><code data-lang="nginx"><span class="k">server</span> <span class="p">{</span> 
 <span class="kn">listen</span> <span class="mi">1313</span><span class="p">;</span>
 <span class="kn">server_name</span> <span class="s">localhost</span><span class="p">;</span>
 <span class="kn">location</span> <span class="n">/</span> <span class="p">{</span>
    <span class="kn">proxy_pass</span> <span class="s">http://hugo:1313</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">Host</span> <span class="nv">$http_host</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">X-Real-IP</span> <span class="nv">$remote_addr</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">X-Forwarded-For</span> <span class="nv">$proxy_add_x_forwarded_for</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">X-Forwarded-Proto</span> <span class="nv">$scheme</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">Upgrade</span> <span class="nv">$http_upgrade</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">Connection</span> <span class="s">&#34;upgrade&#34;</span><span class="p">;</span>
 <span class="p">}</span>
 <span class="kn">location</span> <span class="n">/remark42/</span> <span class="p">{</span>
    <span class="kn">rewrite</span> <span class="n">/remark42/</span><span class="s">(.*)</span> <span class="n">/</span><span class="nv">$1</span> <span class="s">break</span><span class="p">;</span>
    <span class="kn">proxy_pass</span> <span class="s">http://remark42:8080/</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">Host</span> <span class="nv">$http_host</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">X-Real-IP</span> <span class="nv">$remote_addr</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">X-Forwarded-For</span> <span class="nv">$proxy_add_x_forwarded_for</span><span class="p">;</span>
    <span class="kn">proxy_set_header</span> <span class="s">X-Forwarded-Proto</span> <span class="nv">$scheme</span><span class="p">;</span>
  <span class="p">}</span>
<span class="p">}</span></code></pre></figure></section>
<section class="doc-section level-1"><h2 id="_production_setup">Production setup</h2><p>The VM where I’m publishing the blog runs <a href="https://debian.org" target="_blank" rel="noopener">Debian GNU/Linux</a>
and uses binaries from local packages and applications packaged inside
containers.</p>
<p>To run the containers I’m using
<a href="https://docs.docker.com/engine/install/debian/" target="_blank" rel="noopener">docker-ce</a> (I could have used
<a href="https://podman.io/" target="_blank" rel="noopener">podman</a> instead, but <code>docker</code> was already installed on the
machine, so I stayed with it).</p>
<p>The binaries used in this project are provided by the following packages from
the <code>main</code> Debian repository:</p>
<div class="ulist"><ul><li><code>git</code> to clone &amp; pull the repository,</li><li><code>jq</code> to parse <code>json</code> files from <em>shell scripts</em>,</li><li><code>json2file-go</code> to save the webhook messages to files,</li><li><code>inotify-tools</code> to detect when new files are stored by <code>json2file-go</code> and
launch scripts to process them,</li><li><code>nginx</code> to publish the site using HTTPS and work as proxy for
<code>json2file-go</code> and <code>remark42</code> (I run it using a container),</li><li><code>task-spool</code> to queue the scripts that update the deployment.</li></ul></div>
<p>And I’m using <code>docker</code> and <code>docker compose</code> from the Debian packages in the
<code>docker</code> repository:</p>
<div class="ulist"><ul><li><code>docker-ce</code> to run the containers,</li><li><code>docker-compose-plugin</code> to run <code>docker compose</code> (it is a plugin, so no <code>-</code> in
the name).</li></ul></div>
<aside class="admonition-block note" role="note"><h6 class="block-title label-only"><span class="title-label">Note: </span></h6><p>In the following sections I’m assuming that the user doing the work belongs to
the <code>docker</code> group, that is, has permission to run <code>docker</code>.</p></aside>
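<p>A quick way to check that assumption (a hedged sketch; the <code>usermod</code> suggestion
is the usual Debian fix and requires logging in again to take effect):</p>

```shell
#!/bin/sh
# Check whether the current user belongs to the docker group.
if id -nG | grep -qw docker; then
  echo "user is in the docker group"
else
  echo "user is NOT in the docker group; a common fix is:"
  echo "  sudo usermod -aG docker \$USER  # then log in again"
fi
```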
<section class="doc-section level-2"><h3 id="_repository_checkout">Repository checkout</h3><p>To manage the <code>git</code> repository I’ve created a deploy key, added it to <code>forgejo</code>
and cloned the project into the <code>/srv/blogops</code> path (that path is owned by a
regular user that has permission to run <code>docker</code>, as I said before).</p></section>
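<p>The deploy key setup can be sketched like this (the key path, comment and
repository URL below are illustrative assumptions, not the exact values I used):</p>

```shell
#!/bin/sh
# Sketch: create a dedicated SSH deploy key for the blogops repository.
# KEY_DIR can be overridden; the default is the usual ~/.ssh directory.
KEY_DIR="${KEY_DIR:-$HOME/.ssh}"
KEY="$KEY_DIR/blogops-deploy"
mkdir -p "$KEY_DIR"
[ -f "$KEY" ] || ssh-keygen -t ed25519 -N '' -f "$KEY" -C 'blogops deploy key'
# Paste the public key into the repository's deploy keys page in forgejo:
cat "$KEY.pub"
# Then clone using that key (example URL):
#   GIT_SSH_COMMAND="ssh -i $KEY" \
#     git clone git@forgejo.mixinet.net:mixinet/blogops.git /srv/blogops
```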
<section class="doc-section level-2"><h3 id="_compiling_the_site_with_hugo">Compiling the site with <code>hugo</code></h3><p>To compile the site we use the <code>docker-compose.yml</code> file seen before; to be
able to run it we first build the container images, and once we have them we
launch <code>hugo</code> using <code>docker compose run</code>:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="console"><span class="gp">$</span><span class="w"> </span><span class="nb">cd</span> /srv/blogops
<span class="gp">$</span><span class="w"> </span>git pull
<span class="gp">$</span><span class="w"> </span>docker compose build
<span class="gp">$</span><span class="w"> </span><span class="k">if</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;./public&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then </span><span class="nb">rm</span> <span class="nt">-rf</span> ./public<span class="p">;</span> <span class="k">fi</span>
<span class="gp">$</span><span class="w"> </span>docker compose run hugo <span class="nt">--</span></code></pre></div>
<p>The compilation leaves the static HTML in <code>/srv/blogops/public</code> (we remove the
directory first because <code>hugo</code> does not clean the destination folder as
<code>jekyll</code> does).</p>
<p>The deploy script re-generates the site as described and moves the <code>public</code>
directory to its final place for publishing.</p></section>
<section class="doc-section level-2"><h3 id="_running_remark42_with_docker">Running <code>remark42</code> with docker</h3><p>On the <code>/srv/blogops/remark42</code> folder I have the following <code>docker-compose.yml</code>:</p>
<figure class="listing-block"><figcaption>docker-compose.yml</figcaption>
<pre class="rouge highlight"><code data-lang="yaml"><span class="na">version</span><span class="pi">:</span> <span class="s2">&#34;</span><span class="s">2&#34;</span>
<span class="na">services</span><span class="pi">:</span>
  <span class="na">remark42</span><span class="pi">:</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">forgejo.mixinet.net/oci/remark42:latest</span>
    <span class="na">env_file</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">../.env</span>
      <span class="pi">-</span> <span class="s">./env.prod</span>
    <span class="na">container_name</span><span class="pi">:</span> <span class="s">remark42</span>
    <span class="na">restart</span><span class="pi">:</span> <span class="s">always</span>
    <span class="na">volumes</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">./var.prod:/srv/var</span>
    <span class="na">ports</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">127.0.0.1:8042:8080</span></code></pre></figure>
<p>The <code>../.env</code> file is loaded to get the <code>APP_UID</code> and <code>APP_GID</code> variables that
are used by my version of the <code>init.sh</code> script to adjust file permissions, and
the <code>env.prod</code> file contains the rest of the settings for <code>remark42</code>, including
the social network tokens (see the
<a href="https://remark42.com/docs/configuration/parameters/" target="_blank" rel="noopener">remark42 documentation</a> for
the available parameters; I don’t include my configuration here because some of
them are <em>secrets</em>).</p></section>
<section class="doc-section level-2"><h3 id="_nginx_configuration">Nginx configuration</h3><p>The <code>nginx</code> configuration for the <code>blogops.mixinet.net</code> site is as simple as:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="nginx"><span class="k">server</span> <span class="p">{</span>
  <span class="kn">listen</span> <span class="mi">443</span> <span class="s">ssl</span> <span class="s">http2</span><span class="p">;</span>
  <span class="kn">server_name</span> <span class="s">blogops.mixinet.net</span><span class="p">;</span>
  <span class="kn">ssl_certificate</span> <span class="n">/etc/letsencrypt/live/blogops.mixinet.net/fullchain.pem</span><span class="p">;</span>
  <span class="kn">ssl_certificate_key</span> <span class="n">/etc/letsencrypt/live/blogops.mixinet.net/privkey.pem</span><span class="p">;</span>
  <span class="kn">include</span> <span class="n">/etc/letsencrypt/options-ssl-nginx.conf</span><span class="p">;</span>
  <span class="kn">ssl_dhparam</span> <span class="n">/etc/letsencrypt/ssl-dhparams.pem</span><span class="p">;</span>
  <span class="kn">access_log</span> <span class="n">/var/log/nginx/blogops.mixinet.net-443.access.log</span><span class="p">;</span>
  <span class="kn">error_log</span>  <span class="n">/var/log/nginx/blogops.mixinet.net-443.error.log</span><span class="p">;</span>
  <span class="kn">root</span> <span class="n">/srv/blogops/nginx/public_html</span><span class="p">;</span>
  <span class="kn">location</span> <span class="n">/</span> <span class="p">{</span>
    <span class="kn">try_files</span> <span class="nv">$uri</span> <span class="nv">$uri</span><span class="n">/</span> <span class="p">=</span><span class="mi">404</span><span class="p">;</span>
  <span class="p">}</span>
  <span class="kn">include</span> <span class="n">/srv/blogops/nginx/remark42.conf</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">server</span> <span class="p">{</span>
  <span class="kn">listen</span> <span class="mi">80</span> <span class="p">;</span>
  <span class="kn">listen</span> <span class="s">[::]:80</span> <span class="p">;</span>
  <span class="kn">server_name</span> <span class="s">blogops.mixinet.net</span><span class="p">;</span>
  <span class="kn">access_log</span> <span class="n">/var/log/nginx/blogops.mixinet.net-80.access.log</span><span class="p">;</span>
  <span class="kn">error_log</span>  <span class="n">/var/log/nginx/blogops.mixinet.net-80.error.log</span><span class="p">;</span>
  <span class="kn">if</span> <span class="s">(</span><span class="nv">$host</span> <span class="p">=</span> <span class="s">blogops.mixinet.net)</span> <span class="p">{</span>
    <span class="kn">return</span> <span class="mi">301</span> <span class="s">https://</span><span class="nv">$host$request_uri</span><span class="p">;</span>
  <span class="p">}</span>
  <span class="kn">return</span> <span class="mi">404</span><span class="p">;</span>
<span class="p">}</span></code></pre></div>
<p>On this configuration the certificates are managed by
<a href="https://certbot.eff.org/">certbot</a>, and the server root directory is
<code>/srv/blogops/nginx/public_html</code> rather than <code>/srv/blogops/public</code>; the reason
is that I want to be able to compile without affecting the running site. The
deployment script generates the site on <code>/srv/blogops/public</code> and, if all works
well, we rename folders to do the switch, making the change feel almost
atomic.</p></section>
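<p>The folder switch can be sketched as a small shell function; the directory
names below are assumptions based on the description, not the literal code of my
deploy script:</p>

```shell
#!/bin/sh
# Sketch: replace the live site directory with a freshly generated one
# using two renames, so the window where the site is missing is tiny.
switch_site() {
  new="$1"            # freshly compiled site, e.g. /srv/blogops/public
  live="$2"           # directory served by nginx
  old="${live}.old"   # previous version, kept until the next switch
  rm -rf "$old"
  mv "$live" "$old"   # move the running site out of the way
  mv "$new" "$live"   # put the new build in its place
}
```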
<section class="doc-section level-2"><h3 id="_json2file_go_configuration">json2file-go configuration</h3><p>As I have a working <a href="https://www.wireguard.com/">WireGuard VPN</a> between the
machine running <code>forgejo</code> at my home and the VM where the blog is served, I’m
going to configure <code>json2file-go</code> to listen for connections on a high port,
using a self-signed certificate and binding only to IP addresses reachable
through the VPN.</p>
<p>To do that we create a <code>systemd socket</code> to run <code>json2file-go</code> and adjust its
configuration to listen on a private IP (we use the <code>FreeBind</code> option in its
definition to be able to launch the service even when the IP is not available,
that is, when the VPN is down).</p>
<p>The following script can be used to set up the <code>json2file-go</code> configuration:</p>
<figure class="listing-block"><figcaption>setup-json2file.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>

<span class="nv">BASE_DIR</span><span class="o">=</span><span class="s2">&#34;/srv/blogops/webhook&#34;</span>
<span class="nv">J2F_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/json2file&#34;</span>
<span class="nv">TLS_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/tls&#34;</span>

<span class="nv">J2F_SERVICE_NAME</span><span class="o">=</span><span class="s2">&#34;json2file-go&#34;</span>
<span class="nv">J2F_SERVICE_DIR</span><span class="o">=</span><span class="s2">&#34;/etc/systemd/system/json2file-go.service.d&#34;</span>
<span class="nv">J2F_SERVICE_OVERRIDE</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$J2F_SERVICE_DIR</span><span class="s2">/override.conf&#34;</span>
<span class="nv">J2F_SOCKET_DIR</span><span class="o">=</span><span class="s2">&#34;/etc/systemd/system/json2file-go.socket.d&#34;</span>
<span class="nv">J2F_SOCKET_OVERRIDE</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$J2F_SOCKET_DIR</span><span class="s2">/override.conf&#34;</span>

<span class="nv">J2F_BASEDIR_FILE</span><span class="o">=</span><span class="s2">&#34;/etc/json2file-go/basedir&#34;</span>
<span class="nv">J2F_DIRLIST_FILE</span><span class="o">=</span><span class="s2">&#34;/etc/json2file-go/dirlist&#34;</span>
<span class="nv">J2F_CRT_FILE</span><span class="o">=</span><span class="s2">&#34;/etc/json2file-go/certfile&#34;</span>
<span class="nv">J2F_KEY_FILE</span><span class="o">=</span><span class="s2">&#34;/etc/json2file-go/keyfile&#34;</span>
<span class="nv">J2F_CRT_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$TLS_DIR</span><span class="s2">/crt.pem&#34;</span>
<span class="nv">J2F_KEY_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$TLS_DIR</span><span class="s2">/key.pem&#34;</span>

<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>

<span class="c"># Install packages used with json2file for the blogops site</span>
<span class="nb">sudo </span>apt update
<span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> json2file-go uuid
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">type </span>mkcert<span class="si">)</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> mkcert
<span class="k">fi
</span><span class="nb">sudo </span>apt clean

<span class="c"># Configuration file values</span>
<span class="nv">J2F_USER</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-u</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">J2F_GROUP</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-g</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">J2F_DIRLIST</span><span class="o">=</span><span class="s2">&#34;blogops:</span><span class="si">$(</span>uuid<span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">J2F_LISTEN_STREAM</span><span class="o">=</span><span class="s2">&#34;172.31.31.1:4443&#34;</span>

<span class="c"># Configure json2file</span>
<span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$J2F_DIR</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$J2F_DIR</span><span class="s2">&#34;</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;echo &#39;</span><span class="nv">$J2F_DIR</span><span class="s2">&#39; &gt;&#39;</span><span class="nv">$J2F_BASEDIR_FILE</span><span class="s2">&#39;&#34;</span>
<span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$TLS_DIR</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$TLS_DIR</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$J2F_CRT_PATH</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$J2F_KEY_PATH</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>mkcert <span class="nt">-cert-file</span> <span class="s2">&#34;</span><span class="nv">$J2F_CRT_PATH</span><span class="s2">&#34;</span> <span class="nt">-key-file</span> <span class="s2">&#34;</span><span class="nv">$J2F_KEY_PATH</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">hostname</span> <span class="nt">-f</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="k">fi
</span><span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;echo &#39;</span><span class="nv">$J2F_CRT_PATH</span><span class="s2">&#39; &gt;&#39;</span><span class="nv">$J2F_CRT_FILE</span><span class="s2">&#39;&#34;</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;echo &#39;</span><span class="nv">$J2F_KEY_PATH</span><span class="s2">&#39; &gt;&#39;</span><span class="nv">$J2F_KEY_FILE</span><span class="s2">&#39;&#34;</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;cat &gt;&#39;</span><span class="nv">$J2F_DIRLIST_FILE</span><span class="s2">&#39;&#34;</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$J2F_DIRLIST</span><span class="s2">&#34;</span> | <span class="nb">tr</span> <span class="s1">&#39;;&#39;</span> <span class="s1">&#39;\n&#39;</span><span class="si">)</span><span class="sh">
</span><span class="no">EOF

</span><span class="c"># Service override</span>
<span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$J2F_SERVICE_DIR</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">sudo mkdir</span> <span class="s2">&#34;</span><span class="nv">$J2F_SERVICE_DIR</span><span class="s2">&#34;</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;cat &gt;&#39;</span><span class="nv">$J2F_SERVICE_OVERRIDE</span><span class="s2">&#39;&#34;</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
[Service]
User=</span><span class="nv">$J2F_USER</span><span class="sh">
Group=</span><span class="nv">$J2F_GROUP</span><span class="sh">
</span><span class="no">EOF

</span><span class="c"># Socket override</span>
<span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$J2F_SOCKET_DIR</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">sudo mkdir</span> <span class="s2">&#34;</span><span class="nv">$J2F_SOCKET_DIR</span><span class="s2">&#34;</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;cat &gt;&#39;</span><span class="nv">$J2F_SOCKET_OVERRIDE</span><span class="s2">&#39;&#34;</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
[Socket]
# Set FreeBind to listen on missing addresses (the VPN can be down sometimes)
FreeBind=true
# Set ListenStream to nothing to clear its value and add the new value later
ListenStream=
ListenStream=</span><span class="nv">$J2F_LISTEN_STREAM</span><span class="sh">
</span><span class="no">EOF

</span><span class="c"># Restart and enable service</span>
<span class="nb">sudo </span>systemctl daemon-reload
<span class="nb">sudo </span>systemctl stop <span class="s2">&#34;</span><span class="nv">$J2F_SERVICE_NAME</span><span class="s2">&#34;</span>
<span class="nb">sudo </span>systemctl start <span class="s2">&#34;</span><span class="nv">$J2F_SERVICE_NAME</span><span class="s2">&#34;</span>
<span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="s2">&#34;</span><span class="nv">$J2F_SERVICE_NAME</span><span class="s2">&#34;</span>

<span class="c"># ----</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></figure>
<section class="admonition-block warning" role="doc-notice"><h6 class="block-title label-only"><span class="title-label">Warning: </span></h6><p>The script uses <code>mkcert</code> to create the temporary certificates; to install the
package on <code>bullseye</code> the <code>backports</code> repository must be available.</p></section></section>
<section class="doc-section level-2"><h3 id="_forgejo_configuration">Forgejo configuration</h3><p>To make forgejo use our <code>json2file-go</code> server we go to the project's
<code>hooks/forgejo/new</code> page and create a new webhook of type <code>forgejo</code>,
setting the target URL to <code><a class="bare" href="https://172.31.31.1:4443/blogops">https://172.31.31.1:4443/blogops</a></code> and, in the
secret field, the token generated with <code>uuid</code> by the setup script, which can be retrieved with:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="shell"><span class="nb">sed</span> <span class="nt">-n</span> <span class="nt">-e</span> <span class="s1">&#39;s/blogops://p&#39;</span> /etc/json2file-go/dirlist</code></pre></div>
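<p>For reference, <code>/etc/json2file-go/dirlist</code> stores one <code>dir:token</code> pair per line, so the <code>sed</code> call simply strips the directory prefix; a small sketch with an invented token (the real value is the <code>uuid</code> generated by the setup script):</p>

```shell
# Invented token for illustration; the real value is the uuid generated by
# the setup script and stored in /etc/json2file-go/dirlist.
dirlist="$(mktemp)"
printf 'blogops:d2c1a0b9-0000-4000-8000-invented0001\n' > "$dirlist"
# Print only the token configured for the 'blogops' directory
token="$(sed -n -e 's/blogops://p' "$dirlist")"
echo "$token"
rm -f "$dirlist"
```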
<p>The rest of the settings can be left as they are:</p>
<div class="ulist"><ul><li>Trigger on: <em>Push events</em></li><li>Branch filter: <code>*</code></li></ul></div>
<section class="admonition-block warning" role="doc-notice"><h6 class="block-title label-only"><span class="title-label">Warning: </span></h6><p>We are using an internal IP and a self-signed certificate, so we have to make
sure that the <code>[webhook]</code> section of the <code>app.ini</code> of our <code>forgejo</code>
server allows calls to that IP and skips TLS verification (the available
options are described in the
<a href="https://forgejo.org/docs/latest/admin/config-cheat-sheet/#webhook-webhook">forgejo
documentation</a>).</p>
<p>The <code>[webhook]</code> section of my server looks like this:</p>
<div class="listing-block"><pre class="rouge highlight"><code data-lang="ini"><span class="nn">[webhook]</span>
<span class="py">ALLOWED_HOST_LIST</span><span class="p">=</span><span class="s">private</span>
<span class="py">SKIP_TLS_VERIFY</span><span class="p">=</span><span class="s">true</span></code></pre></div></section>
<p>Once we have the <code>webhook</code> configured we can test it and, if it works, our
<code>json2file</code> server will store the payload as a file in the
<code>/srv/blogops/webhook/json2file/blogops/</code> folder.</p></section>
<section class="doc-section level-2"><h3 id="_the_json2file_spooler_script">The json2file spooler script</h3><p>With the previous configuration our system is ready to receive webhook calls
from <code>forgejo</code> and store the payloads as files, but we still need something to
process those files once they are saved on our machine.</p>
<p>One option would be a <code>cronjob</code> that polls for new files, but on Linux we can
do better with <code>inotify</code>: we will use the <code>inotifywait</code> command from
<code>inotify-tools</code> to watch the <code>json2file</code> output directory and execute a script
each time a new file is moved into it or closed after writing (the
<code>IN_CLOSE_WRITE</code> and <code>IN_MOVED_TO</code> events).</p>
<p>To avoid concurrency problems we are going to use <code>task-spooler</code> to launch the
scripts that process the webhooks through a single-slot queue, so they are
executed one at a time in FIFO order.</p>
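<p>The "one job at a time" guarantee can be illustrated with a small sketch; here <code>flock</code> from <code>util-linux</code> stands in for <code>task-spooler</code> (note the difference: <code>tsp</code> queues jobs asynchronously and returns immediately, while this sketch blocks the caller until the lock is free):</p>

```shell
# Sketch of serialized execution: every job takes the same exclusive lock,
# so job bodies never overlap. flock(1) is a stand-in for illustration only;
# the actual spooler uses tsp with TS_SLOTS=1 and does not block the caller.
LOCK_FILE="$(mktemp)"
run_serialized() {
  flock "$LOCK_FILE" sh -c "$1"
}
first="$(run_serialized 'echo job-1')"
second="$(run_serialized 'echo job-2')"
echo "$first $second"
```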
<p>The spooler script is this:</p>
<figure class="listing-block"><figcaption>blogops-spooler.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>

<span class="nv">BASE_DIR</span><span class="o">=</span><span class="s2">&#34;/srv/blogops/webhook&#34;</span>
<span class="nv">BIN_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/bin&#34;</span>
<span class="nv">TSP_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/tsp&#34;</span>

<span class="nv">WEBHOOK_COMMAND</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BIN_DIR</span><span class="s2">/blogops-webhook.sh&#34;</span>

<span class="c"># ---------</span>
<span class="c"># FUNCTIONS</span>
<span class="c"># ---------</span>

queue_job<span class="o">()</span> <span class="o">{</span>
  <span class="nb">echo</span> <span class="s2">&#34;Queuing job to process file &#39;</span><span class="nv">$1</span><span class="s2">&#39;&#34;</span>
  <span class="nv">TMPDIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$TSP_DIR</span><span class="s2">&#34;</span> <span class="nv">TS_SLOTS</span><span class="o">=</span><span class="s2">&#34;1&#34;</span> <span class="nv">TS_MAXFINISHED</span><span class="o">=</span><span class="s2">&#34;10&#34;</span> <span class="se">\</span>
    tsp <span class="nt">-n</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_COMMAND</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span>
<span class="o">}</span>

<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>

<span class="nv">INPUT_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$INPUT_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">echo</span> <span class="s2">&#34;Input directory &#39;</span><span class="nv">$INPUT_DIR</span><span class="s2">&#39; does not exist, aborting!&#34;</span>
  <span class="nb">exit </span>1
<span class="k">fi</span>

<span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$TSP_DIR</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$TSP_DIR</span><span class="s2">&#34;</span>

<span class="nb">echo</span> <span class="s2">&#34;Processing existing files under &#39;</span><span class="nv">$INPUT_DIR</span><span class="s2">&#39;&#34;</span>
find <span class="s2">&#34;</span><span class="nv">$INPUT_DIR</span><span class="s2">&#34;</span> <span class="nt">-type</span> f | <span class="nb">sort</span> | <span class="k">while </span><span class="nb">read</span> <span class="nt">-r</span> _filename<span class="p">;</span> <span class="k">do
  </span>queue_job <span class="s2">&#34;</span><span class="nv">$_filename</span><span class="s2">&#34;</span>
<span class="k">done</span>

<span class="c"># Use inotifywait to process new files</span>
<span class="nb">echo</span> <span class="s2">&#34;Watching for new files under &#39;</span><span class="nv">$INPUT_DIR</span><span class="s2">&#39;&#34;</span>
inotifywait <span class="nt">-q</span> <span class="nt">-m</span> <span class="nt">-e</span> close_write,moved_to <span class="nt">--format</span> <span class="s2">&#34;%w%f&#34;</span> <span class="nt">-r</span> <span class="s2">&#34;</span><span class="nv">$INPUT_DIR</span><span class="s2">&#34;</span> |
  <span class="k">while </span><span class="nb">read</span> <span class="nt">-r</span> _filename<span class="p">;</span> <span class="k">do
    </span>queue_job <span class="s2">&#34;</span><span class="nv">$_filename</span><span class="s2">&#34;</span>
  <span class="k">done</span>

<span class="c"># ----</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></figure>
<p>To run it as a daemon we install it as a <code>systemd</code> service using the following
script:</p>
<figure class="listing-block"><figcaption>setup-spooler.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>

<span class="nv">BASE_DIR</span><span class="o">=</span><span class="s2">&#34;/srv/blogops/webhook&#34;</span>
<span class="nv">BIN_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/bin&#34;</span>
<span class="nv">J2F_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/json2file&#34;</span>

<span class="nv">SPOOLER_COMMAND</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BIN_DIR</span><span class="s2">/blogops-spooler.sh &#39;</span><span class="nv">$J2F_DIR</span><span class="s2">&#39;&#34;</span>
<span class="nv">SPOOLER_SERVICE_NAME</span><span class="o">=</span><span class="s2">&#34;blogops-j2f-spooler&#34;</span>
<span class="nv">SPOOLER_SERVICE_FILE</span><span class="o">=</span><span class="s2">&#34;/etc/systemd/system/</span><span class="nv">$SPOOLER_SERVICE_NAME</span><span class="s2">.service&#34;</span>

<span class="c"># Configuration file values</span>
<span class="nv">J2F_USER</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-u</span><span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">J2F_GROUP</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">id</span> <span class="nt">-g</span><span class="si">)</span><span class="s2">&#34;</span>

<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>

<span class="c"># Install packages used with the webhook processor</span>
<span class="nb">sudo </span>apt update
<span class="nb">sudo </span>apt <span class="nb">install</span> <span class="nt">-y</span> inotify-tools jq task-spooler
<span class="nb">sudo </span>apt clean

<span class="c"># Configure process service</span>
<span class="nb">sudo </span>sh <span class="nt">-c</span> <span class="s2">&#34;cat &gt;&#39;</span><span class="nv">$SPOOLER_SERVICE_FILE</span><span class="s2">&#39;&#34;</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
[Install]
WantedBy=multi-user.target
[Unit]
Description=json2file processor for </span><span class="nv">$J2F_USER</span><span class="sh">
After=docker.service
[Service]
Type=simple
User=</span><span class="nv">$J2F_USER</span><span class="sh">
Group=</span><span class="nv">$J2F_GROUP</span><span class="sh">
ExecStart=</span><span class="nv">$SPOOLER_COMMAND</span><span class="sh">
</span><span class="no">EOF

</span><span class="c"># Restart and enable service</span>
<span class="nb">sudo </span>systemctl daemon-reload
<span class="nb">sudo </span>systemctl stop <span class="s2">&#34;</span><span class="nv">$SPOOLER_SERVICE_NAME</span><span class="s2">&#34;</span> <span class="o">||</span> <span class="nb">true
sudo </span>systemctl start <span class="s2">&#34;</span><span class="nv">$SPOOLER_SERVICE_NAME</span><span class="s2">&#34;</span>
<span class="nb">sudo </span>systemctl <span class="nb">enable</span> <span class="s2">&#34;</span><span class="nv">$SPOOLER_SERVICE_NAME</span><span class="s2">&#34;</span>

<span class="c"># ----</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></figure></section>
<section class="doc-section level-2"><h3 id="_the_forgejo_webhook_processor">The forgejo webhook processor</h3><p>Finally, the script that processes the JSON files does the following:</p>
<div class="olist arabic"><ol class="arabic"><li>First, it checks that the repository and branch are the expected ones,</li><li>Then, it fetches and checks out the commit referenced in the JSON file,</li><li>Once the files are updated, it compiles the site using <code>hugo</code> with <code>docker
compose</code>,</li><li>If the compilation succeeds the script renames directories to swap the old
version of the site with the new one.</li></ol></div>
<p>On failure the script aborts, but before doing so (and also after a successful
swap) the system sends an email with a log of what happened to the configured
address and/or to the user that pushed the updates.</p>
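<p>To read values from the webhook payload the script relies on <code>jq</code>'s <code>@sh</code> filter, which emits shell-safe assignments that can be <code>eval</code>'ed; a minimal sketch with an invented payload (real forgejo payloads carry many more fields):</p>

```shell
# Invented minimal payload; the script's ENV_VARS_QUERY extracts more fields
# (pusher name and email, clone URL, ...) with the same pattern.
payload='{"ref":"refs/heads/main","after":"abc123","repository":{"name":"blogops"}}'
# @sh quotes each interpolated value, so the generated assignments are safe
# to eval even if the JSON contains shell metacharacters.
eval "$(printf '%s' "$payload" | jq -r '
  (.           | @sh "gt_ref=\(.ref);"),
  (.           | @sh "gt_after=\(.after);"),
  (.repository | @sh "gt_repo_name=\(.name);")')"
echo "$gt_ref $gt_after $gt_repo_name"
```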
<p>The current script is this one:</p>
<figure class="listing-block"><figcaption>blogops-webhook.sh</figcaption>
<pre class="rouge highlight"><code data-lang="shell"><span class="c">#!/bin/sh</span>

<span class="nb">set</span> <span class="nt">-e</span>

<span class="c"># ---------</span>
<span class="c"># VARIABLES</span>
<span class="c"># ---------</span>

<span class="c"># Values</span>
<span class="nv">REPO_REF</span><span class="o">=</span><span class="s2">&#34;refs/heads/main&#34;</span>
<span class="nv">REPO_CLONE_URL</span><span class="o">=</span><span class="s2">&#34;https://forgejo.mixinet.net/mixinet/blogops.git&#34;</span>

<span class="nv">MAIL_PREFIX</span><span class="o">=</span><span class="s2">&#34;[BLOGOPS-WEBHOOK] &#34;</span>
<span class="c"># Address that gets all messages, leave it empty if not wanted</span>
<span class="nv">MAIL_TO_ADDR</span><span class="o">=</span><span class="s2">&#34;blogops@mixinet.net&#34;</span>
<span class="c"># If the following variable is set to &#39;true&#39; the pusher gets mail on failures</span>
<span class="nv">MAIL_ERRFILE</span><span class="o">=</span><span class="s2">&#34;false&#34;</span>
<span class="c"># If the following variable is set to &#39;true&#39; the pusher gets mail on success</span>
<span class="nv">MAIL_LOGFILE</span><span class="o">=</span><span class="s2">&#34;false&#34;</span>
<span class="c"># forgejo&#39;s conf/app.ini value of NO_REPLY_ADDRESS, it is used for email domains</span>
<span class="c"># when the KeepEmailPrivate option is enabled for a user</span>
<span class="nv">NO_REPLY_ADDRESS</span><span class="o">=</span><span class="s2">&#34;noreply.example.org&#34;</span>

<span class="c"># Directories</span>
<span class="nv">BASE_DIR</span><span class="o">=</span><span class="s2">&#34;/srv/blogops&#34;</span>

<span class="nv">PUBLIC_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/public&#34;</span>
<span class="nv">NGINX_BASE_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/nginx&#34;</span>
<span class="nv">PUBLIC_HTML_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$NGINX_BASE_DIR</span><span class="s2">/public_html&#34;</span>

<span class="nv">WEBHOOK_BASE_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">/webhook&#34;</span>
<span class="nv">WEBHOOK_SPOOL_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_BASE_DIR</span><span class="s2">/spool&#34;</span>
<span class="nv">WEBHOOK_ACCEPTED</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_SPOOL_DIR</span><span class="s2">/accepted&#34;</span>
<span class="nv">WEBHOOK_DEPLOYED</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_SPOOL_DIR</span><span class="s2">/deployed&#34;</span>
<span class="nv">WEBHOOK_REJECTED</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_SPOOL_DIR</span><span class="s2">/rejected&#34;</span>
<span class="nv">WEBHOOK_TROUBLED</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_SPOOL_DIR</span><span class="s2">/troubled&#34;</span>
<span class="nv">WEBHOOK_LOG_DIR</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_SPOOL_DIR</span><span class="s2">/log&#34;</span>

<span class="c"># Files</span>
<span class="nv">TODAY</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">date</span> +%Y%m%d<span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">OUTPUT_BASENAME</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">date</span> +%Y%m%d-%H%M%S.%N<span class="si">)</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_LOGFILE_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOG_DIR</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.log&#34;</span>
<span class="nv">WEBHOOK_ACCEPTED_JSON</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.json&#34;</span>
<span class="nv">WEBHOOK_ACCEPTED_LOGF</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.log&#34;</span>
<span class="nv">WEBHOOK_REJECTED_TODAY</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED</span><span class="s2">/</span><span class="nv">$TODAY</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_REJECTED_JSON</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED_TODAY</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.json&#34;</span>
<span class="nv">WEBHOOK_REJECTED_LOGF</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED_TODAY</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.log&#34;</span>
<span class="nv">WEBHOOK_DEPLOYED_TODAY</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED</span><span class="s2">/</span><span class="nv">$TODAY</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_DEPLOYED_JSON</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED_TODAY</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.json&#34;</span>
<span class="nv">WEBHOOK_DEPLOYED_LOGF</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED_TODAY</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.log&#34;</span>
<span class="nv">WEBHOOK_TROUBLED_TODAY</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED</span><span class="s2">/</span><span class="nv">$TODAY</span><span class="s2">&#34;</span>
<span class="nv">WEBHOOK_TROUBLED_JSON</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED_TODAY</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.json&#34;</span>
<span class="nv">WEBHOOK_TROUBLED_LOGF</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED_TODAY</span><span class="s2">/</span><span class="nv">$OUTPUT_BASENAME</span><span class="s2">.log&#34;</span>

<span class="c"># Query to get variables from a forgejo webhook json</span>
<span class="nv">ENV_VARS_QUERY</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>
  <span class="nb">printf</span> <span class="s2">&#34;%s&#34;</span> <span class="se">\</span>
    <span class="s1">&#39;(.           | @sh &#34;gt_ref=\(.ref);&#34;),&#39;</span> <span class="se">\</span>
    <span class="s1">&#39;(.           | @sh &#34;gt_after=\(.after);&#34;),&#39;</span> <span class="se">\</span>
    <span class="s1">&#39;(.repository | @sh &#34;gt_repo_clone_url=\(.clone_url);&#34;),&#39;</span> <span class="se">\</span>
    <span class="s1">&#39;(.repository | @sh &#34;gt_repo_name=\(.name);&#34;),&#39;</span> <span class="se">\</span>
    <span class="s1">&#39;(.pusher     | @sh &#34;gt_pusher_full_name=\(.full_name);&#34;),&#39;</span> <span class="se">\</span>
    <span class="s1">&#39;(.pusher     | @sh &#34;gt_pusher_email=\(.email);&#34;)&#39;</span>
<span class="si">)</span><span class="s2">&#34;</span>

<span class="c"># ---------</span>
<span class="c"># Functions</span>
<span class="c"># ---------</span>

webhook_log<span class="o">()</span> <span class="o">{</span>
  <span class="nb">echo</span> <span class="s2">&#34;</span><span class="si">$(</span><span class="nb">date</span> <span class="nt">-R</span><span class="si">)</span><span class="s2"> </span><span class="nv">$*</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span>
<span class="o">}</span>

webhook_check_directories<span class="o">()</span> <span class="o">{</span>
  <span class="k">for </span>_d <span class="k">in</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_SPOOL_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOG_DIR</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">do</span>
    <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span>
  <span class="k">done</span>
<span class="o">}</span>

webhook_clean_directories<span class="o">()</span> <span class="o">{</span>
  <span class="c"># Try to remove empty dirs</span>
  <span class="k">for </span>_d <span class="k">in</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED</span><span class="s2">&#34;</span> <span class="se">\</span>
    <span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOG_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_SPOOL_DIR</span><span class="s2">&#34;</span><span class="p">;</span> <span class="k">do
    if</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
      </span><span class="nb">rmdir</span> <span class="s2">&#34;</span><span class="nv">$_d</span><span class="s2">&#34;</span> 2&gt;/dev/null <span class="o">||</span> <span class="nb">true
    </span><span class="k">fi
  done</span>
<span class="o">}</span>

webhook_accept<span class="o">()</span> <span class="o">{</span>
  webhook_log <span class="s2">&#34;Accepted: </span><span class="nv">$*</span><span class="s2">&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_JSON_INPUT_FILE</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED_JSON</span><span class="s2">&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED_LOGF</span><span class="s2">&#34;</span>
  <span class="nv">WEBHOOK_LOGFILE_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED_LOGF</span><span class="s2">&#34;</span>
<span class="o">}</span>

webhook_reject<span class="o">()</span> <span class="o">{</span>
  <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED_TODAY</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED_TODAY</span><span class="s2">&#34;</span>
  webhook_log <span class="s2">&#34;Rejected: </span><span class="nv">$*</span><span class="s2">&#34;</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_JSON_INPUT_FILE</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_JSON_INPUT_FILE</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED_JSON</span><span class="s2">&#34;</span>
  <span class="k">fi
  </span><span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_REJECTED_LOGF</span><span class="s2">&#34;</span>
  <span class="nb">exit </span>0
<span class="o">}</span>

webhook_deployed<span class="o">()</span> <span class="o">{</span>
  <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED_TODAY</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED_TODAY</span><span class="s2">&#34;</span>
  webhook_log <span class="s2">&#34;Deployed: </span><span class="nv">$*</span><span class="s2">&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED_JSON</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED_JSON</span><span class="s2">&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED_LOGF</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED_LOGF</span><span class="s2">&#34;</span>
  <span class="nv">WEBHOOK_LOGFILE_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_DEPLOYED_LOGF</span><span class="s2">&#34;</span>
<span class="o">}</span>

webhook_troubled<span class="o">()</span> <span class="o">{</span>
  <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED_TODAY</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="nb">mkdir</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED_TODAY</span><span class="s2">&#34;</span>
  webhook_log <span class="s2">&#34;Troubled: </span><span class="nv">$*</span><span class="s2">&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED_JSON</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED_JSON</span><span class="s2">&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_ACCEPTED_LOGF</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED_LOGF</span><span class="s2">&#34;</span>
  <span class="nv">WEBHOOK_LOGFILE_PATH</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_TROUBLED_LOGF</span><span class="s2">&#34;</span>
<span class="o">}</span>

print_mailto<span class="o">()</span> <span class="o">{</span>
  <span class="nv">_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span>
  <span class="nv">_user_email</span><span class="o">=</span><span class="s2">&#34;&#34;</span>
  <span class="c"># Add the pusher email address unless it is from the domain NO_REPLY_ADDRESS,</span>
  <span class="c"># which should match the value of that variable on the forgejo &#39;app.ini&#39; (it</span>
  <span class="c"># is the domain used for emails when the user hides it).</span>
  <span class="c"># shellcheck disable=SC2154</span>
  <span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">gt_pusher_email</span><span class="p">##*@</span><span class="s2">&#34;</span><span class="k">${</span><span class="nv">NO_REPLY_ADDRESS</span><span class="k">}</span><span class="s2">&#34;</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span>
    <span class="o">[</span> <span class="nt">-z</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">gt_pusher_email</span><span class="p">##*@*</span><span class="k">}</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">_user_email</span><span class="o">=</span><span class="s2">&#34;</span><span class="se">\&#34;</span><span class="nv">$gt_pusher_full_name</span><span class="s2"> &lt;</span><span class="nv">$gt_pusher_email</span><span class="s2">&gt;</span><span class="se">\&#34;</span><span class="s2">&#34;</span>
  <span class="k">fi
  if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$_addr</span><span class="s2">&#34;</span> <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$_user_email</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$_addr</span><span class="s2">,</span><span class="nv">$_user_email</span><span class="s2">&#34;</span>
  <span class="k">elif</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$_user_email</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$_user_email</span><span class="s2">&#34;</span>
  <span class="k">elif</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$_addr</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">&#34;</span><span class="nv">$_addr</span><span class="s2">&#34;</span>
  <span class="k">fi</span>
<span class="o">}</span>

mail_success<span class="o">()</span> <span class="o">{</span>
  <span class="nv">to_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$MAIL_TO_ADDR</span><span class="s2">&#34;</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$MAIL_LOGFILE</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;true&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">to_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>print_mailto <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
  <span class="k">fi
  if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
    <span class="c"># shellcheck disable=SC2154</span>
    <span class="nv">subject</span><span class="o">=</span><span class="s2">&#34;OK - </span><span class="nv">$gt_repo_name</span><span class="s2"> updated to commit &#39;</span><span class="nv">$gt_after</span><span class="s2">&#39;&#34;</span>
    mail <span class="nt">-s</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">MAIL_PREFIX</span><span class="k">}${</span><span class="nv">subject</span><span class="k">}</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> <span class="se">\</span>
      &lt;<span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span>
  <span class="k">fi</span>
<span class="o">}</span>

mail_failure<span class="o">()</span> <span class="o">{</span>
  <span class="nv">to_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$MAIL_TO_ADDR</span><span class="s2">&#34;</span>
  <span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$MAIL_ERRFILE</span><span class="s2">&#34;</span> <span class="o">=</span> <span class="s2">&#34;true&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">to_addr</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span>print_mailto <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>
  <span class="k">fi
  if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
    <span class="c"># shellcheck disable=SC2154</span>
    <span class="nv">subject</span><span class="o">=</span><span class="s2">&#34;KO - </span><span class="nv">$gt_repo_name</span><span class="s2"> update FAILED for commit &#39;</span><span class="nv">$gt_after</span><span class="s2">&#39;&#34;</span>
    mail <span class="nt">-s</span> <span class="s2">&#34;</span><span class="k">${</span><span class="nv">MAIL_PREFIX</span><span class="k">}${</span><span class="nv">subject</span><span class="k">}</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$to_addr</span><span class="s2">&#34;</span> <span class="se">\</span>
      &lt;<span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span>
  <span class="k">fi</span>
<span class="o">}</span>

<span class="c"># ----</span>
<span class="c"># MAIN</span>
<span class="c"># ----</span>
<span class="c"># Check directories</span>
webhook_check_directories

<span class="c"># Go to the base directory</span>
<span class="nb">cd</span> <span class="s2">&#34;</span><span class="nv">$BASE_DIR</span><span class="s2">&#34;</span>

<span class="c"># Check if the file exists</span>
<span class="nv">WEBHOOK_JSON_INPUT_FILE</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$1</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_JSON_INPUT_FILE</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_reject <span class="s2">&#34;Input arg &#39;</span><span class="nv">$1</span><span class="s2">&#39; is not a file, aborting&#34;</span>
<span class="k">fi</span>

<span class="c"># Parse the file</span>
webhook_log <span class="s2">&#34;Processing file &#39;</span><span class="nv">$WEBHOOK_JSON_INPUT_FILE</span><span class="s2">&#39;&#34;</span>
<span class="nb">eval</span> <span class="s2">&#34;</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s2">&#34;</span><span class="nv">$ENV_VARS_QUERY</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$WEBHOOK_JSON_INPUT_FILE</span><span class="s2">&#34;</span><span class="si">)</span><span class="s2">&#34;</span>

<span class="c"># Check that the repository clone url is right</span>
<span class="c"># shellcheck disable=SC2154</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$gt_repo_clone_url</span><span class="s2">&#34;</span> <span class="o">!=</span> <span class="s2">&#34;</span><span class="nv">$REPO_CLONE_URL</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_reject <span class="s2">&#34;Wrong repository: &#39;</span><span class="nv">$gt_repo_clone_url</span><span class="s2">&#39;&#34;</span>
<span class="k">fi</span>

<span class="c"># Check that the branch is the right one</span>
<span class="c"># shellcheck disable=SC2154</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$gt_ref</span><span class="s2">&#34;</span> <span class="o">!=</span> <span class="s2">&#34;</span><span class="nv">$REPO_REF</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_reject <span class="s2">&#34;Wrong repository ref: &#39;</span><span class="nv">$gt_ref</span><span class="s2">&#39;&#34;</span>
<span class="k">fi</span>

<span class="c"># Accept the file</span>
<span class="c"># shellcheck disable=SC2154</span>
webhook_accept <span class="s2">&#34;Processing &#39;</span><span class="nv">$gt_repo_name</span><span class="s2">&#39;&#34;</span>

<span class="c"># Update the checkout</span>
<span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;0&#34;</span>
git fetch <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_troubled <span class="s2">&#34;Repository fetch failed&#34;</span>
  mail_failure
<span class="k">fi</span>
<span class="c"># shellcheck disable=SC2154</span>
git checkout <span class="s2">&#34;</span><span class="nv">$gt_after</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_troubled <span class="s2">&#34;Repository checkout failed&#34;</span>
  mail_failure
<span class="k">fi</span>

<span class="c"># Remove the build dir if present</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span><span class="nb">rm</span> <span class="nt">-rf</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span>
<span class="k">fi</span>

<span class="c"># Build site</span>
docker compose run hugo <span class="nt">--</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="c"># go back to the main branch</span>
git switch main <span class="o">&amp;&amp;</span> git pull
<span class="c"># Fail if public dir was missing</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_troubled <span class="s2">&#34;Site build failed&#34;</span>
  mail_failure
<span class="k">fi</span>

<span class="c"># Remove old public_html copies</span>
webhook_log <span class="s1">&#39;Removing old site versions, if present&#39;</span>
find <span class="s2">&#34;</span><span class="nv">$NGINX_BASE_DIR</span><span class="s2">&#34;</span> <span class="nt">-mindepth</span> 1 <span class="nt">-maxdepth</span> 1 <span class="nt">-name</span> <span class="s1">&#39;public_html-*&#39;</span> <span class="nt">-type</span> d <span class="se">\</span>
  <span class="nt">-exec</span> <span class="nb">rm</span> <span class="nt">-rf</span> <span class="o">{}</span> <span class="se">\;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span> <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_troubled <span class="s2">&#34;Removal of old site versions failed&#34;</span>
  mail_failure
<span class="k">fi</span>
<span class="c"># Switch site directory</span>
<span class="nv">TS</span><span class="o">=</span><span class="s2">&#34;</span><span class="si">$(</span><span class="nb">date</span> +%Y%m%d-%H%M%S<span class="si">)</span><span class="s2">&#34;</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-d</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_log <span class="s2">&#34;Moving &#39;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#39; to &#39;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">-</span><span class="nv">$TS</span><span class="s2">&#39;&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">-</span><span class="nv">$TS</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span>
    <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">fi
if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-eq</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_log <span class="s2">&#34;Moving &#39;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#39; to &#39;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#39;&#34;</span>
  <span class="nb">mv</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_DIR</span><span class="s2">&#34;</span> <span class="s2">&#34;</span><span class="nv">$PUBLIC_HTML_DIR</span><span class="s2">&#34;</span> <span class="o">&gt;&gt;</span><span class="s2">&#34;</span><span class="nv">$WEBHOOK_LOGFILE_PATH</span><span class="s2">&#34;</span> 2&gt;&amp;1 <span class="o">||</span>
    <span class="nv">ret</span><span class="o">=</span><span class="s2">&#34;</span><span class="nv">$?</span><span class="s2">&#34;</span>
<span class="k">fi
if</span> <span class="o">[</span> <span class="s2">&#34;</span><span class="nv">$ret</span><span class="s2">&#34;</span> <span class="nt">-ne</span> <span class="s2">&#34;0&#34;</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
  </span>webhook_troubled <span class="s2">&#34;Site switch failed&#34;</span>
  mail_failure
<span class="k">else
  </span>webhook_deployed <span class="s2">&#34;Site deployed successfully&#34;</span>
  mail_success
<span class="k">fi</span>

<span class="c"># ----</span>
<span class="c"># vim: ts=2:sw=2:et:ai:sts=2</span></code></pre></figure></section></section>
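<p>The <code>eval "$(jq -r "$ENV_VARS_QUERY" …)"</code> call in the script is the trick that turns selected JSON fields into shell variables. A minimal, self-contained sketch of the pattern follows; the simplified query and sample payload are illustrative assumptions, not the script&#39;s real <code>ENV_VARS_QUERY</code>:</p>

```shell
#!/bin/sh
# Illustration of the jq-to-shell-variables pattern used by the script.
# The query below is a simplified stand-in for the real ENV_VARS_QUERY.
ENV_VARS_QUERY='
  "gt_ref=\(.ref | @sh)",
  "gt_after=\(.after | @sh)",
  "gt_repo_name=\(.repository.name | @sh)"
'
# Sample payload standing in for the webhook JSON file on disk
payload='{"ref":"refs/heads/main","after":"abc123","repository":{"name":"blogops"}}'
# Each jq output line is a shell assignment; @sh quotes the values safely
eval "$(printf '%s' "$payload" | jq -r "$ENV_VARS_QUERY")"
echo "$gt_repo_name updated to $gt_after on $gt_ref"
```

<p>In the real script the query is applied to the JSON file saved by <code>json2file-go</code> instead of an inline payload, but the mechanism is the same.</p>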
]]></content:encoded>
    </item>
    
    <item>
      <title>New Blog</title>
      <link>https://blogops.mixinet.net/posts/new_blog/</link>
      <pubDate>Mon, 23 May 2022 00:00:00 +0200</pubDate>
      
      <guid>https://blogops.mixinet.net/posts/new_blog/</guid>
      <description>&lt;p&gt;Welcome to my new Blog for Technical Stuff.&lt;/p&gt;
&lt;p&gt;For a long time I had been planning to start publishing technical articles again,
but first I wanted to replace my old &lt;a href=&#34;https://mixinet.net/~sto/blog/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;blog&lt;/a&gt;,
based on &lt;a href=&#34;https://ikiwiki.info/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;ikiwiki&lt;/a&gt;, with something more modern.&lt;/p&gt;
&lt;p&gt;I’ve used &lt;a href=&#34;https://jekyllrb.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Jekyll&lt;/a&gt; with &lt;a href=&#34;https://about.gitlab.com&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;GitLab&lt;/a&gt;
&lt;a href=&#34;https://docs.gitlab.com/ee/user/project/pages/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Pages&lt;/a&gt; to build the Intranet of
the &lt;a href=&#34;https://iti.es/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;ITI&lt;/a&gt; and to generate internal documentation sites on
&lt;a href=&#34;https://www.agilecontent.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Agile Content&lt;/a&gt;, but, as happened with &lt;strong&gt;ikiwiki&lt;/strong&gt;,
I felt that things were kind of slow and not as easy to maintain as I would
like.&lt;/p&gt;
&lt;p&gt;So at &lt;a href=&#34;https://kyso.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Kyso&lt;/a&gt; (the company I currently work for) I switched to
&lt;a href=&#34;https://gohugo.io/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Hugo&lt;/a&gt; as the Static Site Generator (I still use &lt;strong&gt;GitLab
Pages&lt;/strong&gt; to automate the deployment, though), but the contents are written using
the &lt;a href=&#34;https://daringfireball.net/projects/markdown/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Markdown&lt;/a&gt; format, while my
personal preference is the &lt;a href=&#34;https://asciidoc.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Asciidoc&lt;/a&gt; format.&lt;/p&gt;
&lt;p&gt;One thing I liked about &lt;strong&gt;Jekyll&lt;/strong&gt; was that it was possible to use
&lt;a href=&#34;https://asciidoctor.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Asciidoctor&lt;/a&gt; to generate the HTML simply by using the
&lt;a href=&#34;https://github.com/asciidoctor/jekyll-asciidoc&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Jekyll Asciidoc&lt;/a&gt; plugin (I even
configured my site to generate &lt;strong&gt;PDF&lt;/strong&gt; documents from &lt;strong&gt;.adoc&lt;/strong&gt; files using the
&lt;a href=&#34;https://docs.asciidoctor.org/pdf-converter/latest/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Asciidoctor PDF converter&lt;/a&gt;)
and, luckily for me, that is also possible with &lt;strong&gt;Hugo&lt;/strong&gt;, so that is what I plan
to use on this blog; in fact, this post is written in &lt;strong&gt;.adoc&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;My plan is to start publishing articles about things I’m working on to keep
them documented for myself, and maybe they will be useful to someone else.&lt;/p&gt;
&lt;p&gt;The general intention is to write about Container Orchestration (mainly
Kubernetes), CI/CD tools (currently I’m using GitLab CE for that), System
Administration (with Debian GNU/Linux as my preferred OS) and that sort of
things.&lt;/p&gt;
&lt;p&gt;My next post will be about how I build, publish and update the Blog, but
I probably will not finish it until next week, once the site is fully
operational and the publishing system is tested.&lt;/p&gt;
&lt;section class=&#34;admonition-block caution&#34; role=&#34;doc-notice&#34;&gt;&lt;h6 class=&#34;block-title label-only&#34;&gt;&lt;span class=&#34;title-label&#34;&gt;Spoiler Alert: &lt;/span&gt;&lt;/h6&gt;&lt;p&gt;This is a personal site, so I’m using &lt;a href=&#34;https://forgejo.org/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;Forgejo&lt;/a&gt; to host the
code instead of &lt;strong&gt;GitLab&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;To handle the deployment I’ve configured
&lt;a href=&#34;https://salsa.debian.org/sto/json2file-go&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;json2file-go&lt;/a&gt; to save the data sent
by the hook calls and process it asynchronously using
&lt;a href=&#34;https://github.com/inotify-tools/inotify-tools&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;inotify-tools&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When a new file is detected, a script parses the JSON file using
&lt;a href=&#34;https://stedolan.github.io/jq/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;jq&lt;/a&gt; and builds and updates the site if
appropriate.&lt;/p&gt;&lt;/section&gt;</description>
      <content:encoded><![CDATA[<p>Welcome to my new Blog for Technical Stuff.</p>
<p>For a long time I had been planning to start publishing technical articles again,
but first I wanted to replace my old <a href="https://mixinet.net/~sto/blog/" target="_blank" rel="noopener">blog</a>,
based on <a href="https://ikiwiki.info/" target="_blank" rel="noopener">ikiwiki</a>, with something more modern.</p>
<p>I’ve used <a href="https://jekyllrb.com/" target="_blank" rel="noopener">Jekyll</a> with <a href="https://about.gitlab.com" target="_blank" rel="noopener">GitLab</a>
<a href="https://docs.gitlab.com/ee/user/project/pages/" target="_blank" rel="noopener">Pages</a> to build the Intranet of
the <a href="https://iti.es/" target="_blank" rel="noopener">ITI</a> and to generate internal documentation sites on
<a href="https://www.agilecontent.com/" target="_blank" rel="noopener">Agile Content</a>, but, as happened with <strong>ikiwiki</strong>,
I felt that things were kind of slow and not as easy to maintain as I would
like.</p>
<p>So at <a href="https://kyso.io/" target="_blank" rel="noopener">Kyso</a> (the company I currently work for) I switched to
<a href="https://gohugo.io/" target="_blank" rel="noopener">Hugo</a> as the Static Site Generator (I still use <strong>GitLab
Pages</strong> to automate the deployment, though), but the contents are written using
the <a href="https://daringfireball.net/projects/markdown/" target="_blank" rel="noopener">Markdown</a> format, while my
personal preference is the <a href="https://asciidoc.org/" target="_blank" rel="noopener">Asciidoc</a> format.</p>
<p>One thing I liked about <strong>Jekyll</strong> was that it was possible to use
<a href="https://asciidoctor.org/" target="_blank" rel="noopener">Asciidoctor</a> to generate the HTML simply by using the
<a href="https://github.com/asciidoctor/jekyll-asciidoc" target="_blank" rel="noopener">Jekyll Asciidoc</a> plugin (I even
configured my site to generate <strong>PDF</strong> documents from <strong>.adoc</strong> files using the
<a href="https://docs.asciidoctor.org/pdf-converter/latest/" target="_blank" rel="noopener">Asciidoctor PDF converter</a>)
and, luckily for me, that is also possible with <strong>Hugo</strong>, so that is what I plan
to use on this blog; in fact, this post is written in <strong>.adoc</strong>.</p>
<p>My plan is to start publishing articles about things I’m working on to keep
them documented for myself, and maybe they will be useful to someone else.</p>
<p>The general intention is to write about Container Orchestration (mainly
Kubernetes), CI/CD tools (currently I’m using GitLab CE for that), System
Administration (with Debian GNU/Linux as my preferred OS) and that sort of
things.</p>
<p>My next post will be about how I build, publish and update the Blog, but
I probably will not finish it until next week, once the site is fully
operational and the publishing system is tested.</p>
<section class="admonition-block caution" role="doc-notice"><h6 class="block-title label-only"><span class="title-label">Spoiler Alert: </span></h6><p>This is a personal site, so I’m using <a href="https://forgejo.org/" target="_blank" rel="noopener">Forgejo</a> to host the
code instead of <strong>GitLab</strong>.</p>
<p>To handle the deployment I’ve configured
<a href="https://salsa.debian.org/sto/json2file-go" target="_blank" rel="noopener">json2file-go</a> to save the data sent
by the hook calls and process it asynchronously using
<a href="https://github.com/inotify-tools/inotify-tools" target="_blank" rel="noopener">inotify-tools</a>.</p>
<p>When a new file is detected, a script parses the JSON file using
<a href="https://stedolan.github.io/jq/" target="_blank" rel="noopener">jq</a> and builds and updates the site if
appropriate.</p></section>
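<p>A rough sketch of that watch-and-process loop, assuming <code>inotifywait</code> from <strong>inotify-tools</strong>, a hypothetical spool directory, and a <code>main</code>-branch check (the real paths and deploy logic live in the site&#39;s scripts):</p>

```shell
#!/bin/sh
# Sketch of the async hook processing described above; the spool path,
# branch check and 'watch' entry point are illustrative assumptions.
SPOOL_DIR="${SPOOL_DIR:-/var/spool/json2file}"

# Decide what to do with one JSON payload dropped by json2file-go
process_payload() {
  _ref="$(jq -r '.ref // empty' "$1")"
  if [ "$_ref" = "refs/heads/main" ]; then
    echo "deploy"
  else
    echo "skip"
  fi
}

# When run with the 'watch' argument, handle each new file as it appears
if [ "${1:-}" = "watch" ]; then
  inotifywait -m -e close_write --format '%w%f' "$SPOOL_DIR" |
    while read -r _file; do
      [ "$(process_payload "$_file")" = "deploy" ] &&
        echo "rebuilding site for '$_file'"
    done
fi
```

<p>Using <code>close_write</code> means a payload is only processed once <code>json2file-go</code> has finished writing it, which avoids parsing half-saved files.</p>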
]]></content:encoded>
    </item>
    
    
    
  </channel>
</rss>
