<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>technovelty</title><link href="https://www.technovelty.org/" rel="alternate"></link><link href="https://www.technovelty.org/feeds/all.atom.xml" rel="self"></link><id>https://www.technovelty.org/</id><updated>2025-04-25T21:30:00+10:00</updated><entry><title>Avoiding layer shift on Ender V3 KE after pause</title><link href="https://www.technovelty.org/hacks/avoiding-layer-shift-on-ender-v3-ke-after-pause.html" rel="alternate"></link><published>2025-04-25T21:30:00+10:00</published><updated>2025-04-25T21:30:00+10:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2025-04-25:/hacks/avoiding-layer-shift-on-ender-v3-ke-after-pause.html</id><summary type="html">&lt;p&gt;With (at least) the &lt;tt class="docutils literal"&gt;V1.1.0.15&lt;/tt&gt; firmware on the Ender V3 KE 3d
printer the &lt;tt class="docutils literal"&gt;PAUSE&lt;/tt&gt; macro will cause the print head to run too far
on the Y axis, which causes a small layer shift when the print
returns.  I guess the idea is to expose the …&lt;/p&gt;</summary><content type="html">&lt;p&gt;With (at least) the &lt;tt class="docutils literal"&gt;V1.1.0.15&lt;/tt&gt; firmware on the Ender V3 KE 3d
printer the &lt;tt class="docutils literal"&gt;PAUSE&lt;/tt&gt; macro will cause the print head to run too far
on the Y axis, which causes a small layer shift when the print
returns.  I guess the idea is to expose the build plate as much as
possible by moving the head as far to the side and back as possible,
but the overrun and consequent belt slip unfortunately make it mostly
useless; the main use of this is probably to switch filaments for
two-colour prints.&lt;/p&gt;
&lt;p&gt;Luckily you can fairly easily enable &lt;tt class="docutils literal"&gt;root&lt;/tt&gt; access on the control
pad from the settings menu.  After doing this you can &lt;tt class="docutils literal"&gt;ssh&lt;/tt&gt; to its
IP address with the default password &lt;tt class="docutils literal"&gt;Creality2023&lt;/tt&gt;.&lt;/p&gt;
&lt;p&gt;From there you can modify the
&lt;tt class="docutils literal"&gt;/usr/data/printer_data/config/gcode_macro.cfg&lt;/tt&gt; file (&lt;tt class="docutils literal"&gt;vi&lt;/tt&gt; is
available) to change the details of the &lt;tt class="docutils literal"&gt;PAUSE&lt;/tt&gt; macro.  Find the
section &lt;tt class="docutils literal"&gt;[gcode_macro PAUSE]&lt;/tt&gt; and modify &lt;tt class="docutils literal"&gt;{% set y_park = 255 %}&lt;/tt&gt;
to a more reasonable value like &lt;tt class="docutils literal"&gt;150&lt;/tt&gt;.  Save the file and reboot the
pad so the printing daemons restart.&lt;/p&gt;
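For reference, the relevant fragment ends up looking something like the sketch below; only the `y_park` value changes, and the surrounding macro lines (elided here) may differ between firmware versions:

```
[gcode_macro PAUSE]
# ... rest of the macro unchanged ...
{% set y_park = 150 %}   # was 255, which overruns the Y axis
```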
&lt;p&gt;On &lt;tt class="docutils literal"&gt;PAUSE&lt;/tt&gt; this then moves the head to the far left about half-way
down, which works fine for filament changes.  Hopefully a future
firmware version will update this; I will update this post if I find
it does.&lt;/p&gt;
&lt;p&gt;c.f. &lt;a class="reference external" href="https://forum.creality.com/t/ender-3-v3-ke-shifting-layers-after-pause/15220"&gt;Ender 3 V3 KE shifting layers after pause&lt;/a&gt;&lt;/p&gt;
</content><category term="hacks"></category></entry><entry><title>Redirecting webfinger requests with Apache</title><link href="https://www.technovelty.org/web/redirecting-webfinger-requests-with-apache.html" rel="alternate"></link><published>2022-12-28T07:51:00+11:00</published><updated>2022-12-28T07:51:00+11:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2022-12-28:/web/redirecting-webfinger-requests-with-apache.html</id><summary type="html">&lt;p&gt;If you have a personal domain, it is nice if you can redirect
&lt;a class="reference external" href="https://www.rfc-editor.org/rfc/rfc7033"&gt;webfinger&lt;/a&gt; requests so you
can be easily found via your email.  This is hardly a new idea, but
the growth of &lt;a class="reference external" href="https://mastodon.social/explore"&gt;Mastodon&lt;/a&gt; recently
has made this more prominent.&lt;/p&gt;
&lt;p&gt;I wanted to redirect webfinger endpoints to a Mastodon …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you have a personal domain, it is nice if you can redirect
&lt;a class="reference external" href="https://www.rfc-editor.org/rfc/rfc7033"&gt;webfinger&lt;/a&gt; requests so you
can be easily found via your email.  This is hardly a new idea, but
the growth of &lt;a class="reference external" href="https://mastodon.social/explore"&gt;Mastodon&lt;/a&gt; recently
has made this more prominent.&lt;/p&gt;
&lt;p&gt;I wanted to redirect webfinger endpoints to a Mastodon host I am
using, but only for my email, and using only standard Apache rewrites.  Below,
replace &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;xxx&amp;#64;yyy\.com&lt;/span&gt;&lt;/tt&gt; with your email and &lt;tt class="docutils literal"&gt;zzz.social&lt;/tt&gt; with the
account to be redirected to.  There are a couple of tricks in being
able to inspect the query-string and quoting, but the end result that
works for me is&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="nb"&gt;RewriteEngine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;On&lt;/span&gt;
&lt;span class="nb"&gt;RewriteMap&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;lc&lt;span class="w"&gt; &lt;/span&gt;int:tolower
&lt;span class="nb"&gt;RewriteMap&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;unescape&lt;span class="w"&gt; &lt;/span&gt;int:unescape

&lt;span class="nb"&gt;RewriteCond&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;%{REQUEST_URI}&lt;span class="w"&gt; &lt;/span&gt;^/\.well-known/webfinger$
&lt;span class="nb"&gt;RewriteCond&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;${lc:${unescape:%{QUERY_STRING}}}&lt;span class="w"&gt; &lt;/span&gt;(?:^|&amp;amp;)resource=acct:xxx@yyy\.com(?:$|&amp;amp;)
&lt;span class="nb"&gt;RewriteRule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;^(.*)$&lt;span class="w"&gt; &lt;/span&gt;https://zzz.social/.well-known/webfinger?resource=acct:xxx@zzz.social&lt;span class="w"&gt; &lt;/span&gt;[L,R=302]

&lt;span class="nb"&gt;RewriteCond&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;%{REQUEST_URI}&lt;span class="w"&gt; &lt;/span&gt;^/\.well-known/host-meta$
&lt;span class="nb"&gt;RewriteCond&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;${lc:${unescape:%{QUERY_STRING}}}&lt;span class="w"&gt; &lt;/span&gt;(?:^|&amp;amp;)resource=acct:xxx@yyy\.com(?:$|&amp;amp;)
&lt;span class="nb"&gt;RewriteRule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;^(.*)$&lt;span class="w"&gt; &lt;/span&gt;https://zzz.social/.well-known/host-meta?resource=acct:xxx@zzz.social&lt;span class="w"&gt; &lt;/span&gt;[L,R=302]

&lt;span class="nb"&gt;RewriteCond&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;%{REQUEST_URI}&lt;span class="w"&gt; &lt;/span&gt;^/\.well-known/nodeinfo$
&lt;span class="nb"&gt;RewriteCond&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;${lc:${unescape:%{QUERY_STRING}}}&lt;span class="w"&gt; &lt;/span&gt;(?:^|&amp;amp;)resource=acct:xxx@yyy\.org(?:$|&amp;amp;)
&lt;span class="nb"&gt;RewriteRule&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;^(.*)$&lt;span class="w"&gt; &lt;/span&gt;https://zzz.social/.well-known/nodeinfo?resource=acct:xxx@zzz.social&lt;span class="w"&gt; &lt;/span&gt;[L,R=302]
&lt;/pre&gt;&lt;/div&gt;
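The unescape-then-lowercase matching done by the RewriteMaps above can be sanity-checked outside Apache; the following is a rough Python equivalent of that pipeline (a throwaway helper for testing, not part of the config):

```python
import re
from urllib.parse import unquote


def webfinger_matches(query_string: str, acct: str = "xxx@yyy.com") -> bool:
    # Mimic Apache's int:unescape then int:tolower RewriteMaps
    qs = unquote(query_string).lower()
    # Same anchored pattern as the RewriteCond above
    pattern = r"(?:^|&)resource=acct:" + re.escape(acct) + r"(?:$|&)"
    return re.search(pattern, qs) is not None


# A percent-encoded or upper-cased address still matches after the maps run
print(webfinger_matches("resource=acct:xxx%40yyy.com"))   # True
print(webfinger_matches("resource=acct:other@yyy.com"))   # False
```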
&lt;p&gt;c.f. &lt;a class="reference external" href="https://blog.bofh.it/debian/id_464"&gt;https://blog.bofh.it/debian/id_464&lt;/a&gt;&lt;/p&gt;
</content><category term="web"></category></entry><entry><title>nutdrv_qx setup for Synology DSM7</title><link href="https://www.technovelty.org/hacks/nutdrv_qx-setup-for-synology-dsm7.html" rel="alternate"></link><published>2021-08-09T19:30:00+10:00</published><updated>2021-08-09T19:30:00+10:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2021-08-09:/hacks/nutdrv_qx-setup-for-synology-dsm7.html</id><summary type="html">&lt;p&gt;I have a cheap no-name UPS acquired from Jaycar and was wondering if I
could get it to connect to my Synology DS918+.  It rather unhelpfully
identifies itself as &lt;tt class="docutils literal"&gt;MEC0003&lt;/tt&gt; and comes with some blob of
non-working software on a CD; however some investigation found it
could maybe work on …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I have a cheap no-name UPS acquired from Jaycar and was wondering if I
could get it to connect to my Synology DS918+.  It rather unhelpfully
identifies itself as &lt;tt class="docutils literal"&gt;MEC0003&lt;/tt&gt; and comes with some blob of
non-working software on a CD; however some investigation found it
could maybe work on my Synology NAS using the &lt;a class="reference external" href="https://networkupstools.org/"&gt;Network UPS Tools&lt;/a&gt; &lt;tt class="docutils literal"&gt;nutdrv_qx&lt;/tt&gt; driver with the
&lt;tt class="docutils literal"&gt;hunnox&lt;/tt&gt; subdriver type.&lt;/p&gt;
&lt;p&gt;Unfortunately this is a fairly recent addition to the NUT source,
requiring rebuilding the driver for DSM7.  I don't fully understand
the Synology environment but I did get this working.  Firstly I
downloaded the toolchain from
&lt;a class="reference external" href="https://archive.synology.com/download/ToolChain/toolchain/"&gt;https://archive.synology.com/download/ToolChain/toolchain/&lt;/a&gt; and
extracted it.  I then used the script from
&lt;a class="reference external" href="https://github.com/SynologyOpenSource/pkgscripts-ng"&gt;https://github.com/SynologyOpenSource/pkgscripts-ng&lt;/a&gt; to download
some sort of build environment.  This appears to want root access and
possibly sets up some sort of chroot.  Anyway, for DSM7 on the DS918+
I ran &lt;tt class="docutils literal"&gt;EnvDeploy &lt;span class="pre"&gt;-v&lt;/span&gt; 7.0 &lt;span class="pre"&gt;-p&lt;/span&gt; apollolake&lt;/tt&gt; and it downloaded some
tarballs into &lt;tt class="docutils literal"&gt;toolkit_tarballs&lt;/tt&gt; that I simply extracted into the
same directory as the toolchain.&lt;/p&gt;
&lt;p&gt;I then grabbed the NUT source from
&lt;a class="reference external" href="https://github.com/networkupstools/nut"&gt;https://github.com/networkupstools/nut&lt;/a&gt; and built it
similar to the following&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;./autogen.sh
&lt;span class="nv"&gt;PATH_TO_TC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/your/path
&lt;span class="nb"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;CC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PATH_TO_CC&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-gcc
&lt;span class="nb"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;LD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PATH_TO_LD&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-ld

./configure&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;--prefix&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;--with-statepath&lt;span class="o"&gt;=&lt;/span&gt;/var/run/ups_state&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;--sysconfdir&lt;span class="o"&gt;=&lt;/span&gt;/etc/ups&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;--with-sysroot&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PATH_TO_TC&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;/usr/local/sysroot&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;--with-usb&lt;span class="o"&gt;=&lt;/span&gt;yes
&lt;span class="w"&gt;  &lt;/span&gt;--with-usb-libs&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;-L&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PATH_TO_TC&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/x86_64-pc-linux-gnu/x86_64-pc-linux-gnu/sys-root/usr/lib/ -lusb&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;--with-usb-includes&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;-I&lt;/span&gt;&lt;span class="si"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PATH_TO_TC&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/usr/local/sysroot/usr/include/&amp;quot;&lt;/span&gt;

make
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The tricks to be aware of are setting the locations where DSM wants
status/config files, and overriding the USB detection done by
&lt;tt class="docutils literal"&gt;configure&lt;/tt&gt;, which doesn't seem to obey the sysroot.&lt;/p&gt;
&lt;p&gt;If you would prefer to avoid this you can try this prebuilt &lt;a class="reference external" href="https://technovelty.org/files/nutdrv_qx_DSM7.0-41890.gz"&gt;nutdrv_qx&lt;/a&gt;
(&lt;tt class="docutils literal"&gt;ebb184505abd1ca1750e13bb9c5f991eaa999cbea95da94b20f66ae4bd02db41&lt;/tt&gt;).&lt;/p&gt;
&lt;p&gt;SSH to the DSM7 machine; as root move &lt;tt class="docutils literal"&gt;/usr/bin/nutdrv_qx&lt;/tt&gt; out of
the way to save it; scp the new version and move it into place.&lt;/p&gt;
&lt;p&gt;If you &lt;tt class="docutils literal"&gt;cat /dev/bus/usb/devices&lt;/tt&gt; I found this device has a &lt;tt class="docutils literal"&gt;Vendor
0001&lt;/tt&gt; and &lt;tt class="docutils literal"&gt;ProdID 0000&lt;/tt&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  3 Spd=1.5  MxCh= 0
D:  Ver= 2.00 Cls=00(&amp;gt;ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs=  1
P:  Vendor=0001 ProdID=0000 Rev= 1.00
S:  Product=MEC0003
S:  SerialNumber=ffffff87ffffffb7ffffff87ffffffb7
C:* #Ifs= 1 Cfg#= 1 Atr=80 MxPwr=100mA
I:* If#= 0 Alt= 0 #EPs= 2 Cls=03(HID  ) Sub=00 Prot=00 Driver=usbfs
E:  Ad=81(I) Atr=03(Int.) MxPS=   8 Ivl=10ms
E:  Ad=02(O) Atr=03(Int.) MxPS=   8 Ivl=10ms
&lt;/pre&gt;&lt;/div&gt;
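If you want to script pulling the ids out of that listing, the "P:" line carries everything NUT needs; a quick throwaway parser (nothing Synology-specific) might look like:

```python
import re

# Sample of the /dev/bus/usb/devices listing shown above
listing = """\
T:  Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#=  3 Spd=1.5  MxCh= 0
P:  Vendor=0001 ProdID=0000 Rev= 1.00
S:  Product=MEC0003
"""

# The "P:" line holds the vendor/product ids as four hex digits each
m = re.search(r"Vendor=([0-9a-fA-F]{4})\s+ProdID=([0-9a-fA-F]{4})", listing)
vendor, product = m.groups()
print(vendor, product)  # 0001 0000
```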
&lt;p&gt;DSM does a bunch of magic to autodetect and configure NUT when a UPS
is plugged in.  The first thing you'll need to do is edit
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;/etc/nutscan-usb.sh&lt;/span&gt;&lt;/tt&gt; and override where it tries to use the
&lt;tt class="docutils literal"&gt;blazer_usb&lt;/tt&gt; driver for this obviously incorrect vendor/product id.
The line should now look like&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="k"&gt;static&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;usb_device_id_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;usb_device_table&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mh"&gt;0x0001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mh"&gt;0x0000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;nutdrv_qx&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mh"&gt;0x03f0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mh"&gt;0x0001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;usbhid-ups&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;so&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Then you want to edit the file
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;/usr/syno/lib/systemd/scripts/ups-usb.sh&lt;/span&gt;&lt;/tt&gt; to start the
&lt;tt class="docutils literal"&gt;nutdrv_qx&lt;/tt&gt;; find the &lt;tt class="docutils literal"&gt;DRV_LIST&lt;/tt&gt; in that file and update it like
so:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;local DRV_LIST=&amp;quot;nutdrv_qx usbhid-ups blazer_usb bcmxcp_usb richcomm_usb tripplite_usb&amp;quot;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This is triggered by &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;/usr/lib/systemd/system/ups-usb.service&lt;/span&gt;&lt;/tt&gt; and
is ultimately what tries to setup the UPS configuration.&lt;/p&gt;
&lt;p&gt;Lastly, you will need to edit the &lt;tt class="docutils literal"&gt;/etc/ups/ups.conf&lt;/tt&gt; file.  This
will probably vary depending on your UPS.  One important thing is to
add &lt;tt class="docutils literal"&gt;user=root&lt;/tt&gt; above the driver; it seems recent NUT has become
more secure and drops permissions, but the result is that it will not find USB
devices in this environment (if you're getting something like &lt;tt class="docutils literal"&gt;no
appropriate HID device found&lt;/tt&gt; this is likely the cause).  So the
configuration should look something like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;user=root

[ups]
driver = nutdrv_qx
port = auto
subdriver = hunnox
vendorid = &amp;quot;0001&amp;quot;
productid = &amp;quot;0000&amp;quot;
langid_fix = 0x0409
novendor
noscanlangid
#pollonly
#community =
#snmp_version = v2c
#mibs =
#secName =
#secLevel =
#authProtocol =
#authPassword =
#privProtocol =
#privPassword =
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;I then restarted the UPS daemon by enabling/disabling UPS support in
the UI.  This should tell you that your UPS is connected.  You can
also check &lt;tt class="docutils literal"&gt;/var/log/ups.log&lt;/tt&gt; which shows for me&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;2021-08-09T18:14:51+10:00 synology synoups[11994]: =====log UPS status start=====
2021-08-09T18:14:51+10:00 synology synoups[11996]: device.mfr=
2021-08-09T18:14:51+10:00 synology synoups[11998]: device.model=
2021-08-09T18:14:51+10:00 synology synoups[12000]: battery.charge=
2021-08-09T18:14:51+10:00 synology synoups[12002]: battery.runtime=
2021-08-09T18:14:51+10:00 synology synoups[12004]: battery.voltage=13.80
2021-08-09T18:14:51+10:00 synology synoups[12006]: input.voltage=232.0
2021-08-09T18:14:51+10:00 synology synoups[12008]: output.voltage=232.0
2021-08-09T18:14:51+10:00 synology synoups[12010]: ups.load=31
2021-08-09T18:14:51+10:00 synology synoups[12012]: ups.status=OL
2021-08-09T18:14:51+10:00 synology synoups[12013]: =====log UPS status end=====
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This corresponds to the correct input/output voltages and state.&lt;/p&gt;
&lt;p&gt;Of course this is all unsupported and likely to break --
although I don't imagine many of these bits are updated very
frequently.  It will likely be OK until the UPS battery dies; at which
point I would recommend buying a better UPS on the Synology support
list.&lt;/p&gt;
</content><category term="hacks"></category></entry><entry><title>Lyte Portable Projector Investigation</title><link href="https://www.technovelty.org/toys/lyte-portable-projector-investigation.html" rel="alternate"></link><published>2021-08-05T11:00:00+10:00</published><updated>2021-08-05T11:00:00+10:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2021-08-05:/toys/lyte-portable-projector-investigation.html</id><summary type="html">&lt;p&gt;I recently picked up this portable projector for a reasonable price.
It might also be called an &amp;quot;M5&amp;quot; projector, but I cannot find one
canonical source.  In terms of projection, it performs as well as a
5cm cube could be expected to.  They made a poor choice to eschew …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I recently picked up this portable projector for a reasonable price.
It might also be called an &amp;quot;M5&amp;quot; projector, but I cannot find one
canonical source.  In terms of projection, it performs as well as a
5cm cube could be expected to.  They made a poor choice to eschew
adding an external video input, which severely limits the device's
usefulness.&lt;/p&gt;
&lt;p&gt;The design is nice and getting into it is quite an effort.  There is
no wasted space!  After pulling off the rubber top covering and base,
you have to pry the decorative metal shielding off all sides to access
the screws to open it.  This almost unavoidably bends it so it will
never quite be the same.  To save you the bother, some photos:&lt;/p&gt;
&lt;a data-flickr-embed="true" href="https://www.flickr.com/photos/iwienand/albums/72157719636178008" title="Lyte Projector"&gt;&lt;img src="https://live.staticflickr.com/65535/51357179027_591427f048.jpg" width="640" height="480" alt="Lyte Projector"&gt;&lt;/a&gt;&lt;script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"&gt;&lt;/script&gt;&lt;p&gt;It is fairly locked down.  I found a couple of ways in; installing the
Disney+ app from the &amp;quot;Aptoide TV&amp;quot; store it ships with does not work,
but the app prompts you to update it, which sends you to an action
where you can then choose to open the Google Play store.  From there,
you can install things that work on its Android 7 OS.  This allowed
me to install a system-viewer app which revealed its specs:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;Android 7.1.2&lt;/li&gt;
&lt;li&gt;Build NHG47K&lt;/li&gt;
&lt;li&gt;1280x720 px&lt;/li&gt;
&lt;li&gt;4 Core ARMv7 rev 5 (v7l) 1200MHz&lt;/li&gt;
&lt;li&gt;Rockchip RK3128&lt;/li&gt;
&lt;li&gt;1GB RAM&lt;/li&gt;
&lt;li&gt;4.8GB Storage&lt;/li&gt;
&lt;li&gt;9000mAh (marked) batteries&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Another weird thing I found was that if you go into the custom
launcher &amp;quot;About&amp;quot; page under settings and keep clicking the &amp;quot;OK&amp;quot; button
on the version number, it will open the standard Android settings
page.  From there you can enable developer options.  I could not get
it connecting to ADB, although you perhaps need a USB OTG cable which
I didn't have.&lt;/p&gt;
&lt;p&gt;It has some sort of built-in Miracast app that I could not get
anything to detect.  It doesn't have the native Google app store; most
of the apps in the provided system don't work.  Somehow it runs
Netflix via a webview, which is hard to use.&lt;/p&gt;
&lt;p&gt;If it had HDMI input it would still be a useful little thing to plug
things into.  You could perhaps sideload some sort of apps to get the
screensharing working, and it can play media files off a USB stick or
network shares.  I don't believe there is any practical way to get a
more recent Android on this, leaving it on an accelerated path to
e-waste for all but the most boutique users.&lt;/p&gt;
</content><category term="toys"></category></entry><entry><title>Local qemu/kvm virtual machines, 2018</title><link href="https://www.technovelty.org/linux/local-qemukvm-virtual-machines-2018.html" rel="alternate"></link><published>2018-07-27T13:08:00+10:00</published><updated>2018-07-27T13:08:00+10:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2018-07-27:/linux/local-qemukvm-virtual-machines-2018.html</id><summary type="html">&lt;p&gt;For work I run a personal and a work VM on my laptop.  When I was at
VMware I dogfooded internal builds of Workstation which worked well,
but was always a challenge to have its additions consistently building
against latest kernels.  About five and a half years ago, the only
practical …&lt;/p&gt;</summary><content type="html">&lt;p&gt;For work I run a personal and a work VM on my laptop.  When I was at
VMware I dogfooded internal builds of Workstation which worked well,
but was always a challenge to have its additions consistently building
against latest kernels.  About five and a half years ago, the only
practical alternative was VirtualBox.  IIRC &lt;a class="reference external" href="https://www.spice-space.org/"&gt;SPICE&lt;/a&gt; maybe didn't even exist or was very early,
and while VNC is OK for fiddling with something, it is completely impractical
for primary daily use.&lt;/p&gt;
&lt;p&gt;VirtualBox is fine, but there is the promised land of all the great
features of qemu/kvm and many recent improvements in 3D integration
always calling.  I'm trying all this on my Fedora 28 host, with a
Fedora 28 guest (which has been in-place upgraded since Fedora 19), so
everything is pretty recent.  Periodically I try this conversion
again, but, spoiler alert, have not yet managed to get things quite
right.&lt;/p&gt;
&lt;p&gt;As I happened to close an IRC window, my client somehow seemed to
crash X11.  How odd ... but, I thought, everything has just disappeared
anyway; I might as well try switching again.&lt;/p&gt;
&lt;p&gt;Image conversion has become much easier.  My primary VM has a number
of snapshots, so I used the VirtualBox GUI to clone the VM and
followed the prompts to create the clone with squashed snapshots.
Then simply convert the VDI to a RAW image with&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;$&lt;span class="w"&gt; &lt;/span&gt;qemu-img&lt;span class="w"&gt; &lt;/span&gt;convert&lt;span class="w"&gt; &lt;/span&gt;-p&lt;span class="w"&gt; &lt;/span&gt;-f&lt;span class="w"&gt; &lt;/span&gt;vdi&lt;span class="w"&gt; &lt;/span&gt;-O&lt;span class="w"&gt; &lt;/span&gt;raw&lt;span class="w"&gt; &lt;/span&gt;image.vdi&lt;span class="w"&gt; &lt;/span&gt;image.raw
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note if you forget the &lt;tt class="docutils literal"&gt;-p&lt;/tt&gt; progress flag, send the pid a &lt;tt class="docutils literal"&gt;SIGUSR1&lt;/tt&gt; to
get it to print its progress.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="https://virt-manager.org/"&gt;virt-manager&lt;/a&gt; has come a long way too.
Creating a new VM was trivial.  I wanted to make sure I was using all
the latest SPICE GL etc. stuff.  Here I hit some problems with what
seemed to be permission denials on &lt;tt class="docutils literal"&gt;drm&lt;/tt&gt; devices before even getting
the machine started.  Something suggested using libvirt in session
mode, with the &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;qemu:///session&lt;/span&gt;&lt;/tt&gt; URL -- which seemed more like what
I want anyway (a VM for only my user).  I tried that, put the
converted raw image in my home directory and the VM would boot.  Yay!&lt;/p&gt;
&lt;p&gt;It was a bit much to expect it to work straight away; while GRUB did
start, it couldn't find the root disks.  In hindsight, you should
probably generate a non-host specific &lt;tt class="docutils literal"&gt;initramfs&lt;/tt&gt; before converting
the disk, so that it has a larger selection of drivers to find the
boot devices (especially the modern virtio drivers).  On Fedora that
would be something like&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;sudo&lt;span class="w"&gt; &lt;/span&gt;dracut&lt;span class="w"&gt; &lt;/span&gt;--no-hostonly&lt;span class="w"&gt; &lt;/span&gt;--regenerate-all&lt;span class="w"&gt; &lt;/span&gt;-f
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;As it turned out, I &amp;quot;simply&amp;quot; attached a live-cd and booted into that,
then chrooted into my old VM and regenerated the &lt;tt class="docutils literal"&gt;initramfs&lt;/tt&gt; for the
latest kernel manually.  After this the system could find the LVM
volumes in the image and would boot.&lt;/p&gt;
&lt;p&gt;After a fiddly start, I was hopeful.  The guest kernel &lt;tt class="docutils literal"&gt;dmesg&lt;/tt&gt; DRM
sections showed everything was looking good for 3D support, along with
the &lt;tt class="docutils literal"&gt;glxinfo&lt;/tt&gt; showing all the &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;virtio-gpu&lt;/span&gt;&lt;/tt&gt; stuff looking correct.
However, I could not get what I hoped was trivial automatic window
resizing happening no matter what.  After a bunch of searching,
ensuring my agents were running correctly, etc. it turns out that has
to be implemented by the window-manager now, and it is not supported
by my preferred XFCE (see
&lt;a class="reference external" href="https://bugzilla.redhat.com/show_bug.cgi?id=1290586"&gt;https://bugzilla.redhat.com/show_bug.cgi?id=1290586&lt;/a&gt;).  Note you
can do this manually with &lt;tt class="docutils literal"&gt;xrandr &lt;span class="pre"&gt;--output&lt;/span&gt; &lt;span class="pre"&gt;Virtual-1&lt;/span&gt; &lt;span class="pre"&gt;--auto&lt;/span&gt;&lt;/tt&gt; to get
it to resize, but that's rather annoying.&lt;/p&gt;
&lt;p&gt;I thought that it is 2018 and I could live with GNOME, so I installed
that.  Then I tried to ping something, and got another selinux denial
(on the host) from &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;qemu-system-x86&lt;/span&gt;&lt;/tt&gt; creating &lt;tt class="docutils literal"&gt;icmp_socket&lt;/tt&gt;.  I am
guessing this has to do with the interaction between libvirt session
mode and the usermode networking device (filed
&lt;a class="reference external" href="https://bugzilla.redhat.com/show_bug.cgi?id=1609142"&gt;https://bugzilla.redhat.com/show_bug.cgi?id=1609142&lt;/a&gt;).  I figured
I'd limp along with ICMP and look into details later...&lt;/p&gt;
&lt;p&gt;Finally when I moved the window to my portrait-mode external monitor,
the SPICE window expanded but the internal VM resolution would not
expand to the full height.  It looked like it was taking the height
from the portrait-orientation width.&lt;/p&gt;
&lt;p&gt;Unfortunately, forced swapping of environments and still having
two/three non-trivial bugs to investigate exceeded my practical time
to fiddle around with all this.  I'll stick with VirtualBox for a
little longer; 2020 might be the year!&lt;/p&gt;
</content><category term="linux"></category></entry><entry><title>uwsgi; oh my!</title><link href="https://www.technovelty.org/openstack/uwsgi-oh-my.html" rel="alternate"></link><published>2018-07-09T09:35:00+10:00</published><updated>2018-07-09T09:35:00+10:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2018-07-09:/openstack/uwsgi-oh-my.html</id><summary type="html">&lt;p&gt;The world of Python based web applications, WSGI, its interaction
with &lt;cite&gt;uwsgi&lt;/cite&gt; and various deployment methods can quickly turn into an
incredible array of confusingly named acronym soup.  If you jump
straight into the &lt;a class="reference external" href="https://uwsgi-docs.readthedocs.io/en/latest/"&gt;uwsgi documentation&lt;/a&gt; it is almost certain
you will get lost before you start!&lt;/p&gt;
&lt;p&gt;Below tries to …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The world of Python based web applications, WSGI, its interaction
with &lt;cite&gt;uwsgi&lt;/cite&gt; and various deployment methods can quickly turn into an
incredible array of confusingly named acronym soup.  If you jump
straight into the &lt;a class="reference external" href="https://uwsgi-docs.readthedocs.io/en/latest/"&gt;uwsgi documentation&lt;/a&gt; it is almost certain
you will get lost before you start!&lt;/p&gt;
&lt;p&gt;Below tries to lay out a primer for the foundations of application
deployment within &lt;a class="reference external" href="https://devstack.org"&gt;devstack&lt;/a&gt;; a tool for
creating a self-contained OpenStack environment for testing and
interactive development.  However, it is hopefully of more general
interest for those new to some of these concepts too.&lt;/p&gt;
&lt;div class="section" id="wsgi"&gt;
&lt;h2&gt;WSGI&lt;/h2&gt;
&lt;p&gt;Let's start with WSGI.  Fully described in &lt;a class="reference external" href="https://www.python.org/dev/peps/pep-0333/"&gt;PEP 333 -- Python Web
Server Gateway Interface&lt;/a&gt;, the core concept is a
standardised way for a Python program to be called in response to a
web request.  In essence, it bundles the parameters from the incoming
request into known objects, and gives you an object to put data into
that will get back to the requesting client.  The &amp;quot;simplest
application&amp;quot;, taken directly from the PEP and reproduced below, highlights this
perfectly:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;simple_app&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start_response&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="sd"&gt;&amp;quot;&amp;quot;&amp;quot;Simplest possible application object&amp;quot;&amp;quot;&amp;quot;&lt;/span&gt;
     &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;200 OK&amp;#39;&lt;/span&gt;
     &lt;span class="n"&gt;response_headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Content-type&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;text/plain&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
     &lt;span class="n"&gt;start_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response_headers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Hello world!&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You can start building frameworks on top of this while maintaining
broad interoperability as you build your application.  There is plenty
more to it, but that's all you need to follow for now.&lt;/p&gt;
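&lt;p&gt;To see the calling convention from the server's side, here is a
minimal sketch (not from the PEP) that drives such an application with a
stub &lt;tt class="docutils literal"&gt;start_response&lt;/tt&gt;; the &lt;tt class="docutils literal"&gt;environ&lt;/tt&gt; contents here are
illustrative, and the body is bytes per PEP 3333, the Python 3 update of
PEP 333:&lt;/p&gt;

```python
def simple_app(environ, start_response):
    """Simplest possible application object (bodies are bytes per PEP 3333)."""
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return [b'Hello world!\n']

# A WSGI server ultimately just does something like this:
captured = {}
def start_response(status, headers):
    # The server records the status and headers to send back to the client.
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(simple_app({'REQUEST_METHOD': 'GET'}, start_response))
```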
&lt;/div&gt;
&lt;div class="section" id="using-wsgi"&gt;
&lt;h2&gt;Using WSGI&lt;/h2&gt;
&lt;p&gt;Your WSGI based application needs to get a request from somewhere.
We'll refer to the diagram below for discussions of how WSGI based
applications can be deployed.&lt;/p&gt;
&lt;img alt="Overview of some WSGI deployment methods" class="img-responsive" src="https://www.technovelty.org/images/uwsgi.png" /&gt;
&lt;p&gt;In general, this is illustrating how an API end-point
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;http://service.com/api/&lt;/span&gt;&lt;/tt&gt; might be connected together to an underlying
WSGI implementation written in Python (&lt;tt class="docutils literal"&gt;web_app.py&lt;/tt&gt;).  Of course,
there are going to be layers and frameworks and libraries and heavens
knows what else in any real deployment.  We're just concentrating on
Apache integration -- the client request hits Apache first and then
gets handled as described below.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="cgi"&gt;
&lt;h2&gt;CGI&lt;/h2&gt;
&lt;p&gt;Starting with &lt;tt class="docutils literal"&gt;1&lt;/tt&gt; in the diagram above, we see CGI or &amp;quot;Common
Gateway Interface&amp;quot;.  This is the oldest and most generic method of a
web server calling an external application in response to an incoming
request.  The details of the request are put into environment
variables and whatever process is configured to respond to that URL is
&lt;tt class="docutils literal"&gt;fork()&lt;/tt&gt; -ed.  In essence, whatever comes back from &lt;tt class="docutils literal"&gt;stdout&lt;/tt&gt; is sent
back to the client and then the process is killed.  The next request
comes in and it starts all over again.&lt;/p&gt;
&lt;p&gt;This can certainly be done with WSGI; above we illustrate that you'd
have a framework layer that would translate the environment variables
into the Python &lt;tt class="docutils literal"&gt;environ&lt;/tt&gt; object and connect up the process's
output to gather the response.&lt;/p&gt;
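&lt;p&gt;That translation layer can be sketched as below.  This is a
much-simplified illustration of what the standard library's
&lt;tt class="docutils literal"&gt;wsgiref.handlers.CGIHandler&lt;/tt&gt; does for real; the helper name and the
defaults chosen here are hypothetical:&lt;/p&gt;

```python
def build_wsgi_environ(cgi_env):
    """Translate CGI-style environment variables into a WSGI environ dict.

    A simplified sketch of the framework layer described above; in a real
    CGI process you would pass os.environ.
    """
    environ = dict(cgi_env)
    environ.setdefault('REQUEST_METHOD', 'GET')
    environ.setdefault('PATH_INFO', '/')
    # WSGI-specific keys the application is entitled to rely on
    environ['wsgi.version'] = (1, 0)
    environ['wsgi.url_scheme'] = 'http'
    return environ

environ = build_wsgi_environ({'REQUEST_METHOD': 'POST', 'PATH_INFO': '/api'})
```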
&lt;p&gt;The advantage of CGI is that it is the lowest common denominator of
&amp;quot;call this when a request comes in&amp;quot;.  It works with anything you can
&lt;tt class="docutils literal"&gt;exec&lt;/tt&gt;, from shell scripts to compiled binaries.  However, forking
processes is expensive, and parsing the environment variables involves
a lot of fiddly string processing.  These become issues as you scale.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="modules"&gt;
&lt;h2&gt;Modules&lt;/h2&gt;
&lt;p&gt;Illustrated by &lt;tt class="docutils literal"&gt;2&lt;/tt&gt; above, it is possible to embed a Python
interpreter directly into the web server and call the application from
there.  This is broadly how &lt;tt class="docutils literal"&gt;mod_python&lt;/tt&gt;, &lt;tt class="docutils literal"&gt;mod_wsgi&lt;/tt&gt; and
&lt;tt class="docutils literal"&gt;mod_uwsgi&lt;/tt&gt; all work.&lt;/p&gt;
&lt;p&gt;The overheads of marshaling arguments into strings via environment
variables, then unmarshaling them back to Python objects, can be
removed in this model.  The web server handles the tricky parts of
communicating with the remote client, and the module &amp;quot;just&amp;quot; needs to
translate the internal structures of the request and response into the
Python WSGI representation.  The web server can manage the response
handlers directly, leading to further opportunities for performance
optimisations (more persistent state, etc.).&lt;/p&gt;
&lt;p&gt;The problem with this model is that your web server becomes part of
your application.  This may sound a bit silly -- of course if the web
server doesn't take client requests nothing works.  However, there are
several situations where (as usual in computer science) a layer of
abstraction can be of benefit.  Being part of the web server means you
have to write to its APIs and, in general, its view of the world.  For
example, &lt;tt class="docutils literal"&gt;mod_uwsgi&lt;/tt&gt; documentation says&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;This is the original module.  It is solid, but incredibly ugly and
does not follow a lot of apache coding convention style&amp;quot;.&lt;/p&gt;
&lt;p class="attribution"&gt;&amp;mdash;&lt;a class="reference external" href="https://uwsgi-docs.readthedocs.io/en/latest/Apache.html"&gt;uwsgi&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a class="reference external" href="http://modpython.org/"&gt;mod_python&lt;/a&gt; is deprecated with &lt;a class="reference external" href="https://modwsgi.readthedocs.io/en/develop/"&gt;mod_wsgi&lt;/a&gt; as the replacement.
These are obviously tied very closely to internal Apache concepts.&lt;/p&gt;
&lt;p&gt;In production environments, you need things like load-balancing,
high-availability and caching that all need to integrate into this
model.  Thus you will have to additionally ensure these various layers
all integrate directly with your web server.&lt;/p&gt;
&lt;p&gt;Since your application &lt;em&gt;is&lt;/em&gt; the web server, any time you make small
changes you essentially need to manage the whole web server; often
with a complete restart.  Devstack is a great example of this; where
you have 5-6 different WSGI-based services running to simulate your
OpenStack environment (compute service, network service, image
service, block storage, etc) but you are only working on one component
which you wish to iterate quickly on.  Stopping everything to update
one component can be tricky in both production and development.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="uwsgi"&gt;
&lt;h2&gt;uwsgi&lt;/h2&gt;
&lt;p&gt;Which brings us to &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; (I call this &amp;quot;micro-wsgi&amp;quot; but I don't know
if it is actually intended to be a μ).  &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; is a real Swiss Army
knife, and can be used in contexts that have nothing to do with Python
or WSGI -- which I believe is why you can get quite confused if you
just start looking at it in isolation.&lt;/p&gt;
&lt;p&gt;&lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; lets us combine some of the advantages of being part of the
web server with the advantages of abstraction.  &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; is a
complete pluggable network daemon framework, but we'll just discuss it
in one context illustrated by &lt;tt class="docutils literal"&gt;3&lt;/tt&gt;.&lt;/p&gt;
&lt;p&gt;In this model, the WSGI application runs separately from the web server,
within the embedded Python interpreter provided by the &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt;
daemon.  &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; is, in parts, a web server -- as illustrated it can
talk HTTP directly if you want it to, which can be exposed directly or
via a traditional proxy.&lt;/p&gt;
&lt;p&gt;By using the proxy extension &lt;tt class="docutils literal"&gt;mod_proxy_uwsgi&lt;/tt&gt; we can have the
advantage of being &amp;quot;inside&amp;quot; Apache and forwarding the requests via a
lightweight binary channel to the application back end.  In this
model, &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; provides a &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;uwsgi://&lt;/span&gt;&lt;/tt&gt; service using its internal
&lt;a class="reference external" href="https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html"&gt;protcol&lt;/a&gt; on a
private port.  The proxy module marshals the request into small
packets and forwards it to the given port.  &lt;tt class="docutils literal"&gt;uswgi&lt;/tt&gt; takes the
incoming request, quickly unmarshals it and feeds it into the WSGI
application running inside.  Data is sent back via similarly fast
channels as the response (note you can equally use file based Unix
sockets for local only communication).&lt;/p&gt;
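&lt;p&gt;For a feel of why this channel is so lightweight, here is a hedged
sketch of the framing described in the uwsgi protocol documentation: a
4-byte header followed by length-prefixed key/value strings, so the
backend can slice variables out with no string parsing.  The function
name is mine and the details are simplified:&lt;/p&gt;

```python
import struct

def pack_uwsgi_request(variables):
    """Pack request variables into a uwsgi-protocol packet (a sketch).

    Per the uwsgi protocol docs: a header of (modifier1, little-endian
    uint16 datasize, modifier2), then each variable as
    <u16 len><key><u16 len><value>.
    """
    body = b''
    for key, value in variables.items():
        k, v = key.encode('latin-1'), value.encode('latin-1')
        body += struct.pack('<H', len(k)) + k
        body += struct.pack('<H', len(v)) + v
    # modifier1=0 selects a standard WSGI request
    return struct.pack('<BHB', 0, len(body), 0) + body

packet = pack_uwsgi_request({'REQUEST_METHOD': 'GET', 'PATH_INFO': '/api'})
```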
&lt;p&gt;Now your application has a level of abstraction to your front end.  At
one extreme, you could swap out Apache for some other web server
completely and feed in requests just the same.  Or you can have Apache
start to load-balance out requests to different backend handlers
transparently.&lt;/p&gt;
&lt;p&gt;The model works very well for multiple applications living in the same
name-space.  For example, in the Devstack context, it's easy with
&lt;tt class="docutils literal"&gt;mod_proxy&lt;/tt&gt; to have Apache doing URL matching and separate out each
incoming request to its appropriate back end service; e.g.&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;http://service/identity&lt;/span&gt;&lt;/tt&gt; gets routed to Keystone running at &lt;tt class="docutils literal"&gt;localhost:40000&lt;/tt&gt;&lt;/li&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;http://service/compute&lt;/span&gt;&lt;/tt&gt;  gets sent to Nova at &lt;tt class="docutils literal"&gt;localhost:40001&lt;/tt&gt;&lt;/li&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;http://service/image&lt;/span&gt;&lt;/tt&gt; gets sent to glance at &lt;tt class="docutils literal"&gt;localhost:40002&lt;/tt&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and so on (you can see how this is exactly configured in
&lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/apache"&gt;lib/apache:write_uwsgi_config&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;When a developer makes a change they simply need to restart one
particular &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; instance with their change and the unified
front-end remains untouched.  In Devstack (as illustrated) the
&lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; processes are further wrapped into &lt;tt class="docutils literal"&gt;systemd&lt;/tt&gt; services
which facilitates easy life-cycle and log management.  Of course you
can imagine you start getting containers involved, then container
orchestrators, then clouds-on-clouds ...&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="conclusion"&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;There's no right or wrong way to deploy complex web applications.  But
using an Apache front end, proxying requests via fast channels to
isolated &lt;tt class="docutils literal"&gt;uwsgi&lt;/tt&gt; processes running individual WSGI-based
applications can provide both good performance and implementation
flexibility.&lt;/p&gt;
&lt;/div&gt;
</content><category term="openstack"></category></entry><entry><title>Thunderbird 54 external editor</title><link href="https://www.technovelty.org/linux/thunderbird-54-external-editor.html" rel="alternate"></link><published>2017-03-13T12:30:00+11:00</published><updated>2017-03-13T12:30:00+11:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2017-03-13:/linux/thunderbird-54-external-editor.html</id><summary type="html">&lt;p&gt;For many years I've used Thunderbird with &lt;a class="reference external" href="http://globs.org/articles.php?lng=en&amp;amp;pg=2"&gt;Alexandre Feblot's external
editor plugin&lt;/a&gt; to allow
me to edit mail with emacs.  Unfortunately it seems long unmaintained
and stopped working on a recent upgrade to Thunderbird 54 when some
deprecated interfaces were removed.  &lt;a class="reference external" href="https://github.com/bk2204/extedit"&gt;Brian M. Carlson&lt;/a&gt; seemed to have another version
which …&lt;/p&gt;</summary><content type="html">&lt;p&gt;For many years I've used Thunderbird with &lt;a class="reference external" href="http://globs.org/articles.php?lng=en&amp;amp;pg=2"&gt;Alexandre Feblot's external
editor plugin&lt;/a&gt; to allow
me to edit mail with emacs.  Unfortunately it seems long unmaintained
and stopped working on a recent upgrade to Thunderbird 54 when some
deprecated interfaces were removed.  &lt;a class="reference external" href="https://github.com/bk2204/extedit"&gt;Brian M. Carlson&lt;/a&gt; seemed to have another version
which also seemed to fail with latest Thunderbird.&lt;/p&gt;
&lt;p&gt;I have used my meagre Mozilla plugin skills to make an update at
&lt;a class="reference external" href="https://github.com/ianw/extedit/releases"&gt;https://github.com/ianw/extedit/releases&lt;/a&gt;.  Here you can download an
&lt;tt class="docutils literal"&gt;xpi&lt;/tt&gt; that passes the rigorous test-suite of ... works for me.&lt;/p&gt;
</content><category term="linux"></category></entry><entry><title>Zuul and Ansible in OpenStack CI</title><link href="https://www.technovelty.org/openstack/zuul-and-ansible-in-openstack-ci.html" rel="alternate"></link><published>2016-06-21T15:16:00+10:00</published><updated>2016-06-21T15:16:00+10:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2016-06-21:/openstack/zuul-and-ansible-in-openstack-ci.html</id><summary type="html">&lt;p&gt;In a &lt;a class="reference external" href="https://www.technovelty.org/openstack/image-building-in-openstack-ci.html"&gt;prior post&lt;/a&gt;,
I gave an overview of the OpenStack CI system and how jobs were
started.  In that I said&lt;/p&gt;
&lt;blockquote&gt;
(It is a gross oversimplification, but for the purposes of OpenStack
CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul
Version 3, under development …&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;In a &lt;a class="reference external" href="https://www.technovelty.org/openstack/image-building-in-openstack-ci.html"&gt;prior post&lt;/a&gt;,
I gave an overview of the OpenStack CI system and how jobs were
started.  In that I said&lt;/p&gt;
&lt;blockquote&gt;
(It is a gross oversimplification, but for the purposes of OpenStack
CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul
Version 3, under development, is working to remove the need for
Jenkins to be involved at all).&lt;/blockquote&gt;
&lt;p&gt;Well some recent security issues with Jenkins and other changes have
led to a roll-out of what is being called Zuul 2.5, which has indeed
removed Jenkins and makes extensive use of Ansible as the basis for
running CI tests in OpenStack.  Since I already had the diagram, it
seems worth updating it for the new reality.&lt;/p&gt;
&lt;div class="section" id="openstack-ci-overview"&gt;
&lt;h2&gt;OpenStack CI Overview&lt;/h2&gt;
&lt;p&gt;While the previous post was really focused on the image-building
components of the OpenStack CI system, the overview is the same but more
focused on the launchers that run the tests.&lt;/p&gt;
&lt;img alt="Overview of OpenStack CI with Zuul and Ansible" class="img-responsive" src="https://www.technovelty.org/images/openstack-ci-zuulv25.png" /&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;&lt;p class="first"&gt;The process starts when a developer uploads their code to
&lt;tt class="docutils literal"&gt;gerrit&lt;/tt&gt; via the &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;git-review&lt;/span&gt;&lt;/tt&gt; tool.  There is no further action
required on their behalf and the developer simply waits for
results of their jobs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Gerrit provides a JSON-encoded &amp;quot;fire-hose&amp;quot; output of everything
happening to it.  New reviews, votes, updates and more all get sent
out over this pipe.  &lt;a class="reference external" href="http://docs.openstack.org/infra/zuul/"&gt;Zuul&lt;/a&gt; is the overall scheduler
that subscribes itself to this information and is responsible for
managing the CI jobs appropriate for each change.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Zuul has a &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml"&gt;configuration&lt;/a&gt;
that tells it what jobs to run for what projects.  Zuul can do lots
of interesting things, but for the purposes of this discussion we
just consider that it puts the jobs it wants run into &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt;
for a launcher to consume.  &lt;a class="reference external" href="http://gearman.org/"&gt;gearman&lt;/a&gt; is a
job-server; as they explain it &lt;em&gt;&amp;quot;[gearman] provides a generic
application framework to farm out work to other machines or
processes that are better suited to do the work&amp;quot;.&lt;/em&gt; Zuul puts into
&lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; basically a tuple &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;(job-name,&lt;/span&gt; &lt;span class="pre"&gt;node-type)&lt;/span&gt;&lt;/tt&gt; for each
job it wants run, specifying the unique job name to run and what
type of node it should be run on.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;A group of Zuul &lt;a class="reference external" href="http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/launcher/ansiblelaunchserver.py"&gt;launchers&lt;/a&gt;
are subscribed to &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; as workers.  It is these Zuul
launchers that will consume the job requests from the queue and
actually get the tests running.  However, a launcher needs two
things to be able to run a job — a job definition (what to actually
do) and a worker node (somewhere to do it).&lt;/p&gt;
&lt;p&gt;The first part — what to do — is provided by &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs"&gt;job-definitions&lt;/a&gt;
stored in external YAML files.  The Zuul launcher knows how to
process these files (with some help from &lt;a class="reference external" href="http://docs.openstack.org/infra/jenkins-job-builder/"&gt;Jenkins Job Builder&lt;/a&gt;, which
despite the name is not outputting XML files for Jenkins to
consume, but is being used to help parse templates and macros
within the generically defined job definitions).  Each Zuul
launcher gets these definitions pushed to it constantly by Puppet,
thus each launcher knows about all the jobs it can run
automatically.  Of course Zuul also knows about these same job
definitions; this is the &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;job-name&lt;/span&gt;&lt;/tt&gt; part of the tuple we said it
put into &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt;.&lt;/p&gt;
&lt;p&gt;The second part — somewhere to run the test — takes some more
explaining.  To the next point...&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Several cloud companies donate capacity in their clouds for
OpenStack to run CI tests.  Overall, this capacity is managed by a
customized management tool called &lt;a class="reference external" href="http://docs.openstack.org/infra/system-config/nodepool.html"&gt;nodepool&lt;/a&gt;
(you can see the details of this capacity at any given time by
checking the &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml"&gt;nodepool configuration&lt;/a&gt;).
Nodepool watches the &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; queue and sees what requests are
coming out of Zuul.  It looks at &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;node-type&lt;/span&gt;&lt;/tt&gt; of jobs in the queue
(i.e. what platform the job has requested to run on) and decides
what types of nodes need to start and which cloud providers have
capacity to satisfy demand.&lt;/p&gt;
&lt;p&gt;Nodepool will start fresh virtual machines (from images built daily
as described in the prior post), monitor their start-up and, when
they're ready, put a new &amp;quot;assignment job&amp;quot; back into gearman with
the details of the fresh node.  One of the active Zuul launchers
will pick up this assignment job and register the new node to
itself.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;At this point, the Zuul launcher has what it needs to actually get
jobs started.  With a fresh node registered to it and waiting for
something to do, the Zuul launcher can advertise its ability to
consume one of the waiting jobs from the gearman queue.  For
example, if a &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;ubuntu-trusty&lt;/span&gt;&lt;/tt&gt; node is provided to the Zuul
launcher, the launcher can now consume from &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; any job it
knows about that is intended to run on an &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;ubuntu-trusty&lt;/span&gt;&lt;/tt&gt; node
type.  If you're looking at the &lt;a class="reference external" href="http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/launcher/ansiblelaunchserver.py"&gt;launcher code&lt;/a&gt;
this is driven by the &lt;tt class="docutils literal"&gt;NodeWorker&lt;/tt&gt; class — you can see this being
created in response to an assignment via
&lt;tt class="docutils literal"&gt;LaunchServer.assignNode&lt;/tt&gt;.&lt;/p&gt;
&lt;p&gt;To actually run the job — where the &amp;quot;job hits the metal&amp;quot; as it were
— the Zuul launcher will dynamically construct an &lt;a class="reference external" href="http://docs.ansible.com/ansible/playbooks.html"&gt;Ansible playbook&lt;/a&gt; to run.  This
playbook is a concatenation of common setup and teardown operations
along with the actual test scripts the jobs wants to run.  Using
Ansible to run the job means all the flexibility an orchestration
tool provides is now available to the launcher.  For example, there
is a custom &lt;a class="reference external" href="http://git.openstack.org/cgit/openstack-infra/zuul/tree/zuul/ansible/library/zuul_console.py"&gt;console streamer&lt;/a&gt;
library that allows us to live-stream the console output for the
job over a plain TCP connection, and there is the possibility to
use projects like &lt;a class="reference external" href="https://ara.readthedocs.io/en/latest/"&gt;ARA&lt;/a&gt;
for visualisation of CI runs.  In the future, Ansible will allow
for better coordination when running multiple-node testing jobs —
after all, this is what orchestration tools such as Ansible are
made for!  While the Ansible run can be fairly heavyweight
(especially when you're talking about launching thousands of jobs
an hour), the system scales horizontally with more launchers able
to consume more work easily.&lt;/p&gt;
&lt;p&gt;When checking your job results on &lt;a class="reference external" href="http://logs.openstack.org"&gt;logs.openstack.org&lt;/a&gt; you will
see a &lt;tt class="docutils literal"&gt;_zuul_ansible&lt;/tt&gt; directory now which contains copies of the
inventory, playbooks and other related files that the launcher used
to do the test run.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Eventually, the test will finish.  The Zuul launcher will put the
result back into &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt;, which Zuul will consume (log copying
is interesting but a topic for another day).  The testing node will
be released back to nodepool, which destroys it and starts all over
again — nodes are not reused and also have no sensitive details on
them, as they are essentially publicly accessible.  Zuul will wait
for the results of all jobs for the change and post the result back
to Gerrit; it either gives a positive vote or the dreaded negative
vote if required jobs failed (it also handles merges to git, but
that is also a topic for another day).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Work will continue within OpenStack Infrastructure to further enhance
Zuul; including better support for multi-node jobs and &amp;quot;in-project&amp;quot;
job definitions (similar to the &lt;a class="reference external" href="https://travis-ci.org/"&gt;https://travis-ci.org/&lt;/a&gt; model);
for full details see &lt;a class="reference external" href="https://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html"&gt;the spec&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
</content><category term="openstack"></category></entry><entry><title>Image building in OpenStack CI</title><link href="https://www.technovelty.org/openstack/image-building-in-openstack-ci.html" rel="alternate"></link><published>2016-04-05T14:03:00+10:00</published><updated>2016-04-05T14:03:00+10:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2016-04-05:/openstack/image-building-in-openstack-ci.html</id><summary type="html">&lt;p&gt;Also titled &lt;em&gt;minimal images - maximal effort&lt;/em&gt;!&lt;/p&gt;
&lt;p&gt;The OpenStack Infrastructure Team manages a large
continuous-integration system that provides the broad range of testing
the OpenStack project requires.  Tests are run thousands of times a
day across every project, on multiple platforms and on multiple
cloud-providers.  There are essentially no manual steps …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Also titled &lt;em&gt;minimal images - maximal effort&lt;/em&gt;!&lt;/p&gt;
&lt;p&gt;The OpenStack Infrastructure Team manages a large
continuous-integration system that provides the broad range of testing
the OpenStack project requires.  Tests are run thousands of times a
day across every project, on multiple platforms and on multiple
cloud-providers.  There are essentially no manual steps in any part of
the process, with every component being automated via scripting, a few
home-grown tools and liberal doses of Puppet and Ansible.  More
importantly, every component resides in the public &lt;tt class="docutils literal"&gt;git&lt;/tt&gt; trees right
alongside every other OpenStack project, with contributions actively
encouraged.&lt;/p&gt;
&lt;p&gt;As with any large system, technical debt can build up and start to
affect stability and long-term maintainability.  OpenStack
Infrastructure can see some of this debt accumulating as more testing
environments across more cloud-providers are being added to support
ever-growing testing demands.  Thus a strong focus of recent work has
been consolidating testing platforms to be smaller, better defined and
more maintainable.  This post illustrates some of the background to
the issues and describes how these new platforms are more reliable and
maintainable.&lt;/p&gt;
&lt;div class="section" id="openstack-ci-overview"&gt;
&lt;h2&gt;OpenStack CI Overview&lt;/h2&gt;
&lt;p&gt;Before getting into details, it's a good idea to get a basic
big-picture conceptual model of how OpenStack CI testing works.  If
you look at the following diagram and follow the numbers with the
explanation below, hopefully you'll have all the context you need.&lt;/p&gt;
&lt;img alt="Overview of OpenStack CI" class="img-responsive" src="/images/openstack-ci.png" /&gt;
&lt;ol class="arabic"&gt;
&lt;li&gt;&lt;p class="first"&gt;The developer uploads their code to &lt;tt class="docutils literal"&gt;gerrit&lt;/tt&gt; via the
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;git-review&lt;/span&gt;&lt;/tt&gt; tool.  There is no further action required on their
behalf and the developer simply waits for results.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Gerrit provides a JSON-encoded &amp;quot;firehose&amp;quot; output of everything
happening to it.  New reviews, votes, updates and more all get sent
out over this pipe.  &lt;a class="reference external" href="http://docs.openstack.org/infra/zuul/"&gt;Zuul&lt;/a&gt; is the overall scheduler
that subscribes itself to this information and is responsible for
managing the CI jobs appropriate for each change.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Zuul has a &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml"&gt;configuration&lt;/a&gt;
that tells it what jobs to run for what projects.  Zuul can do lots
of interesting things, but for the purposes of this discussion we
just consider that it puts the jobs it wants run into &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt;
for a Jenkins master to consume.  &lt;a class="reference external" href="http://gearman.org/"&gt;gearman&lt;/a&gt;
is a job-server; as they explain it &lt;em&gt;&amp;quot;[gearman] provides a generic
application framework to farm out work to other machines or
processes that are better suited to do the work&amp;quot;.&lt;/em&gt; Zuul puts into
&lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; basically a tuple &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;(job-name,&lt;/span&gt; &lt;span class="pre"&gt;node-type)&lt;/span&gt;&lt;/tt&gt; for each
job it wants run, specifying the unique job name to run and what
type of node it should be run on.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;A group of &lt;a class="reference external" href="https://jenkins.io/"&gt;Jenkins&lt;/a&gt; masters are subscribed
to &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; as workers.  It is these Jenkins masters that will
consume the job requests from the queue and actually get the tests
running.  However, Jenkins needs two things to be able to run a job
— a job definition (what to actually do) and a slave node
(somewhere to do it).&lt;/p&gt;
&lt;p&gt;The first part — what to do — is provided by &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs"&gt;job-definitions&lt;/a&gt;
stored in external YAML files and processed by &lt;a class="reference external" href="http://docs.openstack.org/infra/jenkins-job-builder/"&gt;Jenkins Job Builder&lt;/a&gt; (&lt;tt class="docutils literal"&gt;jjb&lt;/tt&gt;)
in to job configurations for Jenkins.  Each Jenkins master gets
these definitions pushed to it constantly by Puppet; thus each
Jenkins master instance knows about all the jobs it can run
automatically.  Zuul also knows about these job definitions; this
is the &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;job-name&lt;/span&gt;&lt;/tt&gt; part of the tuple we said it put into
&lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt;.&lt;/p&gt;
&lt;p&gt;The second part — somewhere to run the test — takes some more
explaining.  On to the next point...&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Several cloud companies donate capacity in their clouds for
OpenStack to run CI tests.  Overall, this capacity is managed by a
customised orchestration tool called &lt;a class="reference external" href="http://docs.openstack.org/infra/system-config/nodepool.html"&gt;nodepool&lt;/a&gt;.
Nodepool watches the &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; queue and sees what requests are
coming out of Zuul.  It looks at &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;node-type&lt;/span&gt;&lt;/tt&gt; of jobs in the queue
and decides what types of nodes need to start and which cloud
providers have capacity to satisfy demand.  Nodepool will monitor
the start-up of the virtual-machines and register the new nodes to
the Jenkins master instances.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;At this point, the Jenkins master has what it needs to actually get
jobs started.  When nodepool registers a host to a Jenkins master
as a slave, the Jenkins master can then advertise its ability to
consume jobs.  For example, if a &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;ubuntu-trusty&lt;/span&gt;&lt;/tt&gt; node is provided
to the Jenkins master instance by nodepool, Jenkins can now consume
from &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt; any job it knows about that is intended to run on
an &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;ubuntu-trusty&lt;/span&gt;&lt;/tt&gt; slave.  Jenkins will run the job as defined in
the job-definition on that host — ssh-ing in, running scripts,
copying the logs and waiting for the result.  (It is a gross
oversimplification, but for the purposes of OpenStack CI, Jenkins
is pretty much used as a glorified ssh/scp wrapper.  Zuul Version
3, under development, is working to remove the need for Jenkins to
be involved at all.  &lt;strong&gt;2016-06&lt;/strong&gt; Jenkins has been removed from the
OpenStack CI pipeline and largely replaced with Ansible.  For
details see &lt;a class="reference external" href="https://www.technovelty.org/openstack/zuul-and-ansible-in-openstack-ci.html"&gt;this post&lt;/a&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p class="first"&gt;Eventually, the test will finish.  The Jenkins master will put the
result back into &lt;tt class="docutils literal"&gt;gearman&lt;/tt&gt;, which Zuul will consume.  The slave
will be released back to nodepool, which destroys it and starts all
over again (slaves are not reused and also have no sensitive
details on them, as they are essentially publicly accessible).
Zuul will wait for the results of all jobs for the change and post
the result back to Gerrit; it either gives a positive vote or the
dreaded negative vote if required jobs failed (it also handles
merges to git, but we'll ignore that bit for now).&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
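&lt;p&gt;The hand-off above can be sketched very loosely in Python.  This is purely illustrative (the real components speak the gearman protocol, and every name below is invented), but it shows the shape of the &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;(job-name,&lt;/span&gt; &lt;span class="pre"&gt;node-type)&lt;/span&gt;&lt;/tt&gt; tuple flowing from Zuul to a Jenkins master:&lt;/p&gt;

```python
import json
import queue

# Loose illustration of the Zuul -> gearman -> Jenkins hand-off;
# all names here are invented for the sketch.
job_queue = queue.Queue()

def zuul_submit(job_name, node_type):
    """Zuul's side: enqueue a (job-name, node-type) request."""
    job_queue.put(json.dumps({"job": job_name, "node": node_type}))

def jenkins_consume(available_node_types):
    """A Jenkins master's side: take a request it has a slave for."""
    request = json.loads(job_queue.get())
    if request["node"] in available_node_types:
        return request
    return None

zuul_submit("gate-tempest-dsvm-full", "ubuntu-trusty")
job = jenkins_consume({"ubuntu-trusty"})
```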
&lt;p&gt;In a nutshell, that is the CI work-flow that happens
thousands-upon-thousands of times a day keeping OpenStack humming
along.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="image-builds"&gt;
&lt;h2&gt;Image builds&lt;/h2&gt;
&lt;p&gt;So far we have glossed over how nodepool actually creates the images
that it hands out for testing.  Image creation, illustrated in step
&lt;em&gt;8&lt;/em&gt; above, contains a lot of important details.&lt;/p&gt;
&lt;p&gt;Firstly, what are these images and why build them at all?  These
images are where the &amp;quot;rubber hits the road&amp;quot; — they are instantiated
into the virtual-machines that will run DevStack, unit-testing or
whatever else someone might want to test.&lt;/p&gt;
&lt;p&gt;The main goal is to provide a stable and consistent environment in
which to run a wide-range of tests.  A full OpenStack deployment
results in hundreds of libraries and millions of lines of code all
being exercised at once.  The testing-images are right at the bottom
of all this, so any instability or inconsistency affects everyone;
leading to constant fire-fighting and major inconvenience as all
forward-progress stops when CI fails.  We want to support a wide
number of platforms interesting to developers such as Ubuntu, Debian,
CentOS and Fedora, and we also want to make it easy to handle new
releases and add other platforms.  We want to ensure this can be
maintained without too much day-to-day hands-on.&lt;/p&gt;
&lt;p&gt;Caching is a big part of the role of these images.  With thousands of
jobs going on every day, an occasional network blip is not a minor
annoyance, but creates constant and difficult to debug failures.  We
want jobs to rely on as few external resources as possible so tests
are consistent and stable.  This means caching things like the git
trees tests might use (OpenStack just broke the 1000 repository mark),
VM images, packages and other common bits and pieces.  Obviously a
cache is only as useful as the data in it, so we build these images up
every day to keep them fresh.&lt;/p&gt;
&lt;div class="section" id="snapshot-images"&gt;
&lt;h3&gt;Snapshot images&lt;/h3&gt;
&lt;p&gt;If you log into almost any cloud-provider's interface, they almost
certainly have a range of pre-canned images of common distributions
for you to use.  At first, the base images for OpenStack CI testing
came from what the cloud-providers had as their public image types.
However, over time, a number of issues emerged:&lt;/p&gt;
&lt;ol class="arabic simple"&gt;
&lt;li&gt;No two images, even for the same distribution or platform, are the
same.  Every provider seems to do something &amp;quot;helpful&amp;quot; to the images
which requires some sort of workaround.&lt;/li&gt;
&lt;li&gt;Providers rarely leave these images alone.  One day you would boot
the image to find a bunch of Python libraries pip-installed, or a
mount-point moved, or base packages removed (all happened).&lt;/li&gt;
&lt;li&gt;Even if the changes &lt;em&gt;are&lt;/em&gt; helpful, it does not make for consistent
and reproducible testing if every time you run, you're on a
slightly different base system.&lt;/li&gt;
&lt;li&gt;Providers don't have some images you want (like a latest Fedora),
or have different versions, or different point releases.  All
update asynchronously whenever they get around to it.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So the original incarnations of OpenStack CI images were based on
these public images.  Nodepool would start one of these provider
images and then run a series of scripts on it — these scripts would
firstly try to work-around any quirks to make the images look as
similar as possible across providers, and then do the caching, setup
things like authorized keys and finish other configuration tasks.
Nodepool would then snapshot this prepared image and start
instantiating VMs from these snapshots into the pool for testing.  If
you hear someone talking about a &amp;quot;snapshot image&amp;quot; in OpenStack CI
context, that's likely what they are referring to.&lt;/p&gt;
&lt;p&gt;Apart from the stability of the underlying images, the other issue you
hit with this approach is that the number of images being built starts
to explode when you take into account multiple providers and multiple
regions.  Even with just Rackspace and the (now defunct) HP Cloud we
would end up creating snapshot images for 4 or 5 platforms across a
total of about 8 regions — meaning anywhere up to 40 separate image
builds happening daily (you can see how ridiculous it was getting in
the &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/system-config/commit/modules/openstack_project/templates/nodepool/nodepool.logging.conf.erb?id=3bafd2c691cb5a49d821b77863d9200afc9c7312"&gt;logging configuration&lt;/a&gt;
used at the time).  It was almost a &lt;em&gt;fait accompli&lt;/em&gt; that some of these
would fail every day — nodepool can deal with this by reusing old
snapshots — but this leads to an inconsistent and heterogeneous testing
environment.&lt;/p&gt;
&lt;p&gt;Naturally there was a desire for something more consistent — a single
image that could run across multiple providers in a much more tightly
controlled manner.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div class="section" id="upstream-based-builds"&gt;
&lt;h2&gt;Upstream-based builds&lt;/h2&gt;
&lt;p&gt;Upstream distributions do provide &amp;quot;cloud-images&amp;quot;, which are usually
pre-canned &lt;tt class="docutils literal"&gt;.qcow2&lt;/tt&gt; format files suitable for uploading to your
average cloud.  So the &lt;a class="reference external" href="http://docs.openstack.org/developer/diskimage-builder/"&gt;diskimage-builder&lt;/a&gt; tool was
put into use creating images for nodepool, based on these
upstream-provided images.  In essence, &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;diskimage-builder&lt;/span&gt;&lt;/tt&gt; uses a
series of elements (each, as the name suggests, designed to do one
thing) that allow you to build a completely customised image.  It
handles all the messy bits of laying out the image file, tries to be
smart about caching large downloads, and takes care of final steps like
conversion to &lt;tt class="docutils literal"&gt;qcow2&lt;/tt&gt; or &lt;tt class="docutils literal"&gt;vhd&lt;/tt&gt;.&lt;/p&gt;
&lt;p&gt;nodepool has used &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;diskimage-builder&lt;/span&gt;&lt;/tt&gt; to create customised images
based upon the upstream releases for some time.  These are better, but
still have some issues for the CI environment:&lt;/p&gt;
&lt;ol class="arabic simple"&gt;
&lt;li&gt;You still really have no control over what does or does not go into
the upstream base images.  You don't notice a change until you
deploy a new image based on an updated version and things break.&lt;/li&gt;
&lt;li&gt;The images still start with a fair amount of &amp;quot;stuff&amp;quot; on them.  For
example &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;cloud-init&lt;/span&gt;&lt;/tt&gt; is a rather large Python program and has a
fair few dependencies.  These dependencies can both conflict with
parts of OpenStack or end up tacitly hiding real test requirements
(the test doesn't specify it, but the package is there as part of
another base dependency.  Things then break when the base
dependencies change).  The whole idea of the CI is that (as much as
possible) you're not making any assumptions about what is required
to run your tests — you want everything explicitly included.&lt;/li&gt;
&lt;li&gt;An image that &amp;quot;works everywhere&amp;quot; across multiple cloud-providers is
quite a chore.  &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;cloud-init&lt;/span&gt;&lt;/tt&gt; hasn't always had support for
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;config-drive&lt;/span&gt;&lt;/tt&gt; and Rackspace's DHCP-less environment, for
example.  Providers all have their various different networking
schemes or configuration methods which need to be handled
consistently.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you were starting this whole thing again, things like LXC/Docker to
keep &amp;quot;systems within systems&amp;quot; might come into play and help alleviate
some of the packaging conflicts.  Indeed they may play a role in the
future.  But don't forget that DevStack, the major CI deployment
mechanism, was started before Docker existed.  And there's tricky
stuff with networking and Neutron going on.  And things like iSCSI
kernel drivers that containers don't support well.  And you need to
support Ubuntu, Debian, CentOS and Fedora.  And you have hundreds of
developers already relying on what's there.  So change happens
incrementally, and in the mean time, there is a clear need for a
stable, consistent environment.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="minimal-builds"&gt;
&lt;h2&gt;Minimal builds&lt;/h2&gt;
&lt;p&gt;To this end, &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;diskimage-builder&lt;/span&gt;&lt;/tt&gt; now has a series of &amp;quot;minimal&amp;quot;
builds that are really that — systems with essentially nothing on
them.  For Debian and Ubuntu this is achieved via &lt;tt class="docutils literal"&gt;debootstrap&lt;/tt&gt;, for
Fedora and CentOS we replicate this with manual installs of base
packages into a clean &lt;tt class="docutils literal"&gt;chroot&lt;/tt&gt; environment.  We add on a range of
important elements that make the image useful; for example, for
networking, we have &lt;a class="reference external" href="http://docs.openstack.org/developer/diskimage-builder/elements/simple-init/README.html"&gt;simple-init&lt;/a&gt;
which brings up the network consistently across all our providers but
has no dependencies to mess with the base system.  If you check the
&lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements"&gt;elements&lt;/a&gt;
provided by &lt;cite&gt;project-config&lt;/cite&gt; you can see a range of specific elements
that OpenStack Infra runs at each image build (these are actually
specified in arguments to nodepool; see the &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/nodepool.yaml"&gt;config file&lt;/a&gt;,
particularly the &lt;tt class="docutils literal"&gt;diskimages&lt;/tt&gt; section).  These custom elements do things
like caching, using puppet to install the right &lt;tt class="docutils literal"&gt;authorized_keys&lt;/tt&gt;
files and set up a few things needed to connect to the host.  In
general, you can see the logs of an image build provided by &lt;a class="reference external" href="http://nodepool.openstack.org/"&gt;nodepool&lt;/a&gt; for each daily build.&lt;/p&gt;
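&lt;p&gt;As a rough illustration, a &lt;tt class="docutils literal"&gt;diskimages&lt;/tt&gt; entry in the nodepool configuration looks something like the fragment below; the element list and variables here are invented for illustration, and the linked config file is the authoritative version.&lt;/p&gt;

```yaml
# Illustrative only; see the real nodepool.yaml in project-config.
diskimages:
  - name: ubuntu-trusty
    elements:
      - ubuntu-minimal
      - simple-init
      - cache-devstack
    release: trusty
    env-vars:
      DIB_CHECKSUM: '1'
```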
&lt;p&gt;So now, each day at 14:14 UTC nodepool builds the daily images that
will be used for CI testing.  We have &lt;em&gt;one&lt;/em&gt; image of each type that
(theoretically) works across all our providers.  After it finishes
building, nodepool uploads the image to all providers (p.s. the
process of doing this is so insanely terrible it spawned &lt;a class="reference external" href="http://docs.openstack.org/infra/shade/"&gt;shade&lt;/a&gt;; this deserves many posts
of its own) at which point it will start being used for CI jobs.  If
you wish to replicate this entire process, the &lt;a class="reference external" href="https://github.com/openstack-infra/project-config/blob/master/tools/build-image.sh"&gt;build-image.sh&lt;/a&gt;
script, run on an Ubuntu Trusty host in a virtualenv with
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;diskimage-builder&lt;/span&gt;&lt;/tt&gt;, will get you pretty close (let us know of any
issues!).&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="devstack-and-bare-nodes"&gt;
&lt;h2&gt;DevStack and bare nodes&lt;/h2&gt;
&lt;p&gt;There are two major ways OpenStack projects test their changes:&lt;/p&gt;
&lt;ol class="arabic simple"&gt;
&lt;li&gt;Running with &lt;a class="reference external" href="http://docs.openstack.org/developer/devstack/"&gt;DevStack&lt;/a&gt;, which brings up
a small, but fully-functional, OpenStack cloud with the
change-under-test applied.  Generally &lt;a class="reference external" href="http://docs.openstack.org/a/tempest/"&gt;tempest&lt;/a&gt; is then used to ensure
the big-picture things like creating VMs, networks and storage are
all working.&lt;/li&gt;
&lt;li&gt;Unit-testing within the project; i.e. what you do when you type
&lt;tt class="docutils literal"&gt;tox &lt;span class="pre"&gt;-e&lt;/span&gt; py27&lt;/tt&gt; in basically any OpenStack project.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To support this testing, OpenStack CI ended up with the concept of
&lt;em&gt;bare&lt;/em&gt; nodes and &lt;em&gt;devstack&lt;/em&gt; nodes.&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;A &lt;em&gt;bare&lt;/em&gt; node was made for unit-testing.  While &lt;tt class="docutils literal"&gt;tox&lt;/tt&gt; has plenty
of information about installing required Python packages into the
&lt;tt class="docutils literal"&gt;virtualenv&lt;/tt&gt; for testing, it doesn't know anything about the
&lt;em&gt;system packages&lt;/em&gt; required to build those Python packages.  This
means things like &lt;tt class="docutils literal"&gt;gcc&lt;/tt&gt; and library &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;-devel&lt;/span&gt;&lt;/tt&gt; packages which many
Python packages use to build bindings.  Thus the &lt;em&gt;bare&lt;/em&gt; nodes had an
ever-growing and not well-defined list of packages that were
pre-installed during the image-build to support unit-testing.  Worse
still, projects didn't &lt;em&gt;really&lt;/em&gt; know their dependencies but just
relied on their testing working with this global list that was
pre-installed on the image.&lt;/li&gt;
&lt;li&gt;In contrast to this, DevStack has always been able to bootstrap
itself from a blank system to a working OpenStack deployment by
ensuring it has the right dependencies installed.  We don't want any
packages pre-installed here because it hides actual dependencies
that we want explicitly defined within DevStack — otherwise, when a
user goes to deploy DevStack for their development work, things
break because their environment differs slightly to the CI one.  If
you look at all the job definitions in OpenStack, by convention any
job running DevStack has a &lt;tt class="docutils literal"&gt;dsvm&lt;/tt&gt; in the job name — this referred
to running on a &amp;quot;DevStack Virtual Machine&amp;quot; or a &lt;em&gt;devstack&lt;/em&gt; node.  As
the CI environment has grown, we have more and more testing that
isn't DevStack based (puppet apply tests, for example) that rather
confusingly want to run on a &lt;em&gt;devstack&lt;/em&gt; node because they do not
want dependencies installed.  While it's just a name, it can be
difficult to explain!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Thus we ended up maintaining two node-types, where the difference
between them is what was pre-installed on the host — and yes, the
&lt;em&gt;bare&lt;/em&gt; node had &lt;em&gt;more&lt;/em&gt; installed than a &lt;em&gt;devstack&lt;/em&gt; node, so it wasn't
that bare at all!&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="specifying-dependencies"&gt;
&lt;h2&gt;Specifying Dependencies&lt;/h2&gt;
&lt;p&gt;Clearly it is useful to unify these node types, but we still need to
provide a way for the unit-test environments to have their
dependencies installed.  This is where a tool called &lt;a class="reference external" href="http://docs.openstack.org/infra/bindep/"&gt;bindep&lt;/a&gt; comes in.  This tool
gives project authors a way to specify their system requirements in a
similar manner to the way their Python requirements are kept.  For
example, OpenStack has the concept of &lt;em&gt;global requirements&lt;/em&gt; — those
Python dependencies that are common across all projects so version
skew becomes somewhat manageable.  This project now has some extra
information in the &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack/requirements/tree/other-requirements.txt"&gt;other-requirements.txt&lt;/a&gt;
file, which lists the system packages required to build the Python
packages in the global-requirements list.&lt;/p&gt;
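&lt;p&gt;A &lt;tt class="docutils literal"&gt;bindep&lt;/tt&gt; file is a plain list of system packages, each optionally qualified with platform or profile selectors in brackets.  A small illustrative fragment (the package choices here are just for flavour):&lt;/p&gt;

```
# Illustrative bindep entries; selectors choose per-platform packages.
gcc
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
mysql-client [platform:dpkg test]
```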
&lt;p&gt;&lt;tt class="docutils literal"&gt;bindep&lt;/tt&gt; knows how to look at these lists provided by projects and
get the right packages for the platform it is running on.  As part of
the image-build, we have a &lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/cache-bindep"&gt;cache-bindep&lt;/a&gt;
element that can go through every project and build a list of the
packages it requires.  We can thus pre-cache all of these packages
onto the images, knowing that they are required by jobs.  This both
reduces the dependency on external mirrors and improves job
performance (as the packages are locally cached) but doesn't pollute
the system by having everything pre-installed.&lt;/p&gt;
&lt;p&gt;Package installation can now happen the way we really &lt;em&gt;should&lt;/em&gt; be
doing it — as part of the CI job.  There is a job-macro called
&lt;a class="reference external" href="https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/macros.yaml"&gt;install-distro-packages&lt;/a&gt;
which a test can use to call &lt;tt class="docutils literal"&gt;bindep&lt;/tt&gt; to install the packages
specified by the project before the run.  You might notice the script
has a &amp;quot;fallback&amp;quot; list of packages if the project does not specify its
own dependencies — this essentially replicates the environment of a
&lt;em&gt;bare&lt;/em&gt; node as we transition to projects more strictly specifying
their system requirements.&lt;/p&gt;
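&lt;p&gt;The fallback behaviour can be sketched as follows.  This is a hypothetical Python rendering of the logic described above, not the macro itself (which shells out to &lt;tt class="docutils literal"&gt;bindep&lt;/tt&gt;); the file name and fallback list are illustrative.&lt;/p&gt;

```python
import pathlib

# Illustrative stand-in for the global fallback package list.
FALLBACK_PACKAGES = ["gcc", "libffi-dev", "libssl-dev"]

def packages_to_install(project_dir):
    """Prefer the project's own bindep file, else fall back.

    A sketch of the install-distro-packages behaviour; the real
    macro invokes bindep rather than parsing the file itself.
    """
    bindep_file = pathlib.Path(project_dir) / "other-requirements.txt"
    if not bindep_file.exists():
        return FALLBACK_PACKAGES
    packages = []
    for line in bindep_file.read_text().splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            # Keep only the package name, dropping any selectors.
            packages.append(stripped.split()[0])
    return packages
```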
&lt;p&gt;We can now start with a blank image and all the dependencies to run
the job can be expressed &lt;em&gt;by&lt;/em&gt; and &lt;em&gt;within&lt;/em&gt; the project — leading to a
consistent and reproducible environment without any hidden
dependencies.  Several things have broken as part of removing &lt;em&gt;bare&lt;/em&gt;
nodes — this is actually a &lt;em&gt;good&lt;/em&gt; thing because it means we have
revealed areas where we were making assumptions in jobs about what the
underlying platform provides.  There's a few other job-macros that can
do things like provide MySQL/Postgres instances for testing or setup
other common job requirements.  By splitting these types of things out
from base-images we also improve performance, as jobs that don't need
a database no longer waste time setting one up.&lt;/p&gt;
&lt;p&gt;As of this writing, the &lt;tt class="docutils literal"&gt;bindep&lt;/tt&gt; work is new and still a
work-in-progress.  But the end result is that we have no more need for
a separate &lt;em&gt;bare&lt;/em&gt; node type to run unit-tests.  This essentially
halves the number of image-builds required and brings us to the goal
of a single image for each platform running all CI.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="conclusion"&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;While dealing with multiple providers, image-types and dependency
chains has been a great effort for the infra team, to everyone's
credit I don't think the project has really noticed much going on
underneath.&lt;/p&gt;
&lt;p&gt;OpenStack CI has transitioned to a situation where there is a &lt;em&gt;single&lt;/em&gt;
image type for each platform we test that deploys unmodified across
all our providers and runs all testing environments equally.  We have
better insight into our dependencies and better tools to manage them.
This leads to greatly decreased maintenance burden, better consistency
and better performance; all great things to bring to OpenStack CI!&lt;/p&gt;
&lt;/div&gt;
</content><category term="openstack"></category></entry><entry><title>Durable photo workflow</title><link href="https://www.technovelty.org/junkcode/durable-photo-workflow.html" rel="alternate"></link><published>2016-03-30T15:13:00+11:00</published><updated>2016-03-30T15:13:00+11:00</updated><author><name>Ian Wienand</name></author><id>tag:www.technovelty.org,2016-03-30:/junkcode/durable-photo-workflow.html</id><summary type="html">&lt;p&gt;Ever since my kids were born I have accumulated thousands of digital
happy-snaps and I have finally gotten to a point where I'm quite happy
with my work-flow.  I have always been extremely dubious of using any
sort of external all-in-one solution to managing my photos; so many
things seem …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Ever since my kids were born I have accumulated thousands of digital
happy-snaps and I have finally gotten to a point where I'm quite happy
with my work-flow.  I have always been extremely dubious of using any
sort of external all-in-one solution to managing my photos; so many
things seem to shut-down, cease development or disappear, all leaving
you to have to figure out how to migrate to the next latest thing
(e.g.  &lt;a class="reference external" href="http://googlephotos.blogspot.com/"&gt;Picasa shutting down&lt;/a&gt;).
So while there is nothing complicated or even generic about them,
there are a few things in my &lt;a class="reference external" href="https://github.com/ianw/photo-scripts"&gt;photo-scripts&lt;/a&gt; repo that might help others
who like to keep a self-contained archive.&lt;/p&gt;
&lt;p&gt;Firstly I have a &lt;a class="reference external" href="https://github.com/ianw/photo-scripts/blob/master/getlatestimages.sh"&gt;simple script&lt;/a&gt;
to copy the latest photos from the SD card (i.e. those new since the
last copy -- this is obviously very camera specific).  I then &lt;a class="reference external" href="https://github.com/ianw/photo-scripts/blob/master/move-by-date.py"&gt;split
by date&lt;/a&gt;
so I have a simple flat directory layout with each week's photos in
it.  With the price of SD cards and my rate of filling them up, I
don't even bother wiping them at this point, but just keep them in the
safe as a backup.&lt;/p&gt;
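&lt;p&gt;The split-by-date step amounts to bucketing each photo into a weekly directory.  A rough sketch of the idea (the linked move-by-date.py is the authoritative version; the naming scheme and use of file modification time here are illustrative choices):&lt;/p&gt;

```python
import datetime
import pathlib
import shutil

def week_directory(taken):
    """Bucket a timestamp into an ISO year/week directory name.

    Illustrative only: move-by-date.py may key off EXIF dates and
    use a different layout entirely.
    """
    year, week, _ = taken.isocalendar()
    return f"{year}-week-{week:02d}"

def sort_photo(photo, archive):
    """Move a photo into its weekly bucket under the archive root."""
    photo = pathlib.Path(photo)
    taken = datetime.datetime.fromtimestamp(photo.stat().st_mtime)
    dest = pathlib.Path(archive) / week_directory(taken)
    dest.mkdir(parents=True, exist_ok=True)
    return shutil.move(str(photo), str(dest))
```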
&lt;p&gt;For some reason I have a bit of a thing about geotagging all the
photos so I know where I took them.  Certainly some cameras do this
today, but mine does not.  I have a two-pronged approach: I have a
&lt;a class="reference external" href="https://github.com/ianw/photo-scripts/blob/master/geotag.sh"&gt;geotag script&lt;/a&gt; and
then a small website &lt;a class="reference external" href="http://www.easygeotag.info"&gt;easygeotag.info&lt;/a&gt;
which quickly lets me translate a point on Google maps to &lt;tt class="docutils literal"&gt;exiv2&lt;/tt&gt;
command-line syntax.  Since I take a lot of photos in the same place,
the script can store points by name in a small file sourced by the
script.&lt;/p&gt;
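&lt;p&gt;The translation from a decimal map coordinate into &lt;tt class="docutils literal"&gt;exiv2&lt;/tt&gt; modify commands looks roughly like this.  The GPS tag names and the &lt;tt class="docutils literal"&gt;-M&lt;/tt&gt; &amp;quot;set&amp;quot; syntax come from exiv2 itself, but treat the code as an illustrative sketch rather than the contents of the linked script:&lt;/p&gt;

```python
def to_rational_dms(value):
    """Decimal degrees -> EXIF degrees/minutes/seconds rationals."""
    value = abs(value)
    degrees = int(value)
    minutes = int((value - degrees) * 60)
    seconds = round((value - degrees - minutes / 60) * 3600 * 100)
    return f"{degrees}/1 {minutes}/1 {seconds}/100"

def exiv2_geotag_args(lat, lon):
    """Build exiv2 modify-command arguments setting a GPS location.

    A sketch of what a geotag helper might emit; the arguments are
    passed to exiv2 followed by the photo file name.
    """
    return [
        f"-Mset Exif.GPSInfo.GPSLatitude {to_rational_dms(lat)}",
        f"-Mset Exif.GPSInfo.GPSLatitudeRef {'N' if lat >= 0 else 'S'}",
        f"-Mset Exif.GPSInfo.GPSLongitude {to_rational_dms(lon)}",
        f"-Mset Exif.GPSInfo.GPSLongitudeRef {'E' if lon >= 0 else 'W'}",
    ]
```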
&lt;p&gt;Adding comments to the photos is done with perhaps the lesser-known
cousin of EXIF -- IPTC.  Some time ago I wrote python bindings for
&lt;a class="reference external" href="http://libiptcdata.sourceforge.net"&gt;libiptcdata&lt;/a&gt; and it has been
working just fine ever since.  Debian's &lt;a class="reference external" href="https://packages.debian.org/search?keywords=python-iptcdata"&gt;python-iptcdata&lt;/a&gt; comes
with an inbuilt script to set title and caption, which is easily
&lt;a class="reference external" href="https://github.com/ianw/photo-scripts/blob/master/tag-photos.sh"&gt;wrapped&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;What I like about this is that my photos are in a simple directory
layout, with all metadata embedded within the actual image files in
very standardised formats that should be readable anywhere I choose
to host them.&lt;/p&gt;
&lt;p&gt;For sharing, I then upload to Flickr.  I used to have a command-line
script for this, but have found the web uploader works even better
these days.  It reads the IPTC data for titles and comments, and gets
the geotag info for nice map displays.  I manually corral them into
albums, and the Flickr &amp;quot;guest pass&amp;quot; is perfect for then sharing albums
to friends and family without making them jump through hoops to
register on a site to get access to the photos, or worse, host them
myself.  I consider Flickr a cache, because (even though I pay) I
expect it to shut-down or turn evil at any time.  Interestingly, their
AI tagging is often quite accurate, and I imagine will only get
better.  This is nice extra metadata that you don't have to spend time
on yourself.&lt;/p&gt;
&lt;p&gt;The last piece has always been the &amp;quot;hit by a bus&amp;quot; component of all
this.  Can anyone figure out access to all these photos if I suddenly
disappear?  I've tried many things here -- at one point I was using
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;rdiff-backup&lt;/span&gt;&lt;/tt&gt; to sync encrypted bundles up to AWS for example; but
I very clearly found the problem when &lt;em&gt;I&lt;/em&gt; forgot to keep the
key safe and couldn't decrypt any of my backups (let alone anyone
else figuring all this out).&lt;/p&gt;
&lt;p&gt;Finally &lt;a class="reference external" href="https://cloud.google.com/storage/docs/nearline"&gt;Google Nearline&lt;/a&gt; seems to be just
what I want.  It's off-site, redundant and the price is right; but
more importantly I can very easily give access to the backup bucket to
anyone with a Google address, who can then just hit a website to
download the originals from the bucket (I left the link with my other
&amp;quot;hit by a bus&amp;quot; bits and pieces).  Of course what they then do with
this data is their problem, but at least I feel like they have a
chance.  This even has an &lt;tt class="docutils literal"&gt;rsync&lt;/tt&gt; like interface in the client, so I
can quickly upload the new stuff from my home NAS (where I keep the
photos in a RAID0).&lt;/p&gt;
&lt;p&gt;I've been doing this now for 350 weeks and have worked through some
25,000 photos.  I used to get an album up every week, but as the kids
get older and we're closer to family I now do it in batches about once
a month.  I do wonder if my kids will ever be interested in tagged and
commented photos with pretty much their exact location from their
childhood ... I doubt it, but it's nice to feel like I have a good
chance of still having them if they do.&lt;/p&gt;
</content><category term="junkcode"></category></entry></feed>