<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[👨‍💻 Funky Penguin]]></title><description><![CDATA[Levelling up your geek-fu]]></description><link>https://www.funkypenguin.co.nz/</link><image><url>https://www.funkypenguin.co.nz/favicon.png</url><title>👨‍💻 Funky Penguin</title><link>https://www.funkypenguin.co.nz/</link></image><generator>Ghost 3.39</generator><lastBuildDate>Wed, 20 Jan 2021 02:58:18 GMT</lastBuildDate><atom:link href="https://www.funkypenguin.co.nz/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[How to transfer data between Kubernetes Persistent Volumes (quick and dirty hack)]]></title><link>https://www.funkypenguin.co.nz/blog/migrate-data-between-kubernetes-persistent-volumes/</link><guid isPermaLink="false">60010a0335004a00ae012cf2</guid><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Fri, 15 Jan 2021 03:41:56 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1574127157804-25a46105bde1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDd8fGNsb25lfGVufDB8fHw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded/></item><item><title><![CDATA[Giving KeyCloak users Admin privileges in Grafana with OIDC]]></title><link>https://www.funkypenguin.co.nz/blog/giving-keycloak-users-admin-privileges-in-grafana-via-oidc/</link><guid isPermaLink="false">5ffe564535004a00ae012ca9</guid><category><![CDATA[grafana]]></category><category><![CDATA[keycloak]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Wed, 13 Jan 2021 02:35:51 GMT</pubDate><media:content 
url="https://images.unsplash.com/photo-1566140967404-b8b3932483f5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDJ8fGxlZ298ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded/></item><item><title><![CDATA[JAJC Manual]]></title><description><![CDATA[<p>In preparing an author bio (harder than it sounds!) for a PHPList book I've been authoring, I dug up this old copy of a <a href="http://jajc.jrudevels.org">JAJC</a> manual (<em>a crusty ol' win32 jabber client</em>)  I authored in 2003, using Docbook XML.</p><p>It's probably no longer valid, but <a href="https://static.funkypenguin.co.nz/2003/jajc_manual.pdf">here</a> it is anyway, for</p>]]></description><link>https://www.funkypenguin.co.nz/blog/jajc-manual/</link><guid isPermaLink="false">5ff58b914d92d6013e3032f3</guid><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Wed, 06 Jan 2021 10:10:16 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1584844115436-473887b1e327?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDEzfHxkaW5vc2F1cnxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1584844115436-473887b1e327?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDEzfHxkaW5vc2F1cnxlbnwwfHx8&ixlib=rb-1.2.1&q=80&w=2000" alt="JAJC Manual"><p>In preparing an author bio (harder than it sounds!) 
for a PHPList book I've been authoring, I dug up this old copy of a <a href="http://jajc.jrudevels.org">JAJC</a> manual (<em>a crusty ol' win32 jabber client</em>) I authored in 2003, using Docbook XML.</p><p>It's probably no longer valid, but <a href="https://static.funkypenguin.co.nz/2003/jajc_manual.pdf">here</a> it is anyway, for posterity :) </p><p>Sadly, the Docbook XML source is lost in time - it would have been fun to compare it to the sexiness that is <a href="http://daringfireball.net/projects/markdown/">markdown</a> and <a href="http://www.leanpub.com">leanpub</a> today!</p>]]></content:encoded></item><item><title><![CDATA[Applying PSPs to istio-cni-node pods]]></title><description><![CDATA[<p>I work on Kubernetes clusters which are a whole lot more locked-down than typical installation instructions / application defaults would suggest. In one such cluster, we use PodSecurityPolicies to apply a minimal set of privileges to each pod, and make exceptions on a case-by-case basis.</p><p>On the same cluster, we use</p>]]></description><link>https://www.funkypenguin.co.nz/blog/applying-psp-to-istio-cni-node-pods/</link><guid isPermaLink="false">5fd3e49a4d92d6013e3032bc</guid><category><![CDATA[kubernetes]]></category><category><![CDATA[istio]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Fri, 11 Dec 2020 21:40:33 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1508345228704-935cc84bf5e2?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8bG9ja3xlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1508345228704-935cc84bf5e2?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDN8fGxvY2t8ZW58MHx8fA&ixlib=rb-1.2.1&q=80&w=2000" alt="Applying PSPs to istio-cni-node pods"><p>I work on Kubernetes clusters which are a whole lot more locked-down than typical installation instructions / application defaults would suggest. In one such cluster, we use PodSecurityPolicies to apply a minimal set of privileges to each pod, and make exceptions on a case-by-case basis.</p><p>On the same cluster, we use the Istio service mesh to secure traffic between our pods using mutual TLS. We take advantage of Istio's CNI plugin to allow the Istio sidecar to inject the "traffic interception" rules when pods start up, without requiring privileged access for every pod with a sidecar.</p><p>The CNI plugin creates a daemonset (<em>a pod per node</em>), which requires privileged access to inject the interception rules. Our default, restrictive PSP prevents these istio-cni-node pods from ever starting though, as illustrated below:</p><pre><code>  Type     Reason        Age                    From                  Message
  ----     ------        ----                   ----                  -------
  Warning  FailedCreate  114s (x17 over 7m22s)  daemonset-controller  Error creating: pods "istio-cni-node-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0].hostPath.pathPrefix: Invalid value: "/opt/cni/bin": is not allowed to be used spec.volumes[1].hostPath.pathPrefix: Invalid value: "/etc/cni/net.d": is not allowed to be used]</code></pre><p>The error above is pointing out that PSPs (<em>quite rightly</em>) prevented an arbitrary pod from mounting critical host directories, and having its way with them.</p><p>In this case, access to <code>/opt/cni/bin</code> and <code>/etc/cni/net.d</code> is a requirement for using Istio CNI (<em>and the alternative of allowing <strong>every</strong> pod privileged access is <strong>much</strong> worse!</em>), so we deploy a PSP, ClusterRole, and ClusterRoleBinding as illustrated below (<em>you can grab a copy <a href="https://gist.github.com/funkypenguin/e8ed46c118a4b77af066400fa8c88f28">here</a></em>):</p><!--kg-card-begin: html--><script src="https://gist.github.com/funkypenguin/e8ed46c118a4b77af066400fa8c88f28.js"></script><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Pin a Kubernetes pod to the current node to avoid (hostPath) data loss]]></title><description><![CDATA[<h2 id="tl-dr">TL;DR </h2><p>here's a handy one-liner to pin a running pod to the node it's currently on:</p><pre><code>kubectl patch deployment -n $NAMESPACE $DEPLOYMENT -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n $NAMESPACE -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current</code></pre>]]></description><link>https://www.funkypenguin.co.nz/blog/pin-a-kubernetes-pod-to-the-current-node/</link><guid isPermaLink="false">5fd083674d92d6013e303208</guid><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[David 
Young]]></dc:creator><pubDate>Wed, 09 Dec 2020 08:55:06 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1528578577235-b963df6db908?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDE5fHxwaW58ZW58MHx8fA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="tl-dr">TL;DR </h2><img src="https://images.unsplash.com/photo-1528578577235-b963df6db908?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDE5fHxwaW58ZW58MHx8fA&ixlib=rb-1.2.1&q=80&w=2000" alt="Pin a Kubernetes pod to the current node to avoid (hostPath) data loss"><p>Here's a handy one-liner to pin a running pod to the node it's currently on:</p><pre><code>kubectl patch deployment -n $NAMESPACE $DEPLOYMENT -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n $NAMESPACE -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of $DEPLOYMENT pod; exit 1)</code></pre><hr><h2 id="the-long-version">The long version</h2><p>I've been helping the <a href="https://portainer.io">Portainer</a> team with a <a href="https://github.com/portainer/k8s">helm chart</a> for their new v2, Kubernetes-supporting version. Recently the boss told me:</p><p>"<em>Sometimes, when using one of these small/development, multi-node Kubernetes clusters like k3s or microk8s, Kubernetes will schedule the pod to a particular node, but when the pod moves to a different node, the data is lost. Find a way to ensure that the pod always remains on the same node</em>"!</p><p>"<em>Nonsense</em>", I replied. "<em>The Kubernetes storage provisioner will be smart enough to ensure that an allocated PV doesn't just move to a different node</em>". And to prove how smart I was, I illustrated by creating a multi-node KinD cluster:</p><pre><code>❯ cat kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
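# two workers, so that a pod can potentially be rescheduled onto a different node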

❯ kind create cluster --config kind.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊</code></pre><p>I created the namespace, added the helm repo, and deployed the chart:</p><pre><code>
❯ kubectl create namespace portainer
namespace/portainer created
❯ helm repo add portainer https://portainer.github.io/k8s/
❯ helm repo update
❯ helm upgrade --install -n portainer portainer portainer/portainer
Release "portainer" does not exist. Installing it now.
NAME: portainer
LAST DEPLOYED: Wed Dec  9 21:08:09 2020
NAMESPACE: portainer
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[0].nodePort}" services portainer)
  export NODE_IP=$(kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT</code></pre><p>I examined the PV created by the deployment and saw, as expected, a nodeSelector:</p><pre><code>&gt; kubectl get pv -o yaml
&lt;snip&gt;
nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kind-worker</code></pre><p>"<em>Boom!</em>", I said. "<em>There's no problem, because Kubernetes won't let the pod run on a different node, due to the nodeSelector</em>".</p><h2 id="not-so-fast-">Not so fast!</h2><p>"<em>Try microk8s</em>", the boss said, "<em>it happens all the time...</em>"</p><p>So I did. Grumbling about how much harder it is to setup a multi-node microk8s environment, I used <a href="https://multipass.run">Multipass</a> to create 2 Ubuntu 20.04 VMs, and then followed the <a href="https://microk8s.io/docs/clustering">instructions re setting up a microk8s cluster</a>.</p><p>Sure enough, as it turns out, when I examined the microk8s PV, there was no nodeSelector. Microk8s, it turns out, uses a simple hostPath-type provisioner!</p><h2 id="where-s-my-data">Where's my data?</h2><p>So this presents a problem for any application deployed on a multi-node microk8s cluster, as well as any other cluster using a hostPath-based storage provisioner. We came up with what I think is an elegant solution though..</p><p>This command will return the current node of a pod (<em>provided that pod has been scheduled</em>):</p><pre><code>kubectl get pods &lt;podname&gt; -o jsonpath='{ ..nodeName }'</code></pre><p>And <strong>this</strong> command will patch a deployment, adding a nodeSelector:</p><pre><code>kubectl patch deployments &lt;deploymentname&gt; -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "&lt;nodename&gt;"}}}}}'</code></pre><p>Combined, we get this neat little command, a variation which is now featured on the Portainer install docs:</p><pre><code>kubectl patch deployment -n $NAMESPACE $DEPLOYMENT -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n $NAMESPACE -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of $DEPLOYMENT pod; exit 1)</code></pre><p>It should be noted that pinning a pod to a node obviously reduces resiliency in the 
event that a node fails, and something like this shouldn't be attempted seriously in production. If you're using microk8s though, you're probably <em>not</em> in serious production, so go wild!</p><p>BTW, this is what I do, all day, every day. I enjoy it, and I'm good at it. If this sort of stuff is what you need, I'd be interested to <a href="https://www.funkypenguin.co.nz/work-with-me">work with you</a>.</p>]]></content:encoded></item><item><title><![CDATA[Why I won't be returning the (UK English) iPad Magic Keyboard I bought in error]]></title><description><![CDATA[I bought the wrong iPad Magic Keyboard, but discovered a neat trick which made it worth hanging onto]]></description><link>https://www.funkypenguin.co.nz/blog/why-i-wont-be-returning-the-uk-english-ipad-magic-keyboard-i-bought-in-error/</link><guid isPermaLink="false">5fc717b3b27d2a01438d6fa3</guid><category><![CDATA[blog]]></category><category><![CDATA[ipad]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Tue, 19 May 2020 09:19:06 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1589070230454-24cb4ddf91e4?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDl8fHxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1589070230454-24cb4ddf91e4?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDl8fHxlbnwwfHx8&ixlib=rb-1.2.1&q=80&w=2000" alt="Why I won't be returning the (UK English) iPad Magic Keyboard I bought in error"><p>I've been spending a lot more time with my iPad Pro, since the screen of my Macbook Pro cracked a few days into New Zealand's COVID-19 lockdown.</p><p>The 2020 iPad Pro itself was a long-awaited purchase, and it replaced my 2017 9.7" iPad Pro (<em>which I didn't use much, since the screen was too small for comfort</em>)</p><p>Enticed by posts like the one below (❤️ <em>ya' sweet-setup!</em>), I pulled the 
trigger on ordering the Magic Keyboard to replace the Smart Folio I'd originally bought (<em>it's up for sale if you're interested</em>)</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://thesweetsetup.com/magic-keyboard-turning-the-ipad-into-something-new/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Magic Keyboard: Turning the iPad Into Something New – The Sweet Setup</div><div class="kg-bookmark-description">Our accounting office is right next to a Telus store full of Android and Windows fanatics. I haven’t been able to get any person in the store to even consider an iPhone or Mac for themselves, let alone convince them the iPad is a great business device. The Magic Keyboard is the first accessory that …</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://thesweetsetup.com/apple-touch-icon.png" alt="Why I won't be returning the (UK English) iPad Magic Keyboard I bought in error"><span class="kg-bookmark-author">Josh Ginter</span><span class="kg-bookmark-publisher">The Sweet Setup</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://thesweetsetup.com/wp-content/uploads/2020/04/iPad-Magic-Keyboard-26.jpeg" alt="Why I won't be returning the (UK English) iPad Magic Keyboard I bought in error"></div></a></figure><p>When the keyboard finally arrived, it didn't disappoint. The keys are luxurious and "<em>tappy</em>", unlike the cloth-covered smart folio case, and the backlight is just what I needed for late-night geeking. 
The trackpad <strong>is</strong> a gamechanger for serious work away from my desk.</p><p>However...</p><p>It turns out that the default keyboard layout offered on the Apple Store in New Zealand is UK English, and not US English, as evidenced below:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://static.funkypenguin.co.nz/Magic_Keyboard_for_12.9-inch_iPadPro_4thGeneration_British_English_-_Apple_NZ_2020-05-19_15-25-20.png" class="kg-image" alt="Why I won't be returning the (UK English) iPad Magic Keyboard I bought in error"><figcaption>The NZ Apple Store's default language for the Magic Keyboard</figcaption></figure><p>For some reason, the Magic Keyboard for the 11-inch gets to default to US English.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://static.funkypenguin.co.nz/magic_keyboard_-_Apple_NZ_2020-05-19_15-45-41.png" class="kg-image" alt="Why I won't be returning the (UK English) iPad Magic Keyboard I bought in error"><figcaption>Got a puny 11-inch iPad? US English for you!</figcaption></figure><p>Upset that my new keyboard (which costs almost as much as a new entry-level iPad!) had smaller keys than its US counterpart, not to mention extra keys I didn't need, I contacted Apple to arrange a return, and placed my order for the US English version instead, resigned to wait the 3-4 weeks for delivery. Apple agreed that I could continue to use the keyboard, provided I was able to return it undamaged on receipt of my replacement.</p><h2 id="what-changed-my-mind">What changed my mind</h2><p>So what changed my mind? I've been trialling the "<a href="https://apps.apple.com/us/app/blink-shell-mosh-ssh-client/id1156707581">Blink Shell</a>" app as an SSH shell on the iPad. My first choice, <a href="https://termius.com/">Termius</a>, annoyingly doesn't permit text-selection using iPadOS's cursor. 
I'm less than pleased that they've not responded at all to my request, either:</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet" data-width="550"><p lang="en" dir="ltr">Hey <a href="https://twitter.com/TermiusHQ?ref_src=twsrc%5Etfw">@TermiusHQ</a> , any plans to add cursor support to the iPadOS app? I&#39;d love to click-drag to select a block of text from the console...</p>&mdash; David Young (@funkypenguin) <a href="https://twitter.com/funkypenguin/status/1254522796517908481?ref_src=twsrc%5Etfw">April 26, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p>Blink is certainly less polished than Termius, and I still haven't mastered SSH agent-forwarding, but it <strong>does</strong> let me click and drag with my cursor to select large blocks of text.</p><p>The killer feature of Blink here is that it allows you to create <strong>any</strong> custom keymappings. So I discovered that I can remap the otherwise useless symbol at the top left of the British keyboard, to the ESC key!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://static.funkypenguin.co.nz/remap-silly-british-key-in-blink-to-escape.png" class="kg-image" alt="Why I won't be returning the (UK English) iPad Magic Keyboard I bought in error"><figcaption>Go, go, Blink!</figcaption></figure><p>Now not only do I have an escape key which I can use in all sorts of terminal sequences, but I haven't lost the use of any of the <em>other</em> keys (<em>including the ~ key, which would have been my go-to escape key on the US English version</em>).</p><p>So, I just pushed the "cancel order" button on my pending US English Magic Keyboard, and I'm now resigned to these smaller English buttons, in return for having the next-best thing to a "physical" escape key!</p>]]></content:encoded></item><item><title><![CDATA[Self-signed certificates on MutatingWebhooks require double-encryption]]></title><description><![CDATA[<p>I’m using <a href="https://github.com/cybozu-go">Cybozu</a>’s <a href="https://github.com/cybozu-go/topolvm">TopoLVM</a> to provide local LVM-based storage to a bare-metal Kubernetes cluster, in an intelligent fashion.</p><p>I’m also using Jetstack’s <a href="https://cert-manager.io">cert-manager</a> with the <code>--namespace</code> argument, to watch for certificate resources in a particular namespace only, so I wasn’t able to use cert-manager with</p>]]></description><link>https://www.funkypenguin.co.nz/blog/self-signed-certificate-on-mutating-webhook-requires-double-encryption/</link><guid 
isPermaLink="false">5fc717b3b27d2a01438d7049</guid><category><![CDATA[note]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Sun, 01 Mar 2020 05:22:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1487541711790-6d41c9d873dd?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1487541711790-6d41c9d873dd?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Self-signed certificates on MutatingWebhooks require double-encryption"><p>I’m using <a href="https://github.com/cybozu-go">Cybozu</a>’s <a href="https://github.com/cybozu-go/topolvm">TopoLVM</a> to provide local LVM-based storage to a bare-metal Kubernetes cluster, in an intelligent fashion.</p><p>I’m also using Jetstack’s <a href="https://cert-manager.io">cert-manager</a> with the <code>--namespace</code> argument, to watch for certificate resources in a particular namespace only, so I wasn’t able to use cert-manager with TopoLVM, which is normally a pre-requisite.</p><p>The <a href="https://github.com/cybozu-go/topolvm/tree/master/deploy">deployment docs</a> tell me that I can avoid cert-manager if I use a self-signed certificate for the TopoLVM mutatingwebhook, which I thought wouldn’t be too difficult. I ran the following to generate the necessary cert, key, and cacert (<em>valid for 100 years</em>):</p><pre><code>openssl genrsa -out rootCA.key 4096

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 36500 -out rootCA.crt

openssl genrsa -out controller.topolvm-system.svc.key 2048 

openssl req -new -sha256 -days 36500  -key controller.topolvm-system.svc.key -subj '/CN=controller.topolvm-system.svc' -out controller.topolvm-system.svc.csr

openssl x509 -req -in controller.topolvm-system.svc.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out controller.topolvm-system.svc.crt -days 36500 -sha256

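# The crucial extra step: base64-encode the *entire* PEM file (BEGIN/END
# lines included) to produce the value for the webhook's caBundle field.
# (the tr -d '\n' is an assumption, to undo GNU base64's line-wrapping)
cat rootCA.crt | base64 | tr -d '\n'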
</code></pre><p>The documentation said to add the <code>caBundle</code> field to the mutatingwebhook YAML in PEM format, so I initially added the entire <code>rootCA.crt</code>, including <code>-----BEGIN CERTIFICATE-----</code> and <code>-----END CERTIFICATE-----</code>. This failed due to bad base64 encoding, so I removed the <code>BEGIN</code> and <code>END</code> lines, and the PEM data from the certificate was accepted as valid base64.</p><p>However, the webhook wasn’t trusted, and any pods I deployed failed with messages about <code>certificate signed by unknown authority</code>.</p><p>Turns out what was required was to base64-encode the PEM file, and paste the <strong>resulting</strong> base64-encoded string into <code>caBundle</code>. I.e., I set <code>caBundle</code> to the output of <code>cat rootCA.crt | base64</code>.</p><p>I found some <a href="https://github.com/kubernetes/kubernetes/issues/61171">kindred fellow-sufferers</a>, whose confusion and eventual frustration echo my own!</p>]]></content:encoded></item><item><title><![CDATA[Istio Namespace Isolation]]></title><description><![CDATA[<p>I’m working on a project which requires a CockroachDB instance in multiple namespaces (prod/uat/dev), in an Istio-enabled Kubernetes cluster.</p><p>There are currently some pretty significant drawbacks to <a href="https://github.com/istio/istio/issues/9784">using Istio with “headless TCP” services</a>, one of which is that you can only have a single instance of a service</p>]]></description><link>https://www.funkypenguin.co.nz/blog/istio-namespace-isolation-tricks/</link><guid isPermaLink="false">5fc717b3b27d2a01438d7048</guid><category><![CDATA[istio]]></category><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Sat, 14 Dec 2019 04:19:00 GMT</pubDate><media:content 
url="https://images.unsplash.com/photo-1516912938509-1fee231102aa?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1516912938509-1fee231102aa?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Istio Namespace Isolation"><p>I’m working on a project which requires a CockroachDB instance in multiple namespaces (prod/uat/dev), in an Istio-enabled Kubernetes cluster.</p><p>There are currently some pretty significant drawbacks to <a href="https://github.com/istio/istio/issues/9784">using Istio with “headless TCP” services</a>, one of which is that you can only have a single instance of a service with a specific TCP port, within the entire service mesh. So, no multiple CockroachDB instances on port 26257.</p><p>The problem hinges on the fact that for TCP-based services, Istio can only intercept traffic to a given TCP-based service by recognizing its port number.</p><p>A useful workaround to this issue, however, is Istio’s namespace isolation. Namespace isolation allows you to configure the Envoy sidecar proxies to access only a subset of services on the mesh, within the scope of a namespace.</p><h2 id="how-would-you-use-istio-namespace-isolation">How would you use Istio namespace isolation?</h2><p>My project is an easy example. Say you have 3 namespaces (<em>dev/uat/prod</em>), each of which should talk to their own CockroachDB instance (in their namespace), but be blissfully unaware of any <strong>other</strong> CockroachDB instances in other namespaces.</p><p>What you want is a <strong>“Sidecar”</strong> custom resource, which limits the scope of the service mesh config deployed to your sidecars. 
For example, the following CR restricts istio-proxy sidecars to being “aware” of only other services in the same namespace, or in the istio-system namespace:</p><pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: istio-system
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
</code></pre><p>If this Sidecar is applied within the “cluster mesh” namespace (<em>istio-system by default</em>), and it doesn’t explicitly match any source workloads, then the default will apply to all namespaces. This is a simple way to create a default namespace isolation policy.</p><p>Or so I thought…</p><p>Turns out there’s a wrinkle that’s outlined in <a href="https://istio.io/docs/reference/config/networking/sidecar/">the docs</a>:</p><blockquote>NOTE 2: A Sidecar configuration in the MeshConfig root namespace will be applied by default to all namespaces without a Sidecar configuration. This global default Sidecar configuration should not have any workloadSelector.</blockquote><p>The subtle implication here is that if you:</p><p>a) Set up a default Sidecar resource in the <code>istio-system</code> namespace, and b) Create a Sidecar resource in your namespace with a workloadSelector which doesn’t include all pods (<em>for example, permitting a pod named “logmaster” to connect to pod “logcatcher” in namespace “logmonkey”</em>)</p><p>Then any pods <strong>not</strong> matched by the aforementioned workloadSelector will have <strong>NO</strong> Sidecar resource applied, and thus <strong>no namespace isolation</strong>.</p><p>To reiterate:</p><p>If you want namespace isolation, either specify a default Sidecar resource in your MeshConfig root namespace (<em>default istio-system</em>) and don’t create any more Sidecar resources in your target namespace, <strong>OR</strong> create a default Sidecar resource in each namespace, followed by more workload-specific Sidecar resources.</p><p>In my case, since I legitimately need <strong>some</strong> pods in each prod/uat/dev namespace to be able to communicate with select services in different namespaces, I’ve created a default Sidecar resource in each namespace, which restricts all pods to communicating only within their own namespace:</p><pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: istio-config
  namespace: prod
spec:
  egress:
  - hosts:
    - "./*"
</code></pre><p>Having established this per-namespace default, I can now apply more lenient Sidecar resources to only a subset of my pods, as in the following example:</p><pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: permit-logs-to-logmonkey
spec:
  egress:
  - hosts:
    - ./*
    - logmonkey/logcatcher.logmonkey.svc.cluster.local
  workloadSelector:
    labels:
      app.kubernetes.io/name: batman-app
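# Only pods labelled app.kubernetes.io/name=batman-app receive this wider
# egress config; all other pods keep the namespace-only default Sidecar.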
</code></pre><p>Now I can happily spin up multiple headless TCP services in separate namespaces, without having Istio arbitrarily forward the traffic to the first-available service listening on a particular port :)</p><h2 id="update-29-apr-2020-">Update (29 Apr 2020)</h2><p>Per <a href="https://github.com/istio/istio/issues/15329">these</a> <a href="https://discuss.istio.io/t/istio-namespace-isolation/6226/3">threads</a>, the isolation described above is only applied if you’ve also set <code>global.outboundTrafficPolicy.mode</code> to <code>REGISTRY_ONLY</code>, rather than the more common value of <code>ALLOW_ANY</code>!</p><p>Got any questions / suggestions? I’m an Istio-n00b too, <a href="https://twitter.com/funkypenguin">hit me up!</a></p>]]></content:encoded></item><item><title><![CDATA[Automatically convert deprecated APIs for Kubernetes 1.16]]></title><description><![CDATA[<p>When Kubernetes 1.16 was released, <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/">several beta APIs were deprecated</a> in favor of their stable counterparts. 
This means that your deployments will <strong>fail</strong> if you’re still using the old APIs (<em>as hundreds of public helm charts are!</em>)</p><p>As release manager Lachlan Evenson mentioned in his <a href="https://kubernetespodcast.com/episode/072-kubernetes-1.16/">Kubernetes Podcast interview</a></p>]]></description><link>https://www.funkypenguin.co.nz/blog/transmogrify-deprecated-kubernetes-apis/</link><guid isPermaLink="false">5fc717b3b27d2a01438d7047</guid><category><![CDATA[kubernetes]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Sat, 05 Oct 2019 16:25:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1596248723887-3b002a4c1c90?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MXwxMTc3M3wwfDF8c2VhcmNofDN8fHxlbnwwfHx8&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1596248723887-3b002a4c1c90?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MXwxMTc3M3wwfDF8c2VhcmNofDN8fHxlbnwwfHx8&ixlib=rb-1.2.1&q=80&w=2000" alt="Automatically convert deprecated APIs for Kubernetes 1.16"><p>When Kubernetes 1.16 was released, <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/">several beta APIs were deprecated</a> in favor of their stable counterparts. This means that your deployments will <strong>fail</strong> if you’re still using the old APIs (<em>as hundreds of public helm charts are!</em>)</p><p>As release manager Lachlan Evenson mentioned in his <a href="https://kubernetespodcast.com/episode/072-kubernetes-1.16/">Kubernetes Podcast interview</a>, the intention of the deprecation was to “forcefully nudge” the community towards using the stable APIs, already available for 7 versions:</p><pre><code>LACHLAN EVENSON: Let me start by saying that this is the first release that we've had a
big API deprecation, so the proof is going to be in the pudding.

CRAIG BOX: Yes.

LACHLAN EVENSON: And we do have an API deprecation policy. So as you mentioned, Craig, 
the Apps v1 has been around since 1.9. If you go and read the API deprecation policy, 
you can see that we have a three-release announcement. So around the 1.12, 1.13 time 
frame, we actually went and announced this deprecation, and over the last few releases,
we've been reiterating that.

But really, what we want to do is get the whole community on those stable APIs because it
really starts to become a problem when we're supporting all these many now-deprecated APIs,
and people are building tooling around them and trying to build reliable tooling. So this
is the first test for us to move people, and I'm sure it will break a lot of tools that
depend on things. But I think in the long run, once we get onto those stable APIs, people 
can actually guarantee that their tools work, and it's going to become easier in the long 
run.

So we've put quite a bit of work in announcing this. There was a blog sent out about six 
months ago by Valerie Lancey in the Kubernetes community which said, hey, go use 'kubectl
convert', where you can actually say, I want to convert this resource from this API version 
to that API version, and it actually makes that really easy. But I think there'll be some 
problems in the ecosystem, but we need to do this going forward, pruning out the old APIs 
and making sure that people are on the stable ones.
</code></pre><p>So what are you to do if you want to deploy a public helm chart, which has not yet been updated for the deprecated APIs?</p><p>You <strong>could</strong> fork, update, and publish your own helm repo. It’s <a href="https://funkypenguin.github.io/helm-charts/">do-able</a>, but that’s a lot of work!</p><p>Eventually, annoyed at the process, I wrote a <a href="https://github.com/funkypenguin/k8s-transmogrifier">little script</a> to “transmogrify” Kubernetes manifests, replacing the deprecated APIs with the stable ones.</p><h2 id="how-does-it-work">How does it work?</h2><p>You simply point the script at your manifest directory, and if it finds any of the deprecated APIs, it’ll replace them inline with the stable versions.</p><p>Here’s an example:</p><pre><code>root@cn1:~# /tmp/transmogrify_for_k8s_1.16.sh /tmp/ansible.bbGF94temp/prometheus/templates/
Deprecated API found in [/tmp/ansible.bbGF94temp/prometheus/templates/kube-state-metrics-deployment.yaml].. Transmogrifying...
Deprecated API found in [/tmp/ansible.bbGF94temp/prometheus/templates/alertmanager-deployment.yaml].. Transmogrifying...
Deprecated API found in [/tmp/ansible.bbGF94temp/prometheus/templates/node-exporter-daemonset.yaml].. Transmogrifying...
Deprecated API found in [/tmp/ansible.bbGF94temp/prometheus/templates/server-deployment.yaml].. Transmogrifying...
root@cn1:~#
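# (Illustration only - not part of the original output.) The underlying idea can
# be sketched with sed, assuming Deployment/DaemonSet/StatefulSet manifests,
# whose deprecated apiVersions all map to apps/v1:
#
#   sed -i -e 's|apiVersion: extensions/v1beta1|apiVersion: apps/v1|' \
#          -e 's|apiVersion: apps/v1beta1|apiVersion: apps/v1|' \
#          -e 's|apiVersion: apps/v1beta2|apiVersion: apps/v1|' server-deployment.yaml
#
# Note that Ingress is the exception (extensions/v1beta1 moved to
# networking.k8s.io/v1beta1), so a blind global replace isn't safe.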
</code></pre><p>Running the script a second time will result in no output, since there are no longer any deprecated APIs:</p><pre><code>root@cn1:~# /tmp/transmogrify_for_k8s_1.16.sh /tmp/ansible.bbGF94temp/prometheus/templates/
root@cn1:~#
</code></pre><h2 id="here-be-dragons">Here be dragons ?</h2><p>I’ve made a lot of assumptions here - for one thing, the script assumes that each Kubernetes element is in its own YAML file. Weird things might happen if you point the script at a .yaml file combining multiple elements.</p>]]></content:encoded></item><item><title><![CDATA[Geo-restricting Azure services with Front Door]]></title><description><![CDATA[<p>I’ve just finished a design for a client which required restricting various web endpoints to NZ IP addresses only.</p><p>I hadn’t used <a href="https://azure.microsoft.com/en-us/services/frontdoor/">Azure Front Door</a> prior to this project, because frankly it seemed like overkill for a small, NZ-based platform. I figured, “<em>meh, I’ll just whitelist the</em></p>]]></description><link>https://www.funkypenguin.co.nz/blog/geo-restricting-azure-services-with-front-door/</link><guid isPermaLink="false">5fc717b3b27d2a01438d7046</guid><category><![CDATA[note]]></category><category><![CDATA[azure]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Fri, 04 Oct 2019 03:02:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1498536182014-41657c87657b?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1498536182014-41657c87657b?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Geo-restricting Azure services with Front Door"><p>I’ve just finished a design for a client which required restricting various web endpoints to NZ IP addresses only.</p><p>I hadn’t used <a href="https://azure.microsoft.com/en-us/services/frontdoor/">Azure Front Door</a> prior to this project, because frankly it seemed like overkill for a small, NZ-based platform. 
I figured, “<em>meh, I’ll just whitelist the IP ranges using an NSG, and be done with it. How hard can it be?</em>”</p><p>Well…</p><p>It turns out little ol’ NZ has <a href="https://lite.ip2location.com/new-zealand-ip-address-ranges">7,295,124 IP addresses</a>. (<em>That’s just under <a href="http://archive.stats.govt.nz/tools_and_services/population_clock.aspx">double our population</a>!</em>) To whitelist these IPs, I’d need <a href="https://www.nirsoft.net/countryip/nz_date.html">221 individual IP entries</a>! (<em>point of geekiness - the first range assigned to NZ was 31 years ago, on 10/08/88, to Massey University</em>)</p><p>So, the IP whitelist option was out.</p><p>But Azure Front Door was a viable approach. It’s got an attractive, minimal configuration (<em>unlike the beast that is Application Gateway v1</em>), and it supports fancy WAF rules, as well as <strong>custom</strong> rules, for which geo-location of the remote source is a configurable filter:</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/Edit_custom_rule_-_Microsoft_Azure_2019-10-03_22-19-46.png" class="kg-image" alt="Geo-restricting Azure services with Front Door"></figure><p>So we setup an Azure Front Door, with a dirty ol’ application gateway as its backend. Then in the NSG protecting the application gateway (<em>applied at the subnet level</em>), we whitelisted the IP ranges which Front Door traffic would be <a href="https://docs.microsoft.com/en-us/azure/frontdoor/front-door-faq#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door">expected to ingress from</a>, and locked all you unwashed barbarians outside the gates.</p><p>This all went mostly according to plan, but here are some gotchas to save you time:</p><h2 id="gotchas-to-avoid">Gotchas to avoid</h2><h3 id="avoid-using-frontdoor-managed-certificates">Avoid using Frontdoor-managed certificates</h3><p>Front Door will provide you with a free SSL certificate for your frontends. 
But I’ve had poor experience making it work. Mostly, choosing this option causes the “Updating” process to take 2+ hours.</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/Update_custom_domain_-_Microsoft_Azure_2019-10-03_22-16-07.png" class="kg-image" alt="Geo-restricting Azure services with Front Door"></figure><p>Instead, the way to go is to put your certificates into KeyVault (<em>you should be doing this already</em>), and then to follow the following, super-user-unfriendly process:</p><ol><li>Launch a cloud shell from the Azure portal, select PowerShell (<em>of course, you weren’t using it by default, were you?</em>), and paste in the following incantation: <code>New-AzureRmADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037"</code> (<em>these are the instructions directly from Azure! They couldn’t make this into a button?</em>)</li></ol><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/Update_custom_domain_-_Microsoft_Azure_2019-10-03_22-27-48.png" class="kg-image" alt="Geo-restricting Azure services with Front Door"></figure><ol><li>Now that the service principal has been added to your AADC, create a new access policy, and grant it the <code>secret.get</code> permission in Key Vault.</li></ol><h3 id="avoid-hard-coding-the-backend-host-header">Avoid hard-coding the backend host header</h3><p>When you setup your backend hosts, you’re prompted to choose a backend host header. If you <em>set</em> this value, then the “Host” field on every forwarded request will be rewritten to what you chose. 
If you’re routing more than one domain name to the backend, then this static value will break the ability of the backend host to determine what content to send, based on the request header.</p><p>Rather, leave the backend host header blank, so that Front Door will just pass through the Host header from the original request:</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/Update_backend_-_Microsoft_Azure_2019-10-03_22-32-13.png" class="kg-image" alt="Geo-restricting Azure services with Front Door"></figure><h3 id="avoid-omitting-the-host-header-with-curl-when-you-test">Avoid omitting the host header with curl when you test</h3><p>I baffled myself for a while after implementing the above, because I was running <code>curl &lt;hostname of frontdoor-protected-website&gt;</code>, and receiving a 404 error from my backend in response. Turns out, you have to submit a host header of some sort, so <code>curl -H 'Host: myawesomesite.com' &lt;hostname of frontdoor-protected-website&gt;</code> did the trick.</p>]]></content:encoded></item><item><title><![CDATA[Enabling DKIM on Office365 is easy (as bathing a cat)]]></title><description><![CDATA[<h2 id="how-hard-can-it-be">How hard can it be?</h2><p><strong>Super</strong> hard, it turns out. 
But I hope this post will make it easier for <strong>you</strong>.</p><p>In order to produce some documentation for a client on setting up DKIM under Office365, I undertook to migrate a test domain of mine to Office365, and set it</p>]]></description><link>https://www.funkypenguin.co.nz/blog/enabling-dkim-on-office365-easy-as-bathing-a-cat/</link><guid isPermaLink="false">5fc717b3b27d2a01438d7045</guid><category><![CDATA[note]]></category><category><![CDATA[office365]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Tue, 02 Jul 2019 14:42:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1531425300797-d5dc8b021c84?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<h2 id="how-hard-can-it-be">How hard can it be?</h2><img src="https://images.unsplash.com/photo-1531425300797-d5dc8b021c84?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Enabling DKIM on Office365 is easy (as bathing a cat)"><p><strong>Super</strong> hard, it turns out. But I hope this post will make it easier for <strong>you</strong>.</p><p>In order to produce some documentation for a client on setting up DKIM under Office365, I undertook to migrate a test domain of mine to Office365, and set it up. This way, I’d produce better documentation, having had personal, hands-on experience (<em>and screenshots</em>). How hard could it be?</p><h2 id="friends-don-t-let-friends-use-non-working-email-addresses">Friends don’t let friends use non-working email addresses</h2><p>First, I made a noob mistake, and bought my Office365 subscription with the email address I <strong>intended</strong> to use with it (<em>i.e., a currently non-working email address</em>). After my purchase, when I attempted to sign in, I received a mysterious error and was advised to “<em>try again later</em>”. 
Of course, not having a working email address, I wasn’t able to do an “account recovery” or reset my password in any way.</p><p>Fortunately, Safari had saved my (<em>randomly generated</em>) password to iCloud Keychain, and I was able to recover it and log in the next day.</p><h2 id="cname-schenanigans">CNAME shenanigans</h2><p>Second, to use DKIM, you need some DNS records added to your domain. Unlike SPF, which is <a href="https://docs.microsoft.com/en-us/office365/securitycompliance/set-up-spf-in-office-365-to-help-prevent-spoofing">relatively easy</a> to set up on Office365, <a href="https://docs.microsoft.com/en-us/office365/securitycompliance/use-dkim-to-validate-outbound-email">DKIM requires some mental gymnastics</a> to identify which records to add.</p><p>Here are the CNAMEs I had to add to protect elpenguino.net:</p><pre><code>selector1._domainkey --&gt; selector1-elpenguino-net._domainkey.elpenguinonet.onmicrosoft.com
selector2._domainkey --&gt; selector2-elpenguino-net._domainkey.elpenguinonet.onmicrosoft.com
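; Generic pattern, paraphrased from Microsoft's DKIM docs (the &lt;placeholders&gt; are mine):
;   selector1._domainkey.&lt;customDomain&gt; --&gt; selector1-&lt;domainGUID&gt;._domainkey.&lt;initialDomain&gt;
;   selector2._domainkey.&lt;customDomain&gt; --&gt; selector2-&lt;domainGUID&gt;._domainkey.&lt;initialDomain&gt;
; where &lt;domainGUID&gt; is typically your domain with the dots swapped for dashes,
; and &lt;initialDomain&gt; is your tenant's original onmicrosoft.com domain.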
</code></pre><p>Part of the CNAME destination is my <em>initial domain</em>, and the other is the <em>domainGUID</em>. Neither matches my <strong>actual</strong> domain.</p><h2 id="powershell-much">Powershell, much?</h2><p>So after I figured out the magic DNS records needed, I thought I’d be able to turn on DKIM signing, and get cracking. No. Turns out that for reasons unknown (<em>but apparently <a href="https://www.oxcrag.net/2018/08/31/fixing-no-dkim-keys-saved-for-this-domain-in-eop-and-office365/">rather common</a></em>), to enable DKIM signing for my domain, I needed to break out some PowerShell.</p><p><a href="https://brew.sh">Homebrew</a> saved me, and with a quick <code>brew cask install powershell</code>, I had a PowerShell CLI on my MacBook.</p><p>Assuming you’re in a similar situation, you’ll want to run the following commands via PowerShell:</p><ol><li>Prepare to login: <code>$UserCredential = Get-Credential</code></li><li>Actually login: <code>$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection</code></li><li>Import the Exchange cmdlets into your session: <code>Import-PSSession $Session -DisableNameChecking</code></li><li>Enable DKIM signing for yourdomain.com: <code>New-DkimSigningConfig -DomainName "yourdomain.com" -Enabled $true</code></li></ol><p>Here’s my error-filled beginner’s attempt to drive PowerShell:</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/funkypenguin__usrlocalmicrosoftpowershell6pwsh_exit__usrlocalmicrosoftpowershell6pwsh__11048_2019-07-02_22-22-38.png" class="kg-image" alt="Enabling DKIM on Office365 is easy (as bathing a cat)"></figure><p>So, <strong>finally</strong> I can navigate to my Office365 Exchange Admin settings, and enable DKIM signing.</p><p>Best of luck to you, fair 
adventurer!</p>]]></content:encoded></item><item><title><![CDATA[Docker RBAC using Portainer]]></title><description><![CDATA[<p>I was recently invited by the <a href="https://www.portainer.io">Portainer</a> team to speak at the <a href="https://www.meetup.com/Docker-Auckland/">Docker Auckland meetup,</a> about the new <a href="https://www.portainer.io/products-services/portainer-extension-software/role-based-access-control/?cl">RBAC extension</a> for Portainer.</p><p>RBAC is a standard feature of the super-expensive, super-enterprisey <a href="https://success.docker.com/article/rbac-example-overview">Docker Enterprise Edition</a>, but if you want more than admin-only access to your Docker CE cluster, you’re (<em>until</em></p>]]></description><link>https://www.funkypenguin.co.nz/blog/docker-rbac-with-portainer/</link><guid isPermaLink="false">5fc717b3b27d2a01438d7044</guid><category><![CDATA[note]]></category><category><![CDATA[docker]]></category><category><![CDATA[portainer]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Wed, 19 Jun 2019 15:35:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1566773573428-f1ee05187c1c?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1566773573428-f1ee05187c1c?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Docker RBAC using Portainer"><p>I was recently invited by the <a href="https://www.portainer.io">Portainer</a> team to speak at the <a href="https://www.meetup.com/Docker-Auckland/">Docker Auckland meetup,</a> about the new <a href="https://www.portainer.io/products-services/portainer-extension-software/role-based-access-control/?cl">RBAC extension</a> for Portainer.</p><p>RBAC is a standard feature of the super-expensive, super-enterprisey <a 
href="https://success.docker.com/article/rbac-example-overview">Docker Enterprise Edition</a>, but if you want more than admin-only access to your Docker CE cluster, you’re (<em>until now</em>) out of luck.</p><p><strong>However</strong>, if you’re using Portainer to administer your Docker environment (<em>and if not, why not?</em>), you can now get all that Enterprisey goodness, for a whopping $US9.95 per year (<em>per portainer instance, and each portainer instance can manage multiple swarms</em>).</p><p>Since I’ve recently been introducing my kids to Star Wars, I felt it’d be useful to explain to this room-full-of-geeks how RBAC works, in a familiar context, and so I demonstrated RBAC using Star Wars characters (<em>please don’t sue me, Disney!</em>)</p><p><strong>Contents</strong></p><ul><li><a href="#organizational-structure">Organizational structure</a></li><li><a href="#grant-teams-access-to-their-own-endpoints">Grant teams access to their own endpoints</a></li><li><a href="#examine-effective-permissions">Examine effective permissions</a></li><li><a href="#user-experience">User experience</a></li><li><a href="#endpoint-groups">Endpoint Groups</a></li><li><a href="#available-roles">Available Roles</a></li><li><a href="#what-have-we-learned">What have we learned?</a></li><li><a href="#tell-me-more">Tell me more!</a></li></ul><h2 id="organizational-structure">Organizational structure</h2><p>Let’s start with an organizational structure. Assume we have the following teams and endpoints.. 
(<em>Portainer’s term for a docker / swarm instance</em>)</p><ul><li>Jedi Council, whose endpoint is named “Tython”</li><li>Galactic Republic, whose swarm cluster is named “Coruscant”</li><li>Galactic Empire, whose swarm cluster is named “Death Star”</li><li>Rebel Alliance, whose swarm cluster is named “Yavin-4”</li><li>Nerf Herders, whose swarm cluster is named “Tatooine”</li></ul><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/portainer-rbac-org-structure.png" class="kg-image" alt="Docker RBAC using Portainer"></figure><h2 id="grant-teams-access-to-their-own-endpoints">Grant teams access to their own endpoints</h2><p>In general, each team should have administrative access to their own endpoint, so we grant each team the “Endpoint Administrator” role to their endpoint. So far, so good.</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/screencast_2019-06-19_12-54-38.gif" class="kg-image" alt="Docker RBAC using Portainer"></figure><h2 id="examine-effective-permissions">Examine effective permissions</h2><p>You’ll have noticed that some team members overlap. For example, <em>Han Solo</em> is a member of both <strong>Nerf Herders</strong> <em>and</em> <strong>The Rebel Alliance</strong>. Let’s check his effective permissions to see what access he has:</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/screencast_2019-06-19_13-28-04.gif" class="kg-image" alt="Docker RBAC using Portainer"></figure><p>In this case, membership of the 2 teams has granted <em>Han Solo</em> each team’s level of access to each endpoint. Say Leia didn’t quite trust Han-the-scoundrel on the production endpoint, and wanted to restrict his role to read-only activities. We’d achieve this by granting a user-specific role to <strong>han.solo</strong> in the <strong>Yavin-4</strong> endpoint. 
(<em>User-specific roles override team roles</em>)</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/screencast_2019-06-19_13-33-18.gif" class="kg-image" alt="Docker RBAC using Portainer"></figure><h2 id="user-experience">User experience</h2><p>How does this look, to Han? Let’s take a look.. we’ll login as <strong>han.solo</strong>, first attempt to administer the <strong>Tatooine</strong> endpoint (<em>for which Han is an Endpoint Administrator</em>), and then attempt to do the same to the <strong>Yavin-4</strong> endpoint.</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/screencast_2019-06-19_13-42-01.gif" class="kg-image" alt="Docker RBAC using Portainer"></figure><h2 id="endpoint-groups">Endpoint Groups</h2><p>We’ve seen how to restrict access to endpoints per team and per user. It’s also possible to group endpoints together, and to apply permissions at an endpoint-group level:</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/Portainer_2019-06-19_13-49-20.png" class="kg-image" alt="Docker RBAC using Portainer"></figure><p>Once the endpoints are grouped, apply a role to the endpoint group:</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/Portainer_2019-06-19_13-50-01.png" class="kg-image" alt="Docker RBAC using Portainer"></figure><p>In this example, I gave the user <strong>yoda</strong> the <em>Endpoint Administrator</em> role on the <em>heroes</em> endpoint group, which reflects on his effective permissions:</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/Portainer_2019-06-19_13-51-38.png" class="kg-image" alt="Docker RBAC using Portainer"></figure><h2 id="available-roles">Available Roles</h2><p>Portainer made a calculated decision to <strong>not</strong> allow the creation of custom roles. 
You get these 4 roles, and that’s it:</p><ul><li><strong>Endpoint Administrator</strong>: The Endpoint Administrator has complete control over the resources deployed within a given endpoint, but is not able to make any changes to the infrastructure that underpins an endpoint (ie no host management), nor able to make any changes to Portainer internal settings</li><li><strong>Helpdesk</strong>: The Helpdesk role has read-only access over the resources deployed within a given endpoint but is not able to make any changes to any resource, nor open a console to a container, or make changes to a container’s volumes</li><li><strong>Standard User</strong>: The Standard User role has complete control over the resources that user deploys, or if the user is a member of a team, complete control over resources that users of that team deploy</li><li><strong>Read-Only User</strong>: A Read-Only User has read only access over the resources they are entitled to see (resources created by members of their team, and public resources)</li></ul><h2 id="what-have-we-learned">What have we learned?</h2><p>So, what have we learned, young padawan? We’ve learned that to administer one or more clusters with effective user management, we can use Portainer plus the RBAC extension, to provide a “single pane of glass” to our Docker Swarm environments. 
And that Han Solo can’t be trusted with a production endpoint!</p><h2 id="tell-me-more-">Tell me more!</h2><p>Here are some resources:</p><ul><li>My <a href="https://www.funkypenguin.co.nz/tag/portainer/">blog posts</a> tagged with portainer</li><li>My <a href="https://geek-cookbook.funkypenguin.co.nz/recipes/portainer/">geeky recipe</a> re running Portainer on Docker Swarm</li><li>Portainer’s <a href="https://www.portainer.io/blog/">blog</a></li><li>Portainer’s <a href="https://forums.portainer.io">support forums</a></li><li>Portainer’s <a href="https://join.slack.com/t/portainer/shared_invite/enQtNDk3ODQ5MjI2MjI4LWM1OWMzNmUxMTkxZjc1MmU0ZGIwOTllMWI2YzMyNGI2MjY5NmMxMzhkNTRkNGZkYWU3OTQxODUxMWRmZTE5NTM">Slack server</a></li><li>Portainer’s <a href="https://github.com/portainer/portainer">github repo</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Making changes stick requires attention]]></title><description><![CDATA[<p>In <a href="https://dzone.com/articles/the-design-of-engineering-culture">The Design of Engineering Culture - DZone DevOps</a>, a great read on “designing” your engineering culture, these two paragraphs jumped out at me:</p><blockquote>In addition to motivation, change requires attention. 
A fair amount of people’s attention is devoted to doing their work when everything is business as usual.</blockquote>]]></description><link>https://www.funkypenguin.co.nz/blog/making-changes-stick-requires-attention/</link><guid isPermaLink="false">5fc717b3b27d2a01438d7043</guid><category><![CDATA[opinion]]></category><category><![CDATA[culture]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Mon, 13 May 2019 05:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1556761175-5973dc0f32e7?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1556761175-5973dc0f32e7?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Making changes stick requires attention"><p>In <a href="https://dzone.com/articles/the-design-of-engineering-culture">The Design of Engineering Culture - DZone DevOps</a>, a great read on “designing” your engineering culture, these two paragraphs jumped out at me:</p><blockquote>In addition to motivation, change requires attention. A fair amount of people’s attention is devoted to doing their work when everything is business as usual. Any extraordinary circumstances (such as financial stress, workplace conflicts, or outside personal issues) will deplete people’s attention budgets further. Be aware of the different factors outlined in Paloma Medina’s BICEPS model of the core needs and motivations of people in the workplace. If people feel that their needs aren’t being met, that can take their attention and make them less amenable to change. 
For example, if an engineer feels like they aren’t being treated equally (the “E” in BICEPS) by their manager and that is causing significant stress, it will be harder for them to focus on something that feels less important, such as remembering to use a new post-mortem template.</blockquote><blockquote>Company- or department-level stressors — such as restructuring, staff turnover, or organizational financial concerns — will reduce the overall capacity for change. Especially over the long term, bad habits can form, adversarial thinking can become ingrained, and people can become so burnt out that they lose most of their attention or capacity to change at an individual level. This sort of situation makes meaningful change more necessary but also more difficult to enact. In exceptionally stressful circumstances, you may need to wait until big picture issues have been resolved before trying to make meaningful changes — for example, don’t try to force a new post-mortem facilitation training program on people right after (or during) a big round of layoffs.</blockquote><p>I’ve worked in organizations where the company-level stressors (restructuring and staff turnover) impact staff so badly that despite <strong>wanting</strong> to change, the organisation is unable to muster the <strong>motivation</strong> to make changes stick.</p><p>Of course, half-hearted attempts at change, which fail due to lack of support, simply feed the burn-out and make it harder for the <strong>next</strong> attempted change to succeed.</p><p>The lesson here, I believe, is to be intentional about your company culture when things are going <strong>well</strong>, rather than reactively once you’re in the weeds.</p><blockquote>..Because an organization that doesn’t actively create the culture it wants will end up with a culture anyway. It will be the disorganized total of its employees’ thoughts and experiences–based on everything from how they’re treated to where they sit. 
- <a href="https://www.inc.com/brent-gleeson/why-culture-doesnt-just-beat-strategy-it-must-be-t.html">Brent Gleeson</a></blockquote>]]></content:encoded></item><item><title><![CDATA[The PowerPoint-Killer]]></title><description><![CDATA[<p><a href="https://mcdreeamiemusings.com/new-blog/2019/4/13/gsux1h6bnt8lqjd7w2t2mtvfg81uhx">This post</a> by <a href="https://twitter.com/mcdreeamie">James Thomas</a> struck a chord with me. The post details the loss of the space shuttle Columbia (including 7 humans) in 2003.</p><blockquote>The fact is we know that PowerPoint kills. Most often the only victims are our audience’s inspiration and interest. This, however, is the story</blockquote>]]></description><link>https://www.funkypenguin.co.nz/blog/the-powerpoint-killer/</link><guid isPermaLink="false">5fc717b3b27d2a01438d7042</guid><category><![CDATA[postmortem]]></category><category><![CDATA[writing]]></category><category><![CDATA[opinion]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Sat, 11 May 2019 05:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1571350062069-d9b582293bc4?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1571350062069-d9b582293bc4?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=2000&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="The PowerPoint-Killer"><p><a href="https://mcdreeamiemusings.com/new-blog/2019/4/13/gsux1h6bnt8lqjd7w2t2mtvfg81uhx">This post</a> by <a href="https://twitter.com/mcdreeamie">James Thomas</a> struck a chord with me. The post details the loss of the space shuttle Columbia (including 7 humans) in 2003.</p><blockquote>The fact is we know that PowerPoint kills. Most often the only victims are our audience’s inspiration and interest. 
This, however, is the story of a PowerPoint slide that actually helped kill seven people.</blockquote><ul><li><a href="#what-happened">What happened</a></li><li><a href="#how-was-the-decision-made">How was the decision made?</a></li><li><a href="#how-was-this-critical-data-so-badly-mis-interpreted">How was this critical data so badly mis-interpreted?</a></li><li><a href="#the-powerpoint-killer-a-simple-document">The PowerPoint-killer: a simple document</a></li><li><a href="#real-world-example-of-the-triumph-of-words-vs-slides">Real-world example of the triumph of words vs slides</a></li></ul><h2 id="what-happened">What happened</h2><p>The shuttle suffered accidental damage on launch, but made it safely into space. The subsequent tragic loss resulted from the decision to attempt re-entry. The re-entry decision was significantly influenced by the presentation of technical data using PowerPoint, which led to misinterpretation of the risks…</p><blockquote>At eighty-two seconds into the launch a piece of spray on foam insulation (SOFI) fell from one of the ramps that attached the shuttle to its external fuel tank. As the crew rose at 28,968 kilometres per hour the piece of foam collided with one of the tiles on the outer edge of the shuttle’s left wing… It was impossible to tell from Earth how much damage this foam, falling nine times faster than a fired bullet, would have caused when it collided with the wing… There were a number of options.  The astronauts could perform a spacewalk and visually inspect the hull.  NASA could launch another Space Shuttle to pick the crew up.  Or they could risk re-entry.<br></blockquote><h2 id="how-was-the-decision-made">How was the decision made?</h2><blockquote>NASA officials sat down with Boeing Corporation engineers who took them through three reports; a total of 28 slides.. 
The salient point was whilst there was data showing that the tiles on the shuttle wing could tolerate being hit by the foam this was based on test conditions using foam more than 600 times smaller than that that had struck Columbia.</blockquote><p>Despite the data showing that the “production fault” was 600x more significant than any previous test “in dev”, NASA felt confident that the data showed there was not enough damage to put the crew’s lives in danger, and went ahead with re-entry.</p><p>On re-entry, the wing overheated, the shuttle disintegrated and was lost, along with the lives of all 7 crew.</p><h2 id="how-was-this-critical-data-so-badly-mis-interpreted">How was this critical data so badly misinterpreted?</h2><blockquote>Edward Tufte, a Professor at Yale University and expert in communication reviewed the slideshow the Boeing engineers had given NASA, in particular the above slide. His findings were tragically profound.</blockquote><p>Read the <a href="https://mcdreeamiemusings.com/new-blog/2019/4/13/gsux1h6bnt8lqjd7w2t2mtvfg81uhx">entire post</a>, or better yet, read <a href="https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001yB">Tufte’s full report</a>. In short, the way facts are presented via PowerPoint (spacing, font sizing, indentation, titles, etc.), and our “<em>PowerPoint fatigue</em>”, lead us to misleading or lazy conclusions. In this case, the (poor) formatting and presentation of the data led NASA to conclude that the Boeing engineers’ data showed minimal risk on re-entry.</p><h2 id="the-powerpoint-killer-a-simple-document">The PowerPoint-killer: a simple document</h2><p>Jeff Bezos enforces a “<a href="https://www.inc.com/carmine-gallo/jeff-bezos-bans-powerpoint-in-meetings-his-replacement-is-brilliant.html">no PowerPoint, narrative memos only</a>” rule for Amazon execs (<em>he makes attendees read memos at the start of the meeting</em>).
</p><p>Basecamp has a similar strategy - new ideas must be written down as “<a href="https://github.com/basecamp/handbook/blob/master/how-we-work.md#pitches">pitches</a>”, and given time to percolate.</p><p>I enjoy writing (<em>obviously, you’re reading my blog!</em>), and the act of writing and editing helps me to clarify my own thoughts and arguments far more effectively than creating a handful of slides with bullet points would.</p><h2 id="real-world-example-of-the-triumph-of-words-vs-slides">Real-world example of the triumph of words vs slides</h2><p>I was recently asked to present technical recommendations to a client on how to reduce their AWS bill while improving their database I/O performance. The client specifically prioritised content over presentation. I spent a few days preparing / “percolating” two carefully-structured, technically sound papers detailing my recommendations, supported by calculations in a simple spreadsheet. I delivered the documents to the client a day ahead of our presentation / Q&amp;A session (via video).</p><p>All of the Q&amp;A attendees read my documents in preparation for the presentation - we spent about <strong>30 seconds</strong> introducing the topic, and jumped directly into a technically deep discussion about the proposed solution, backed by the facts.</p><p>In summary, pre-circulating carefully prepared (and internally reviewed) “narrative” technical documents was universally praised by attendees as being “<em>easy for non-technical audience to understand</em>”, and as delivering lasting value (<em>since the documents now form the basis of a scope of work</em>).</p>]]></content:encoded></item><item><title><![CDATA[Signing git commits from OSX]]></title><description><![CDATA[<p>When cleaning up issues/PRs in <a href="https://github.com/funkypenguin/geek-cookbook">Funky Penguin’s Geek’s Cookbook repository</a> today, I noticed that <a
href="https://github.com/funkypenguin/geek-cookbook/pull/46/commits">PRs committed from the GitHub website included verified commits</a>, but my own commits (from my laptop) were not verified.</p><p>Determined to correct this, I worked through GitHub’s documentation on <a href="https://help.github.com/en/articles/managing-commit-signature-verification">commit signature verification</a>.</p>]]></description><link>https://www.funkypenguin.co.nz/blog/signing-commits-from-osx/</link><guid isPermaLink="false">5fc717b3b27d2a01438d703f</guid><category><![CDATA[git]]></category><category><![CDATA[gpg]]></category><category><![CDATA[note]]></category><dc:creator><![CDATA[David Young]]></dc:creator><pubDate>Tue, 23 Apr 2019 15:36:00 GMT</pubDate><media:content url="https://static.funkypenguin.co.nz/2020/12/verified-commit-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://static.funkypenguin.co.nz/2020/12/verified-commit-1.png" alt="Signing git commits from OSX"><p>When cleaning up issues/PRs in <a href="https://github.com/funkypenguin/geek-cookbook">Funky Penguin’s Geek’s Cookbook repository</a> today, I noticed that <a href="https://github.com/funkypenguin/geek-cookbook/pull/46/commits">PRs committed from the GitHub website included verified commits</a>, but my own commits (from my laptop) were not verified.</p><p>Determined to correct this, I worked through GitHub’s documentation on <a href="https://help.github.com/en/articles/managing-commit-signature-verification">commit signature verification</a>.</p><p>Here’s the basic path I followed:</p><p>I installed GPG:</p><pre><code>brew install gpg
</code></pre><p>I generated myself a key, using defaults and GitHub’s recommendation of 4096 bits for my keysize:</p><pre><code>[funkypenguin:~] 1 % gpg --full-generate-key
gpg (GnuPG) 2.2.15; Copyright (C) 2019 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: directory '/Users/funkypenguin/.gnupg' created
gpg: keybox '/Users/funkypenguin/.gnupg/pubring.kbx' created
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection?
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      &lt;n&gt;  = key expires in n days
      &lt;n&gt;w = key expires in n weeks
      &lt;n&gt;m = key expires in n months
      &lt;n&gt;y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: David Young
Email address: davidy@funkypenguin.co.nz
Comment: https://www.funkypenguin.co.nz
You selected this USER-ID:
    "David Young (https://www.funkypenguin.co.nz) &lt;davidy@funkypenguin.co.nz&gt;"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /Users/funkypenguin/.gnupg/trustdb.gpg: trustdb created
gpg: key 525EF417604A0541 marked as ultimately trusted
gpg: directory '/Users/funkypenguin/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/Users/funkypenguin/.gnupg/openpgp-revocs.d/3198CBB012FC221DD99BCA3F525EF417604A0541.rev'
public and secret key created and signed.

pub   rsa4096 2019-04-22 [SC]
      3198CBB012FC221DD99BCA3F525EF417604A0541
uid                      David Young (https://www.funkypenguin.co.nz) &lt;davidy@funkypenguin.co.nz&gt;
sub   rsa4096 2019-04-22 [E]

[funkypenguin:~] %
</code></pre><p>I listed my keys to find my key ID (in the example below, the key ID is <code>525EF417604A0541</code>):</p><pre><code>[funkypenguin:~] 130 % gpg --list-secret-keys --keyid-format LONG
gpg: WARNING: server 'gpg-agent' is older than us (2.2.10 &lt; 2.2.15)
gpg: Note: Outdated servers may lack important security fixes.
gpg: Note: Use the command "gpgconf --kill all" to restart them.
/Users/funkypenguin/.gnupg/pubring.kbx
--------------------------------------
sec   rsa4096/525EF417604A0541 2019-04-22 [SC]
      3198CBB012FC221DD99BCA3F525EF417604A0541
uid                 [ultimate] David Young (https://www.funkypenguin.co.nz) &lt;davidy@funkypenguin.co.nz&gt;
ssb   rsa4096/1CC86B12BD8AEEE6 2019-04-22 [E]

[funkypenguin:~] %
</code></pre><p>I exported the key, ASCII-armoured:</p><pre><code class="language-bash">[funkypenguin:~] % gpg --armour --export 525EF417604A0541
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFy+OLsBEADNKlxkp3tLddEK02BvHeqfoo8XxAgB87AM5hpvLkAbui8fnEgb
XhJ8v6SnhMNHPthsCq3LRVRggtPkIT0LemB2nibJgCqJhgzC5NE+Uu7WvDt5X860
GWZL7oqFnjq23VBAPNQRiDMiVsCwSqWSsyqCaJzL7UJZw8C88j05sEJEHx9anoBU
&lt;snipped for brevity&gt;
upLSmBs9tzGliP8+XYdPSSe8cQNEEv/sdrsj81VdBW9Zen2RSxEIqTLIHvbwljD0
jKj6Pat3l3oQfi1Be5DORer5r8YiVbdeKBm01vMp9pBkE4/VDUZrMsnQ27uc30sL
2m7atbQoAq3tCNJgZ+jjWx+oFG3hslEnVWe9lkhDpeGryVzQHb5pDYvLQTuTXRV/
d97VRAI+A7gQQFBT88Erdio1fmaa6VoytPBJIJEZ/viBrQGn
=rQDl
-----END PGP PUBLIC KEY BLOCK-----
[funkypenguin:~] %
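# Tip (my addition, not part of the original session): on OSX, the
# armoured key can be piped straight to the clipboard, ready to paste
# into GitHub, rather than copying it from the terminal:
gpg --armour --export 525EF417604A0541 | pbcopy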
</code></pre><p>I followed <a href="https://help.github.com/en/articles/adding-a-new-gpg-key-to-your-github-account">GitHub’s instructions</a> (Settings -&gt; SSH/GPG keys) to add the public key to my account.</p><p>I configured my local git globally to use my GPG key, and enabled GPG signing of commits by default:</p><pre><code>git config --global user.signingkey 525EF417604A0541
git config --global commit.gpgsign true
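# Optional check (my addition): after making a signed commit, the
# signature can be inspected locally before pushing:
git log --show-signature -1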
</code></pre><p>Finally, since I don’t want to have to type in my passphrase every time I commit, I installed GPG Suite using brew:</p><pre><code>brew cask install gpg-suite-no-mail
</code></pre><p>The first time I ran a <code>git commit</code>, I was prompted by “Pinentry Mac” for my GPG passphrase. I typed my passphrase, and chose to save it in KeyChain.</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/2020/12/pinentry-mac.png" class="kg-image" alt="Signing git commits from OSX"></figure><p>I pushed a commit, and boom - a verified commit popped out the other end!</p><figure class="kg-card kg-image-card"><img src="https://static.funkypenguin.co.nz/2020/12/verified-commit.png" class="kg-image" alt="Signing git commits from OSX"></figure>]]></content:encoded></item></channel></rss>