We recently deployed a Veeam Backup & Replication 7 platform, and needed to monitor the ongoing success of the backup/replication jobs. I identified a plugin which does most of what’s required, but it seems to have two current shortcomings: 1. in-progress jobs trigger false warnings, and 2. the date calculation doesn’t always work, which also produces false warnings.
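The corrected check logic is conceptually simple: never apply the age threshold to a job that's still running. A minimal sketch in shell (variable names, states, and the 26-hour threshold are illustrative assumptions, not the plugin's actual code):

```shell
# Hedged sketch: only warn when the job is NOT in progress and its last
# end time exceeds a threshold. Values below are stand-ins for what the
# plugin would pull from Veeam.
LAST_END_EPOCH=$(date -d '2014-01-01 02:00' +%s)   # pretend last job end
NOW_EPOCH=$(date +%s)
WARN_AFTER_HOURS=26
JOB_STATE="Stopped"    # an in-progress job would report e.g. "Working"

AGE_HOURS=$(( (NOW_EPOCH - LAST_END_EPOCH) / 3600 ))

if [ "$JOB_STATE" = "Working" ]; then
    echo "OK: job still in progress, skipping age check"
elif [ "$AGE_HOURS" -gt "$WARN_AFTER_HOURS" ]; then
    echo "WARNING: last run finished ${AGE_HOURS}h ago"
else
    echo "OK: last run finished ${AGE_HOURS}h ago"
fi
```

The key design point is the first branch: an in-progress job short-circuits to OK instead of falling through to the date comparison.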
For April Fools this year, I decided to update my 2011 squid prank, and gain some experience using Vagrant at the same time. I rebuilt the entire environment using a Vagrantfile, which lets anybody check out a few files and reproduce it. See https://github.com/funkypenguin/squidprank for the code.
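The repo linked above has the real thing; purely as an illustration, a single-VM Vagrant environment looks roughly like this (the box name, IP, and provisioning script are assumptions, not what the repo actually uses):

```ruby
# Hypothetical Vagrantfile sketch -- see the GitHub repo for the actual one.
Vagrant.configure("2") do |config|
  config.vm.box = "debian/wheezy64"                      # box name is an assumption
  config.vm.network "private_network", ip: "192.168.33.10"
  # Provision squid via a shell script checked into the repo
  config.vm.provision "shell", path: "provision.sh"
end
```

With that in place, `vagrant up` builds the whole environment from scratch — which is the point of the exercise.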
I use FTPS with vsftpd to update my WordPress plugins. This means the WordPress files don’t need to be writable by the webserver user, which adds another layer of protection and separation. I make FTPS available to localhost only, and force SSL encryption end-to-end.
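A minimal sketch of the relevant vsftpd.conf directives (the directive names are genuine vsftpd options, but the values and certificate path are assumptions for illustration):

```ini
# Listen on localhost only
listen=YES
listen_address=127.0.0.1

# Force TLS on both the control and data channels (explicit FTPS)
ssl_enable=YES
force_local_logins_ssl=YES
force_local_data_ssl=YES
rsa_cert_file=/etc/ssl/private/vsftpd.pem   # path is an assumption

# Allow local users to log in and write their own files
local_enable=YES
write_enable=YES
```

Because WordPress connects over FTPS as the file owner, the webserver user itself never needs write access to the WordPress tree.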
I spent the better part of an hour wondering why my Postfix main.cf config changes didn’t apply on an OS X Mountain Lion server. It turns out that because “OS X Server” no longer exists as such (it’s just Server.app now), the Postfix files specific to the Mail component of the server now live at:
I just jumped in at the end of a conversation on App.net about the latest NSA revelation, the undermining of worldwide encryption standards for the benefit of the self-appointed world-police. @isaiah pointed out that we (geekdom in general) don’t get as excited about civilian casualties in Iraq, or unsanctioned drone strikes.
My Debian Squeeze host started having trouble performing WordPress 3.5 core or plugin updates – in the error logs, I’d see messages like:
While implementing a new network for a customer, we took an existing HP 1810G 48-port switch under management. As per normal, we set up monitoring (Icinga) and graphing (Cacti), but while the switch responded to Cacti’s sysName polls (leading us to believe it was happy), it didn’t return any interface details, so we weren’t able to graph anything.
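You can reproduce the symptom from the Cacti host with snmpwalk — the community string and switch IP below are placeholders for your own values:

```shell
# sysName answers, so basic SNMP is working:
snmpwalk -v2c -c public 192.168.1.2 SNMPv2-MIB::sysName

# But does the interface table (IF-MIB) answer? If this times out or
# returns nothing, Cacti has no interface data to graph:
snmpwalk -v2c -c public 192.168.1.2 IF-MIB::ifDescr
```

A switch that answers the first query but not the second matches exactly what we saw: Cacti thinks the device is fine, yet has nothing to graph.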
It seems a little dumb, and I’m not sure how other distributions deal with it, but if you install Cacti from RPM on CentOS and then browse to your /cacti/ directory via HTTP, you’ll find that it dies with a segmentation fault. You’ll know this is your problem if every other website on your host works, but every time you go to your /cacti/ URL, your browser reports that the site is totally unavailable (as if Apache weren’t even running).
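The browser error alone doesn't distinguish a segfault from a dead Apache; the evidence is in the error log. The snippet below simulates a couple of log lines to show what to look for — on a real CentOS box you'd grep `/var/log/httpd/error_log` (path assumed from a default install) instead of the sample file:

```shell
# Simulated Apache error_log entries, for illustration only:
printf '%s\n' \
  '[notice] child pid 1234 exit signal Segmentation fault (11)' \
  '[notice] child pid 1240 exit signal Segmentation fault (11)' \
  > /tmp/error_log.sample

# Count segfault events -- a non-zero count confirms Apache children are
# crashing on the Cacti requests rather than Apache being down:
grep -c 'Segmentation fault' /tmp/error_log.sample
```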
I was asked to change an incoming NAT translation on a Cisco router for a customer today. However, since this NAT was used to deliver all their internal email, it was always in use, and I got the standard message below when trying to clear it:
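The usual workaround for a static translation that's permanently in use is to remove the static NAT statement, clear the translation, and re-add the updated one — the addresses and ports below are placeholders (syntax from memory; verify against your IOS version, and note this drops the active sessions using the translation):

```
! Remove the existing static NAT line (example addresses are placeholders):
configure terminal
 no ip nat inside source static tcp 10.0.0.25 25 203.0.113.1 25
 exit
! With the static entry gone, the in-use translation can be cleared:
clear ip nat translation *
! Re-add the updated translation:
configure terminal
 ip nat inside source static tcp 10.0.0.25 25 203.0.113.2 25
 exit
```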