For my last tech article of 2022 it's another flashback to my early years of coding and one of my first attempts at creating shared code to make my life a little easier, plus a lesson in why backups are vitally important.
From memory it was around 2011 when I started working with IBM MQ as part of my role (on the installation/configuration side) and had my first real introduction to Pub/Sub. As a concept it was an eye-opener for me, as the WORM (Write Once Read Many) approach seemed incredibly useful (leaving me wondering why this hadn't been covered throughout my time at University).
After working with the technology for some time I began theorising how I could use MQ/PubSub at home to make communication between my many systems easier; however, licensing around the product made this a non-starter. Thankfully, while discussing my experience with one of my colleagues over a coffee, they suggested I take a look at MQTT, which might meet my requirements.
MQTT seemed like a game changer for me, being small / lightweight / free, yet still having the functionality that I needed. It didn't take long to create test code that would leverage the new capability and provide a way for me to easily send/receive messages between my systems.
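That early test code is long gone, but the round trip it performed would have looked something like the following sketch using the Mosquitto command-line clients (the broker host and topic path are invented for illustration, and the actual commands are commented out since they need a live broker):

```shell
#!/bin/sh
# Hypothetical MQTT round trip via the Mosquitto CLI clients.
# broker.local and the topic below are illustrative, not from the original code.
BROKER="broker.local"
TOPIC="home/lab/status"

# Subscriber: print every message published under the home/lab/ tree
#   mosquitto_sub -h "$BROKER" -t "home/lab/#"

# Publisher: send a single message to one topic
#   mosquitto_pub -h "$BROKER" -t "$TOPIC" -m "hello from $(hostname)"

echo "would publish to $BROKER on topic $TOPIC"
```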
In writing what would become my shared code I realised that not only were there multiple broker implementations to choose from, but that the CLI tools they provide for tasks such as sending messages were different. As I switched between brokers a few times (including to Apache ActiveMQ), I wanted a flexible solution that wouldn't need rewriting each time I changed, so I decided to enhance my code to make things even easier.
So what does this actually translate to? By the time I had finished working on my code it had the ability to detect what MQ broker you were using, what MQ software you had installed locally, what version of the software was present, and determine what commands / topic paths needed to be used for the pub/sub capability. This meant that my shell script could run on full-fat Linux or an incredibly lightweight platform (such as an embedded ARM board) and would automatically use the correct software. Neat!
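The original script is lost, but the core of the detection idea can be sketched like this (the tool names below are today's common CLI clients, chosen as an assumption rather than taken from that code):

```shell
#!/bin/sh
# Hypothetical sketch of broker-CLI auto-detection: probe the PATH for known
# publish clients and report the first one found.
detect_pub_cmd() {
    if command -v mosquitto_pub >/dev/null 2>&1; then
        echo "mosquitto_pub"      # Mosquitto's MQTT publish client
    elif command -v activemq >/dev/null 2>&1; then
        echo "activemq"           # Apache ActiveMQ's CLI tool
    else
        echo "none"               # no known broker CLI on this host
    fi
}

detect_pub_cmd
```

The rest of the script would then map the detected tool to the right flags and topic-path conventions, so callers never needed to know which broker was installed.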
So how did I actually use it? In truth, for a few years I used it heavily for a range of tasks:
- Monitoring of IPMI / UPS data
- Monitoring of system metrics (CPU / RAM / HDD)
- Alerting via different methods (email / push / SMS)
- Centralised logging from embedded devices
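For the system-metrics case, the publishing side was little more than reading a value and firing it at a topic. A hypothetical sketch (the topic layout and broker host are invented, and the actual publish is commented out as it needs a live broker):

```shell
#!/bin/sh
# Hypothetical metrics publisher: read the 1-minute load average and publish
# it to a per-host topic. Falls back to "0.00" where /proc/loadavg is absent.
HOST=$(hostname)
LOAD=$(cut -d' ' -f1 /proc/loadavg 2>/dev/null || echo "0.00")
TOPIC="metrics/$HOST/load1"

# mosquitto_pub -h broker.local -t "$TOPIC" -m "$LOAD"   # needs a live broker
echo "$TOPIC = $LOAD"
```

Run from cron every minute or so, a handful of scripts like this gave each host a live metrics feed that any subscriber could watch.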
Truth be told it actually worked really well, and the flexibility of not caring what MQ broker I used made working on different platforms significantly easier. This was also a pivotal piece of programming for me as from that point forward I would always try to make code both flexible and reusable (something which IMHO should be standard practice).
Do I still use it? Sadly not... There have been a few times where I have wanted to revisit this; however, in 2015 I had a catastrophic storage failure which wiped out not only my primary storage but my backup storage as well. I lost many things when that happened, including the code I had written for this. While it was a good (yet painful) lesson in why off-site backups are vital, losing the code was frustrating nonetheless.