Global warming is real, and my wallet isn't getting any heavier. These two reasons alone should be enough for anyone to examine their electricity usage and do something about it. In my case, spending over £100 per month on electricity isn't something I'm a fan of, and the ice caps melting doesn't look great for retirement. With this in mind, I started examining how I could reduce my overall electricity usage and carbon footprint (more on this in a later article). That said, there need to be some rules for the game:
- Reduce my overall electricity consumption
- Don't reduce the overall capability/functionality that I have
- (bonus) reduce the overall noise within the house
Monitoring the power usage of a property in a meaningful way isn't an easy feat, especially given that a "smart" meter only assesses total household usage. To solve this I had a Brultech GEM fitted shortly after moving into the property, wired to monitor each circuit as well as the overall usage. Some configuration later, the device was sending usage data every few seconds, making it significantly easier to track where power is being used.
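With per-circuit samples arriving every few seconds, finding the worst offenders is mostly a matter of averaging and ranking. As a rough illustration (the sample format and circuit names below are made up for the sketch, not the GEM's actual payload):

```python
# Sketch: ranking circuits by average power draw from per-circuit samples.
# The (circuit, watts) tuples are a hypothetical simplification of whatever
# format the monitor is configured to emit.
from collections import defaultdict

def rank_circuits(readings):
    """readings: iterable of (circuit_name, watts) samples."""
    totals = defaultdict(list)
    for circuit, watts in readings:
        totals[circuit].append(watts)
    averages = {c: sum(w) / len(w) for c, w in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

samples = [
    ("study", 220), ("study", 240),
    ("cctv", 80), ("cctv", 85),
    ("kitchen", 40), ("kitchen", 35),
]
print(rank_circuits(samples))
# [('study', 230.0), ('cctv', 82.5), ('kitchen', 37.5)]
```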
Remote Control Plugs
With better monitoring in place it becomes significantly easier to isolate what is consuming power, especially devices you didn't expect. One improvement I found was to use smart plugs configured for specific times of day. A good example is my study (with too many screens and lots of IT equipment), which would run 24/7 and consume a significant amount of power. With a smart plug fitted, everything is only powered when I need it to be, which saves electricity. The same goes for other parts of the house, where devices left in standby consume far more than they should.
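The saving from an off-schedule is simple to estimate: idle watts multiplied by hours off. As a quick sketch (the 60 W draw and 14 hours off are illustrative assumptions, not measured figures):

```python
# Back-of-the-envelope: energy saved by powering a circuit only when needed.
# The inputs are illustrative, not measurements from this house.
def annual_savings_kwh(standby_watts, hours_off_per_day):
    # watts * hours * days, converted from Wh to kWh
    return standby_watts * hours_off_per_day * 365 / 1000

# e.g. a study idling at 60 W that is now switched off 14 hours a day
saved = annual_savings_kwh(60, 14)
print(f"{saved:.0f} kWh/year")  # prints "307 kWh/year"
```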
Another heavy source of electricity usage is the CCTV system, specifically the cameras around the property. While work had previously been done to reduce the power consumption of the DVR itself, the cameras were left untouched for some time. During an upgrade (switching from infrared to colour-at-night) the power usage dropped noticeably overnight, a result of the IR lights no longer being present/in use. The newer cameras not only see colour in the dark, but also provide a nice power saving at the same time.
It's important to note that even without infrared, PoE budgets add up quickly once you have a significant number of cameras. With my current camera configuration I am seeing close to 80 watts of continuous power draw (more at night), even with the onboard camera processing for motion events disabled. As the cameras feature H265 hardware encoders, sadly it isn't a CPU/encoding issue either, so there is very little that can be done here.
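For a rough sense of scale (the per-camera wattage and count below are assumptions chosen to land near the measured total; only the ~80 watt figure comes from my monitoring):

```python
# Rough PoE budget: per-camera draw summed, then annualised.
# 12 cameras at ~6.5 W each is an assumption; the measured total is ~80 W.
camera_watts = [6.5] * 12
total_w = sum(camera_watts)
annual_kwh = total_w * 24 * 365 / 1000  # continuous draw over a year
print(f"{total_w:.0f} W continuous ≈ {annual_kwh:.0f} kWh/year")
# prints "78 W continuous ≈ 683 kWh/year"
```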
Most people in the IT profession have some form of server they experiment with (or break when they are bored). In my case, I had three servers running multiple virtual machines for different purposes. With some tweaking of RAM/storage distribution (and some duct tape) this was dropped to one, with a significant power saving as a result. The trade-off (as there has to be one) is that the CPU usage of the remaining server is now higher, so it's possible the system may run at 100% load during peak periods; in practice that doesn't happen often and shouldn't be an issue.
What would a house be without a large number of network devices consuming power... The answer: not my house. While most devices are now turned off when not required and are cabled in rather than using WiFi, the core infrastructure still had room for improvement.
An easy starting point was disabling unused access points and consolidating those that remained indoors. Outdoors, the access points that cater for guests aren't required at the time of writing (thanks, COVID), and as they are powered via PoE they can be disabled easily. While this is a small saving, it all adds up. Indoors, the two access points used for upstairs/downstairs were merged into one (after some speed/signal testing). The reduction in throughput is barely noticeable and the range hasn't been impacted.
Examining the core network switch was an interesting challenge, as it had previously been changed to a UniFi 48-port PoE model to handle the additional port requirements while delivering power directly. Unfortunately its power efficiency compared to the non-PoE model it replaced does not make for good reading, despite it being the newer device. With the number of servers reduced, the port requirements dropped too, so the previous non-PoE switch was fitted again. Surprisingly this reduced the overall power load, even compared to the PoE switch running with all PoE ports disabled and the 10Gb connectivity removed.
Finally, the firewall was left to address. This device is the elephant in the room: while the 24-port non-PoE and 48-port PoE switches can run with their fans off for the most part, the firewall always has its fans on and creates significant noise (even with the fans switched to their Noctua counterparts, thanks to a bad cooling design). While the firewall can't be switched out yet (its replacement is still in Early Access), the fans could at least be removed to quieten the device (the power saving would be fractional). This only works in a well-ventilated area and with the device cover removed; as you can see from the pictures below, the device wasn't safe with the covers on and no fans running. Thankfully, with 2U of space left empty directly above and the cover removed, the device is running at the same temperature it did previously (with the fans running) and no longer raises errors. On a related note, Ubiquiti really need to address the power usage of their devices, as a firewall at 1% load should not be generating that much heat!
A recent upgrade to the primary server was the addition of a 10Gb NIC and the corresponding SFPs in the core switch. While this proved beneficial for some edge cases, the majority of the time the overall network throughput never came close to maxing out a standard 1Gb/s link. The heat generated by the 10Gb NIC is also substantial, highlighting just how much power it takes to push data at that speed. One removal later (switching back to the multiple built-in 1Gb/s ports the server has) and the power consumption is further reduced (not to mention the heat generated).
One of the benefits of server technology is that for the most part it will run almost indefinitely. My Xeon-based Supermicro server is still running without issue, despite starting to show its age. Replacing the server would cost many thousands, and the current performance meets my requirements, making reducing the overall power consumption a tricky requirement. That said, one area that can be examined is the processors within the system and whether they could be switched out for more efficient versions at a cost that makes financial sense.
Cue the cheap v2 Xeons now available on eBay thanks to the many hardware refreshes within large organisations. While my 2697 v2 Xeons are great for performance, their power efficiency sadly isn't (even when idle). Thankfully, a similar Xeon model was released with a significantly lower TDP while keeping a similar number of cores (10 vs 12). Given these CPUs are now very cheap, replacing them to drop the TDP and lower the idle usage makes sense. Once they arrive the server will be modified and the overall power consumption compared.
With the CPUs changed over to the lower-TDP versions there has been a drop in power usage, although not as much as hoped. While the graph below shows around a 30 watt reduction in consumption (and a steadier usage curve), each processor now runs at turbo for longer due to the workload. This trade-off dates back to the old debate of running a slower CPU at full speed for longer versus a faster CPU that can return to idle much sooner. In this instance there is still a power reduction, so the change has a positive impact, but the constant turbo usage does cap the saving somewhat.
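That debate boils down to energy being power multiplied by time: a lower-power part that stays busy longer can still come out ahead, but not by as much as the TDP gap suggests. As a toy comparison (all wattages here are made-up illustrations, not measurements from my server):

```python
# "Slower CPU for longer" vs "faster CPU back to idle sooner", as
# energy (Wh) = power (W) x time (h). Figures are illustrative only.
def job_energy_wh(active_w, active_hours, idle_w, idle_hours):
    return active_w * active_hours + idle_w * idle_hours

# Lower-TDP part: stays at turbo for the whole 2-hour window
low_tdp = job_energy_wh(active_w=95, active_hours=2.0, idle_w=40, idle_hours=0.0)
# Higher-TDP part: finishes in half the time, idles for the rest
high_tdp = job_energy_wh(active_w=150, active_hours=1.0, idle_w=55, idle_hours=1.0)
print(low_tdp, high_tdp)  # prints "190.0 205.0"
```

With these numbers the lower-TDP part still wins, but only by ~15 Wh rather than the 55 W difference in active draw, which mirrors the smaller-than-hoped reduction above.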
From a mid-day perspective the total usage has dropped by nearly 200 watts (which adds up over the space of a year), while the consumption at night has further dropped thanks to the efficiency of the new cameras. The CPU change has also helped, just not as much as planned. Replacing the firewall with a more efficient model is likely the next step (given how much heat it generates/power it wastes).
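Treating the 200 watt saving as continuous (a slight overstatement, since that's the mid-day figure) and assuming a unit rate of around £0.28/kWh (an assumption; UK tariffs vary), the annual impact sketches out as:

```python
# Annualising the ~200 W reduction. The tariff is an assumed unit rate,
# not a figure from the article.
watts_saved = 200
tariff_gbp_per_kwh = 0.28
kwh_per_year = watts_saved * 24 * 365 / 1000
cost_saved = kwh_per_year * tariff_gbp_per_kwh
print(f"{kwh_per_year:.0f} kWh/year ≈ £{cost_saved:.0f}/year")
# prints "1752 kWh/year ≈ £491/year"
```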