There is a risk to patching, and there is a best practice for reducing that risk.
To patch or not to patch? That may be the question, but the answer is simple: patch.
But let me clarify before you run off to install countless patches. I think we can all agree that software, whether it is a desktop, a server, or an application, is never perfect, despite the programmers' claims. This means that patches have to be developed and released to resolve bugs, security gaps, and the like; it's just inevitable. So what are companies supposed to do with these endless patch releases? Just rush out and install them as quickly as they can?
No! Patches, while intended only for good, can have negative consequences when installed. It is not unheard of for a patch to correct one issue only to create a new one. That's why so many people question the value of patching: "Why patch something that I don't use if it might break something that I am using?"
Are you sufficiently confused yet? Just in case you're not, let's recap: software needs patching to resolve issues, patching runs the risk of creating new issues, and yet it's important to patch. Somewhat of a circular argument, don't you think?
It's a classic catch-22. And there are a lot of companies out there that take the "if it ain't broke, don't fix it" approach to patching: they refuse to install patches just for the sake of installing them as long as their systems are running fine, or they only install patches that affect systems they know to be in use. This is problematic for three reasons:
- By not addressing the issues resolved in the patches, you could potentially be putting your data at risk. And let's face it, data security is crucial for any business that wants to protect its employees and/or customers.
- As companies grow and change over the course of doing business, how do they know that at some future date they won't utilize the areas impacted by the patches that weren't installed? If that day comes and the system is out of date, resolving the issue then instead of now could be a major headache, costing more money in time, labor, delays, and so on.
- If your systems are not at a current patch level and support is needed from a vendor, it might be refused or come with a caveat. Vendors oftentimes require current patch levels before offering support, or won't guarantee their results without them. If a company is operating outside of the vendor's recommended state, the vendor may not be obligated to assist until compliance is restored. Think of it as a warranty: if the system isn't patched, the warranty is void.
So how does a company keep its environments up to date with patches without compromising their integrity? Quite simply, it maintains multiple environments. It is highly recommended that companies have at least two, preferably three, environments.
Now let's briefly explore these three environments. Production is obviously the actual environment that serves the company and is therefore the most important. Stage is a production replica, down to the last DLL and NIC. This is where new code, servers, technologies, and PATCHES are installed and rigorously tested, including full regression testing, before going into production. This gives companies a chance to test things out in a real-world simulation and to find and resolve any issues before introducing them into the main environment. Some bugs may still go unnoticed, but if you are diligent in your testing you can reduce the risk of introducing bugs that would interfere with current processes. And then we have the Development environment. As its name suggests, this is where new ideas are toyed with. Patches, configurations, installations, etc. are all put in here to mature and, when ready, placed in the staging environment for validation before moving into production.
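The promotion flow described above can be sketched as a simple gate check: a patch may only enter an environment after every earlier environment in the pipeline has validated it. This is a minimal illustration; the `promote` function, the `PatchPromotionError` exception, and the environment names are hypothetical, not part of any real patch-management tool.

```python
# Illustrative sketch of a dev -> stage -> production promotion gate.
# All names here are made up for illustration.
PROMOTION_ORDER = ["development", "stage", "production"]


class PatchPromotionError(Exception):
    """Raised when a patch tries to skip an environment in the pipeline."""


def promote(patch_status: dict, target: str) -> dict:
    """Mark `target` as patched, but only if every earlier environment
    in the pipeline has already validated this patch."""
    idx = PROMOTION_ORDER.index(target)
    for env in PROMOTION_ORDER[:idx]:
        if not patch_status.get(env):
            raise PatchPromotionError(
                f"cannot promote to {target}: {env} has not validated this patch"
            )
    updated = dict(patch_status)
    updated[target] = True
    return updated
```

Walking a patch through `promote` in order succeeds, while calling `promote({}, "production")` directly raises, which is exactly the discipline the three-environment setup enforces.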
But what about the cost? Maintaining three environments has to cost a fortune, right? Wrong. While there is some additional cost to maintaining a production-level replica for the stage environment, the added security and the time saved by avoiding outages caused by untested production changes easily make up for it. As for development, since it doesn't have to be an exact replica of production, it can be virtualized at a fraction of the cost of a true environment to simulate the processes and systems in production. And there is an added bonus to virtualization: snapshots! Most virtualization platforms give users the ability to take snapshots of an environment and roll it back at the touch of a button if things go wrong. If a patch crashes an application or renders something unusable, simply roll the environment back to its previous snapshot to restore functionality; then you are free to work the issue toward a resolution, all without compromising your production environment.
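The snapshot-and-rollback idea can be modeled in a few lines. This is a conceptual sketch using an in-memory dictionary to stand in for environment state; the `VirtualEnvironment` class is invented for illustration, and real hypervisors implement the equivalent at the disk and memory level.

```python
import copy


class VirtualEnvironment:
    """Toy model of snapshot/rollback; state is just a dict for illustration."""

    def __init__(self, state=None):
        self.state = state or {}
        self._snapshots = []

    def snapshot(self):
        # Capture a point-in-time copy of the environment's state.
        self._snapshots.append(copy.deepcopy(self.state))

    def rollback(self):
        # Restore the most recent snapshot, discarding later changes.
        if not self._snapshots:
            raise RuntimeError("no snapshot to roll back to")
        self.state = self._snapshots.pop()
```

The workflow mirrors the text: snapshot before patching, apply the patch, and if it breaks something, roll back and investigate at leisure, with production never at risk.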
So in the end we know that there is a need to patch, there is a risk to patching, and that there is a best practice for reducing the risks of patching. The next time someone asks you if something should be patched you can say, “Yes, but only after it has been tested in our other environments first.”