ISC2 (formerly (ISC)²) ran an online-only "spotlight" conference on secure software development this week, on the 8th and 9th of November. As this aligns directly with my work's industry (I work for a software development company), I took the time to attend. ISC2 members could attend for free, and the agenda was fixed, so I didn't need to pick sessions.
Day one's sessions, covered in this post, were:
- Open Source Software: The Good, The Bad, The Ugly
- Secure Software Updates - an Introduction
- Deep Dive into SBOM
Open Source Software: The Good, The Bad, The Ugly
This was a panel session discussing some of the implications of the use of open source software, both generally and as a component in your own products. One of the first things highlighted by the panel was that license compliance can be difficult to track - the risk isn't unpaid licenses (or not buying enough of them), but doing something with the software that its license prohibits. There are a number of open source licenses in use, so it's important to know which license applies to the software and to make sure you obey it.
When it comes to including open source software in your own application it can be hard to manage the attack surface. An example was given of a developer pulling in a component, only to find this brought in numerous dependencies that each introduced their own risks. Managing this can be difficult, particularly as packages become outdated or abandoned by their authors. The panel suggested one option was to only allow including packages (and versions) pulled from your organisation's private package registry. That way, the security team could review and approve a package and have an awareness of the risks it introduces. While this isn't a bad suggestion, for smaller software companies I can see this being almost impossible to implement.
Once a package becomes unmaintained it could be taken over by a malicious group. If that group then releases a new version then this code would be included by anyone who uses the package, quickly impacting lots of applications. This can be avoided using a private package registry.
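In the Python ecosystem, pointing all installs at an internal registry can be as simple as a pip configuration entry. A minimal sketch (the registry hostname here is an invented placeholder, not a real service):

```ini
# pip.conf (e.g. ~/.config/pip/pip.conf on Linux)
# Route all installs through the organisation's private registry,
# where only reviewed packages and versions are published.
[global]
index-url = https://pypi.internal.example.com/simple/
```

With this in place, a package that hasn't been reviewed and mirrored simply isn't installable, which turns "developer pulls in an unknown dependency" into a review request instead.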
Security teams came under the spotlight as much as developers, as the panel asked why security teams weren't actively reviewing the top ten packages in use by the industry. It was suggested that teams should be submitting security fixes regularly to help everyone.
Businesses were encouraged to have a process for reviewing a package and helping developers understand what a bad package looks like (e.g. unmaintained, uses outdated technologies, forces use of weak crypto). Businesses were also reminded that the loss (or compromise) of a critical package used in their products would have a huge impact, so this risk needed to be considered. Meanwhile, security teams were encouraged to state what they test against when reviewing packages, so the criteria were clear and the process transparent.
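The review criteria the panel described could be encoded as a simple checklist. A minimal sketch - the thresholds and metadata fields below are my own illustrative assumptions, not a published standard:

```python
from datetime import date

def review_package(name, last_release, uses_weak_crypto, maintainers):
    """Return a list of red flags for a candidate package.

    Thresholds are illustrative assumptions, not a standard.
    """
    flags = []
    # Possibly unmaintained: no release in over two years (arbitrary cutoff).
    if (date.today() - last_release).days > 730:
        flags.append("possibly unmaintained")
    # Forces use of weak cryptography (e.g. MD5-only integrity checks).
    if uses_weak_crypto:
        flags.append("weak crypto")
    # A single maintainer increases abandonment and takeover risk.
    if maintainers < 2:
        flags.append("single maintainer")
    return flags

# Example: an old, single-maintainer package trips two of the checks.
print(review_package("example-pkg", date(2016, 3, 1), False, 1))
```

Publishing criteria like these (whatever the actual thresholds) is one way to meet the panel's call for transparency about what the security team tests against.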
Secure Software Updates - an Introduction
Our speaker defined the challenge as ensuring updates are issued in a timely fashion and in a verifiable way, then described how security needs to be present across all areas of the development and deployment process, identifying a number of points where security reduces risk to the business and its customers.
From a customer's perspective, enabling automatic updates may not be the best option (particularly in highly regulated industries or life and death situations). Unfortunately, turning off automatic updates then presents a challenge for the development company - how do you make sure customers know there is an update and then apply it? Maintaining mailing lists quickly becomes impractical and outdated, or perhaps the notification was never configured to go to the right person in the first place. I don't recall a definite solution being offered, and I suspect that's because this will vary from company to company.
Staying with the customer for a moment, we were reminded that when testing an update (be that a security fix or a feature update) it's important to test for feature changes / fixes and unexpected behaviours. I suspect, as always, resource challenges will mean that not all customers test all updates and that very few read the release notes, so this will be particularly difficult.
For development companies, when developing an update (or software generally) it's necessary to add integrity and authenticity checks at every step, including at:
- Code repo - only relevant people should have access to the repository
- Build / packaging
- QA / test - tests have to be valid, and the results correctly processed
- Release - ensure only intended and tested code makes it into the release package
- Customer update acquisition - make sure the distribution point (e.g. download server) is hardened, so attackers have less chance to compromise it
- Customer deployment - have the installer check for package integrity (e.g. confirm the cryptographic signature) before installation commences
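The last step above can be sketched in a few lines: refuse to install unless the package's digest matches a value obtained through a trusted channel. This sketch uses a SHA-256 checksum for simplicity; real update schemes use asymmetric signatures, so the trusted value can't be swapped out alongside a tampered package, but the gating logic is the same:

```python
import hashlib

def verify_before_install(package_bytes, expected_sha256):
    """Abort installation unless the package matches its trusted digest."""
    actual = hashlib.sha256(package_bytes).hexdigest()
    if actual != expected_sha256:
        raise ValueError("integrity check failed - aborting install")
    return True  # safe to hand off to the installer

# Simulate an update payload and its digest published via a trusted channel.
pkg = b"pretend-update-payload"
good_digest = hashlib.sha256(pkg).hexdigest()
print(verify_before_install(pkg, good_digest))  # True
```

A tampered payload produces a different digest, so the same call raises before any installation starts.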
We were reminded not to include shared secrets in our software, as once the software is with the customer (or an attacker) it can be reverse engineered or decompiled to gain access to the secret. This in turn could lead to a wider compromise of multiple customers. Configuration management in your software, and by your customers, is also important to ensure changes are only made in an authorised and managed fashion.
For software authors it was suggested that we be transparent with our customers about what we do to protect our software throughout the process, including how developer workstations are protected and what assurances we give in relation to updates. We should also be asking the same questions of our suppliers.
To end, an important point raised by the speaker, that I've paraphrased:
Remember that just because a patch is signed, unless you trust the signer the signature is worthless.
Deep Dive into SBOM
An SBOM, or Software Bill of Materials, details all the software components in a system (for example, third party libraries). This "deep dive" was a panel session that was supposed to discuss SBOMs and their benefits in more detail, but sadly I felt the panel missed the brief, so I didn't take too much away from this session. Partly this was due to the panel focusing on American regulations around SBOMs and the rules surrounding SBOMs in medical software and devices.
Useful takeaways were that being transparent about what is in your software can help your customers see that you do a thorough job and know your systems. It was also commented that if you don't document components before releasing your software, you'll likely never know what was in there. By having this information you can identify problems quickly in the event of a vulnerability being discovered.
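To make that concrete, here's a minimal SBOM in the CycloneDX JSON shape (the top-level fields follow the CycloneDX specification; the component entries are invented for illustration). The point of the lookup at the end is the "identify problems quickly" benefit: when a vulnerability is announced in a library, you query the SBOM rather than hunting through the codebase:

```python
import json

# A minimal CycloneDX-style SBOM; component entries are illustrative only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "requests", "version": "2.31.0"},
        {"type": "library", "name": "urllib3", "version": "2.0.7"},
    ],
}

# Vulnerability announced in urllib3? Check the SBOM, not the source tree.
affected = [c for c in sbom["components"] if c["name"] == "urllib3"]
print(json.dumps(affected))
```

In practice SBOMs are generated by tooling at build time rather than written by hand, but the lookup pattern is the same.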
The panel also commented that security improvements were often overridden by the business, as the business prioritises new features. Something I'm sure every security professional has experienced at least once!
My post on day two is here.
Banner image: Banner imagery from the conference pages.
In my case, it's because the security team doesn't have the resources to do that!