An insight into my role as a SISO - part one

My job can vary hugely from day to day, as a lot of what I do is reactive.  Sometimes I get asked what I do, so I've picked a number of activities from my role and collated them into this post.

What's a SISO?

SISO stands for Senior Information Security Officer, is pronounced "sigh so", and essentially means that I'm the senior point of contact for anything security related at my organisation.  The tasks a SISO carries out vary by employer, so other SISOs may do slightly different things.

In my case, I work with staff across the business on anything security related (more on that below).  I also work with other managers and directors to set the security direction for our organisation.  It's important to remember that the security team is there to support the security needs of the business, ensuring the business can achieve its goals in a secure way.  While the SISO sometimes has the power to say "no", we really should be focused on "no, not like that - we should do it this way to be safe".

Remember that, contrary to popular belief, "security is not just an IT problem".  In my organisation the SISO reports to the compliance and operations director, but it's not uncommon for a SISO to report to a Chief Information Officer (CIO), a Chief Information Security Officer (CISO, pronounced "see so"), or, perhaps confusingly, to the head of IT.  Reporting to the head of IT can be a legacy of when information security was considered an IT problem, so the security team got put with IT [1].

Giving advice

As a subject matter expert I get a number of queries - "is it safe to click this link?", "what does asymmetric encryption mean, and do we use it?", "is this a malicious email?" - and part of my job is to give advice to colleagues.  I've given some of the less complex examples here, as I also give advice in other areas, which I mention later.

Giving advice is a really important part of my role, particularly because it means people are thinking about security and how they can help keep the organisation safe.  I try to respond to the more basic advice queries quickly, as I feel it's important for a security team to be approachable.  That's not always possible though if I'm right in the middle of something big (like incident response).

Vulnerability management

When I arrived at my current company I was keen to implement a vulnerability management programme that included regular vulnerability scanning.  This wasn't being done, and short of manually inspecting and scanning every server, switch, and device, I lacked key information I needed to help keep the company safe.  I implemented Rapid7 InsightVM, which I'd used before, and began to scan the environment.  This isn't the only tool I use for vulnerability management, but it is a key component of my tool set.

Running scans isn't enough though; it's important to review the results and determine the correct course of action.  Sometimes this means looking at a finding and accepting or allowing it (e.g. "use of self-signed certificate" [2], which isn't always a concern), but at other times it can involve arranging work to correct a problem.  Fixes ("remediation") can be as simple as installing an update (e.g. monthly Windows Updates) or as complex as changing configuration.  In the worst case, the best option is to remove the vulnerability, and its risk, entirely by decommissioning the affected system.

Vulnerabilities have an associated risk level, and it would be easy to just raise remediation tickets for every vulnerability that's found.  Doing this would mean a lot of unhappy colleagues and a lack of prioritised work.  Instead, I evaluate the vulnerabilities that are found, determine what's most important in our environment, and log tickets in batches.  That way we see a meaningful improvement.
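
To make that triage idea a little more concrete, here's a minimal sketch of the kind of prioritisation I mean.  It's illustrative only: the Vulnerability structure, the "internet-facing first, then by score" ordering, and the batch size are my own hypothetical simplifications, not how InsightVM (or any particular scanner) models findings.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    """A hypothetical, simplified view of a single scan finding."""
    title: str
    cvss_score: float      # 0.0 - 10.0 risk score from the scanner
    asset: str             # host or device the finding was seen on
    internet_facing: bool  # exposure often matters more than raw score alone

def prioritise(findings: list[Vulnerability], batch_size: int = 10) -> list[Vulnerability]:
    """Return the next batch of findings worth raising tickets for.

    Rather than ticketing everything, rank by what matters most in this
    (hypothetical) environment: internet-facing assets first, then by score.
    """
    ranked = sorted(
        findings,
        key=lambda v: (v.internet_facing, v.cvss_score),
        reverse=True,
    )
    return ranked[:batch_size]

# Example: only the most important findings become tickets this cycle.
findings = [
    Vulnerability("Use of self-signed certificate", 4.0, "intranet-01", False),
    Vulnerability("Outdated TLS configuration", 6.5, "web-01", True),
    Vulnerability("Missing Windows updates", 8.1, "file-01", False),
]
for vuln in prioritise(findings, batch_size=2):
    print(f"Raise ticket: {vuln.title} on {vuln.asset}")
```

Working in batches like this keeps the remediation workload manageable while still tackling the highest-risk items first.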

If you want to hear more about vulnerability management, I gave a talk at codeHarbour (video).

Validating security reports

I'm a firm believer that an organisation should accept a security report from anyone, especially if the person giving the report is acting in good faith.  It's not uncommon for someone to find a problem and then report it immediately, without any financial or malicious motivation, and we should make reporting easy.

Once you've received a report it's necessary to validate it to make sure it's actually a problem - no-one wants to waste hours looking into a non-issue.  Validation can be as simple as repeating what the reporter did and observing the results, and a good report should make that easy to do.

Once the issue has been validated, remediation tickets can be logged with the relevant team, advice given, and progress tracked.  It's nice to respond to the person that reported the issue and let them know what's happening - follow company policies when doing so.

Penetration tests

I'm trained to conduct penetration tests, also known as ethical hacking, against both infrastructure (servers, switches, firewalls, etc.) and web applications, although I usually only use these skills to validate a report or perform some basic checks of our systems.  Instead, I create the scope of work for tests of our applications and liaise with qualified third parties to arrange the tests.  By having a test conducted by a third party we're able to offer an independent view to our customers.

While the test is underway I'm in regular contact with the test team to ensure they've managed to access the system, and check how they're getting on.  Generally, any critical or high severity issues get reported before the test completes, and remediation tickets can be logged so these issues can be addressed quickly.

Once the test is over I review the report, log relevant tickets for remediation, and add a recommendation to each where I can provide more organisation-specific guidance than the tester's report.  If I know something isn't a problem (e.g. "MFA is missing" appearing in the report when I know the team simply ran out of configuration time) I won't log that ticket - there's no point burdening the team with work they don't need.  As with all vulnerabilities, I then track the tickets to completion.

Our customers like to see the executive summary from the penetration test, so this gets provided along with a comment from me as the SISO.  Comments can range from "this isn't an issue because of X" to "all these issues have been resolved as of this date".  It's important to provide this extra context; otherwise you'll only be asked for more details once security questionnaires are submitted.

Security questionnaires

I've written about security questionnaires before, and how they form part of a customer's due diligence process.  Currently the team consists of just me, so I spend a lot of time on these questionnaires, often at least four hours each.  I've been building a question & answer bank, though, to help speed up the answering process and, eventually, to allow colleagues to self-serve a lot of the answers.
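
As a rough illustration of the idea (not our actual tooling), an answer bank can be as simple as previously approved answers keyed by question text, with fuzzy matching to suggest the closest existing answer.  The questions and answers below are made up.

```python
import difflib

# Hypothetical bank of previously approved questionnaire answers.
answer_bank = {
    "Do you perform regular vulnerability scanning?":
        "Yes, we run scheduled scans and review the results as part of our "
        "vulnerability management programme.",
    "Are penetration tests carried out by an independent third party?":
        "Yes, tests are scoped internally and performed by a qualified "
        "external provider; executive summaries are available on request.",
}

def suggest_answer(question: str) -> str | None:
    """Return the stored answer for the closest matching known question."""
    matches = difflib.get_close_matches(question, answer_bank.keys(), n=1)
    return answer_bank[matches[0]] if matches else None

# A newly worded question still finds the existing approved answer.
print(suggest_answer("Are penetration tests performed by a third party?"))
```

Even something this simple cuts down on retyping, and a shared bank of approved wording is what makes colleague self-service realistic.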

Next time...

There's a lot to my role, and discussing it all in one post would be too long a read.  In part two I'll be discussing incident response and feature security reviews.


Banner image: "Computer Programmer - colour", from OpenClipart.org.

[1] On the subject of legacies, IT often reported to the chief financial officer (CFO), or similar, because when computers first came in they were grouped with the counting machines the finance teams already had.

[2] Certificates in this case are used to set up secure connections between computers.  The certificate contains a cryptographic key and is often issued by a certificate authority, something that (theoretically) does some due diligence to confirm that "the person / thing I'm giving this certificate to has the right to have it".  A self-signed certificate is one issued by the same system named on it - essentially, the system gave the certificate to itself.
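
For instance, one rough way to spot a self-signed certificate is to check whether its issuer and subject are the same.  This is only a sketch (a heuristic, not how any particular scanner detects it), using the widely available "cryptography" package; the file path is made up.

```python
# A minimal sketch using the third-party "cryptography" package
# (pip install cryptography).  The certificate path below is hypothetical.
from cryptography import x509

def looks_self_signed(pem_bytes: bytes) -> bool:
    """A certificate whose issuer equals its subject was issued to itself."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return cert.issuer == cert.subject

with open("server-cert.pem", "rb") as f:
    print(looks_self_signed(f.read()))
```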