Executive summary (yeah that’s right this guide has an executive summary)
An effective penetration test combines skilled testing with clear communication and reporting that ensure the results are understood and actioned.
Client communication includes:
- An initial kick-off used not only to be sure the client knows what’s happening, but also to be sure the tester understands what will be tested;
- Ongoing communication that keeps the client informed of what’s happening and any high risk issues;
- A final overview with the client to prepare for the report; and
- A debrief that is tailored to the client and sets the client up for incorporating the recommendations.
It is important to communicate effectively not just with clients but also with colleagues. A consultant should ensure everyone on the test is continuously informed about what is happening and should lean on colleagues for ongoing support and help.
Reports are the way the majority of stakeholders learn about the penetration test, and judge the tester, the tester's work, and Volkis as an organisation. When writing reports the consultant should use templates to save time, but be aware that they can slip inaccurate or incorrect information into the report.
The executive summary of the report should be aimed at non-technical readers and should stand alone from the rest of the report. Technical writeups should provide a risk assessment that considers the context of the vulnerability, descriptions that allow a skilled reader to duplicate the tester's work, and recommendations that are accurate and self-contained.
Quality assurance is the safety net that aims to keep every report at a consistently high level of quality and accuracy. It is a two-way process, with writer and assurer collaborating to improve the report. The assurer should be particularly careful with templated text.
Introduction
This guide goes over the penetration testing engagement. It's about everything in a penetration test that isn't the actual testing.
When someone thinks of a good penetration tester, they may think of someone who has deep technical knowledge and experience and who can uncover hidden vulnerabilities. It's extremely important to be a good tester, but that's only half of the puzzle. The other half is being a good consultant.
If the client doesn't know what's wrong, or how to fix it, then nothing gets fixed. It doesn't matter how good the test was or how many awesome vulnerabilities were found. If the client doesn't know and understand what happened, then the test was ultimately worthless.
Being a good consultant means getting the communication, reporting, prioritisation, and recommendations right. A good consultant communicates effectively and structures everything so that the results are actioned.
Client communication
Effective client communication will make the client comfortable with the test, inform the client, and keep everyone on the same page. It will also make your testing better by giving you the information you need during the test.
When interacting with the client you should never present yourself as their adversary. You are here to help their business become more secure, not to point out their failings. This change in mindset will make you more effective at bringing about change in the client's organisation.
Initial kick-off
It is tempting to think of the initial kick-off as being to educate the client. You might think of it as you explaining to the client what you’ll be doing during testing, timeframes, how you work, and when the report will be due. While all of this should form part of the kick-off, if that’s all you’re doing then you’re not getting as much value out of the kick-off as you could be.
When you’re going into a penetration test there’s a lot of learning for you to do. Learning about the systems you’ll be testing, learning about the workflow and data and how it moves, and learning about the organisation itself. In the kick-off you will have the people who already know a lot of the answers you’ll soon be looking for.
So ask questions! For testing of a system (say web app or mobile app testing), some of the topics you could ask about are:
- What does the system do?
- Who uses it?
- What data does the system process?
- What other systems does it interact with?
- What would happen if this system gets hacked?
- How would you go about hacking it?
For systems that require some specific knowledge to understand, such as financial and trading systems, it might be worth asking if you could have a tutorial from a user. Usually that can be arranged and the tutorial tends to be pretty interesting.
If you are testing an organisation (say external or internal penetration testing), some of the topics you could ask about are:
- What are you worried about getting hacked? What are your “crown jewels”?
- What are your business critical systems?
- Where do you feel the weak points are?
The first question tends to give extremely interesting answers. After asking it a few times you may come to realise that your assumptions can be very wrong. Give it a try!
Ongoing communication
The client should be aware of what’s happening during the penetration test.
Our default comms plan is to have contact every couple of days and to call when high risk vulnerabilities are found. This is, though, just the default. It’s best to ask the client how they would like to be informed during the test. A good opportunity for this is during the kick-off.
Similarly, how you alert the client to risks depends on the client, but keep in mind the security implications. Don’t send vulns over email unless you get the client to explicitly say they’re fine with the security risk.
You don’t need to just keep on topic when you’re talking to the client. Feel free to discuss security topics, talk war stories (while avoiding disclosing sensitive information of course), or talk the client through some issues they may face. If you think of something that might help them later on like a tip or fix, let them know! You’re there for a pentest, of course, but a simple tip here and there can greatly increase the value you’re providing.
Ongoing communication isn't just one-way, so you can still get more information from the client that you might need during the test. If you think of information that would be useful to know, ask the client! They're usually happy to tell you. The kick-off isn't the only time you can ask the client questions.
You might also hit issues during the testing. This could include availability issues for the system you are testing. Maybe the site has gone down or maybe it's running slow. Even if you feel confident that your actions aren't the cause, you should still alert the client and provide any information that could be useful. Quite possibly some crazy side effect or domino effect might be occurring. You might, for instance, have accidentally triggered an intensive process with a memory leak half an hour ago and it's just hitting now (true story). Even if it isn't to do with you, often the client appreciates being alerted.
Finishing up testing
The report often takes a day or two to write and another few days to go through QA. That's too long for the client to wait to hear the results, and you don't want to give the client any surprises, do you? Remember how nervous you might be getting the results of a test yourself in an email or document. The client shouldn't have those same nerves.
Instead we can simply give the client a call and give them a rundown of the results. This should be done just as you finish up testing and before you get to reporting in earnest. This will make sure that the client won’t have any surprises when the report comes.
Similarly, it is worth having a call between sending the report and having the debrief. A simple call can make a huge difference on the client side.
Colleague communication
As a consultant you don’t just talk with clients, but you also talk internally with other people within Volkis. They might be project coordinators or other testers assigned to the test.
Communication with other people on the test
Keeping everyone on the same page is often surprisingly hard. You'll get focussed on what you're doing and forget to tell anyone else, and sometimes you might end up with a bad testing split or work that clashes with what someone else is doing.
Make sure you have a call at the beginning of the test to ensure everyone knows what they’re doing. There should be a logical split of work that considers everyone’s skillsets and availabilities.
You have collaborative tools like Slack. Use them! You can throw up ideas, tell people what’s happening, or just joke around and rant if you’d like.
Sometimes it’s tempting to wait until you’ve got the big reveal that you’ve broken into everything. You should try to resist that temptation. If you like, just think of how good it would be to have someone watch you do it!
Communication with colleagues who can help
You're not an expert at everything, that's just a fact. Your colleagues will be highly skilled in areas that you might not be, and occasionally have really random experience. This is an opportunity! If you find some esoteric piece of software, some weird language, or some hard exploit to wrangle, throw up a question on Slack or give someone a call - there might be someone who can help.
We’re here to support each other and help each other. Everyone has their own skills and experience, including you. Remember you’re not alone here!
Reporting
When someone is new to penetration testing, people might joke about how much they will hate reporting. That's not really true - although reporting can be frustrating sometimes, it can also be pretty fun writing up how you owned their system.
What is true is that reporting is important. To understand why reports are so important, you have to understand how they're used by the client. The executive summary is often sent up to higher level management. The findings and recommendations are often split up and distributed through ITSM systems to the admins and devs tasked with fixing the issues. The full report is often distributed to the stakeholders of the system. The report may also be sent to third parties such as customers and other businesses.
All of these people will never meet you. You won't have the opportunity to say, "wait, what I meant there was…" The report is all they have and, for the vast majority of people who become involved in this test, it is what they will use to form an opinion of you and your work, and of Volkis.
Using templates
Templates, including vulnerability databases, sample writeups, default text, and example text, are a great tool for saving time when reporting. They can help make a half hour job a five minute job and make building an enormous report for a detailed penetration test a feasible exercise.
They are also the main source of low quality reporting. At best they can encourage laziness and at worst they can slip in blatantly incorrect information.
At all times you should use the templates as a tool rather than a guide. Templates exist to save time, not to define what you put in the report. Even the default text in the main templates can be changed.
You must also be careful when using templates. The template is obviously not written specifically for your test and it might be making some assumptions or including information that isn’t correct for you. If you use a template you need to read through it and make sure that everything applies to your test.
Nothing in the report template is mandatory. You can change anything, as long as you do what’s best for the client. You can use your judgement when defining what is in the report.
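To make the risk concrete, here's a minimal sketch of how wrong information slips through templated text, using Python's built-in string.Template and a made-up finding. Our real templates are more sophisticated, but the failure mode is the same:

```python
from string import Template

# A hypothetical templated finding. Note the baked-in assumption:
# the default remediation text presumes a Windows host.
FINDING = Template("""\
Vulnerability: $title
Affected host: $host

Recommendation:
Apply the latest vendor patch via Windows Update and reboot the host.
""")

# Filling in the placeholders without re-reading the default text
# produces a fluent but wrong recommendation for a Linux host.
print(FINDING.substitute(
    title="Outdated OpenSSH version",
    host="10.0.0.5 (Ubuntu 22.04)",
))
```

The output reads perfectly naturally, which is exactly why mistakes like this are easy to miss - read every templated sentence as if you had written it yourself.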
The executive summary
The very first section when you open a report is the most important one. It is the first thing most people read and the most widely distributed part of the report. We will go through the executive summary in detail here, ironically in a way that is longer than your average executive summary.
What’s an executive summary anyway?
Well…I mean…it's a summary for executives. The main purpose of the executive summary is to give the reader an overview of what's in the report without them having to spend the time required to read the whole thing. It is similar in function to an abstract in scientific documents.
If you're an executive for a larger company, then there may be a new penetration test finished every day. Add all the other audits, reports, and documents that are produced, and if the executive were expected to read them all they'd spend 48 hours a day reading! The executive summary gets the information across in a concise and simple manner.
The readers of an executive summary are generally not experts (they get the experts to read the whole report). You should assume that the reader is unfamiliar with technical terms and unfamiliar with the systems being tested. If they were in a position to know those things, they'd probably be reading the whole report.
This means that:
- You should avoid technical terms including infosec jargon. If you need to use technical terms to get the point across, always define them.
- You should try to concisely explain the context of the testing and the results. Don't assume that the reader knows that PayFast is a payment portal that stores customer data, because the reader might not know.
As a summary, the executive summary shouldn’t be more than 2 pages. You should also not think of 2 pages as the aim - if you can summarise everything in a few paragraphs that’s good!
The executive summary is not the introduction to the report. Instead, it should be considered separate from the report. This means it should never refer to the body of the report, and the body of the report should never refer to the executive summary. If you say "Refer to page 28" in the executive summary then it's not an executive summary.
What makes a good executive summary?
After reading the executive summary, the reader should have the same feeling about the security of the tested system as you do.
From less experienced (and unfortunately even some more experienced) penetration testers, I would often read an executive summary that went something like this:
The tester identified 4 high risk, 3 medium risk and 2 low risk vulnerabilities. The high risk vulnerabilities included SQL injection and XSS. With these vulnerabilities the tester could gain administrator access over the system.
All of that is factual information, but it’s not a good summary of the testing.
If you had two different systems, one with 5 high risk vulnerabilities and one with 1, which would you say is better? Based just on that information you can say that 5 is bigger than 1, but that 1 might be a design flaw that requires a total reimplementation of the system. Simple numbers like that don't really get across what the results were.
Then there are the basic descriptions of the vulnerabilities found. The paragraph uses acronyms and jargon that the reader is unlikely to understand. Even a professional might say, "OK, there's XSS, but what can you do with that XSS? Is there anything in the database that the SQL injection gives you access to?"
Administrator access over the system is probably bad, but it also doesn’t say much. What can you do with this administrator access? Why should I care if someone has administrator access?
With that kind of executive summary the client knows you probably have a copy paste that says "The tester identified high risk, medium risk and low risk vulnerabilities". It doesn't feel like they're getting more information than what an automated tool could give them.
That executive summary has a lot of factual information that is irrelevant for the reader. What, then, is relevant?
- The context of the testing. What happened? What was tested, what wasn’t? Who are you anyway?
- What should I be worried about with the system?
- What does the system do well? What should I keep doing?
- What should I do as a result of this test?
All this needs to be done in a way that can be understood by a non-technical reader who is unfamiliar with the application. Let's, then, write an executive summary for the imaginary online banking application PayFast, which has vulnerabilities that can be used to gain administrator access over the system.
Let’s first look at the context:
Volkis was engaged by Client to test the PayFast application. This application allows customers to pay their outstanding invoices with Client. It holds personal information, financial information, and information about the services that Client provides customers.
A sentence or two is usually all you need to describe what's being tested, and it provides the context the reader needs to know what we're talking about.
Next you’ll generally describe the testing that was done, but let’s skip to the findings.
The server was found to be fully patched and had only the necessary services exposed to the internet. The application, however, had vulnerabilities that exposes customer information including payment data. Complete control over the application can be obtained using these vulnerabilities, and could be used to perform financial fraud, impact Client operations, and damage the reputation and brand of Client.
Here we briefly go over what is good and what they should keep doing. This is not just to make the reader feel less bad; it provides extremely useful information. Let's say it's a CIO reading this. The CIO now knows that a bunch of processes, including their systems configuration guidelines, their DMZ configuration, their firewall configurations, and load balancing are probably working OK (or at least didn't result in this particular application being insecure). With one sentence we gave incredibly valuable information, didn't we?
We then briefly describe and cover the impact of the application vulnerabilities in a way that is relevant and understandable to non-technical readers. This has far greater relevance than technical jargon and text that assumes the reader already knows the impact of a potential compromise.
The identified vulnerabilities can be fixed with code and configuration changes. Client could incorporate development guidelines and code sign-off to prevent similar vulnerabilities from occurring in the future. Client should consider implementing a web application firewall and multi-factor authentication.
Although it's often difficult to summarise the fixes for what could be a dozen vulnerabilities in the executive summary, you can still provide some awareness of how hard they would be to fix. Do we need code changes or do we need a full rewrite?
You should also touch on the high level recommendations in the executive summary - the people in a position to implement those recommendations will be reading it.
High level sections
The higher level sections of a Volkis report include root cause analysis, effective security practices, and additional recommendations.
This is where we look at the wider organisation, not just the systems we’re testing. There are a bunch of security processes that interact and combine to make what ended up being a secure or insecure system. If we don’t raise the root causes then other systems could contain the exact same issues we found, and the organisation might just make the same mistake over and over again.
The same goes for effective security practices. What choices did they make that were good? Mistakes are often hard to stop - nobody makes them deliberately. The good choices, though, are often repeatable, so we should be encouraging our clients to make those same choices in the future.
The additional recommendations are opportunities to improve security. They aren't aimed at a specific vulnerability, and not having them isn't a vulnerability per se, but you still think they're worth doing. This could be a new security control, a new process, or a recurring action.
It is often difficult for newer penetration testers to know what to put here because they haven't yet had exposure to the business processes that combine to result in a secure or insecure system. If in doubt, ask one of the senior consultants - a simple conversation can lead to some key insights.
Technical vulnerability writeups
These writeups often make up the largest proportion of the report. They are often read by the client contact, but are also quite often split up by the client and given to whoever is in charge of investigating and fixing each vulnerability.
Risk assessment
What is the risk presented by a buffer overflow? What is the impact of a compromise of data? What is the impact of someone getting domain admin? What about TLS vulns?
The answer to all those questions is “it depends”. Domain admin of a domain with no machines or data is not something you’d particularly worry about. A buffer overflow could be point and click or not really exploitable in any useful way. TLS vulnerabilities in an internally hosted informational site might be nothing to worry about. TLS vulnerabilities in a trading application or an app that does large scale transfers might be concerning.
The assignment of risk feels like it should be easy, but it tends to be messy and complicated. You need to consider the context of the vulnerability, the system it is in, and how it interacts with the organisation, its customers, and third parties.
Often you might not have the information to make a good rating, in which case you can ask the client for more info. You should never be directed by the client, though, and should make the appropriate choice based on the information you have.
When writing the risk assessment in the report, there’s a simple rule you should follow: the risk assessment must justify the likelihood and impact ratings you have chosen. You don’t need to go into a tremendous amount of detail, just enough to convince the reader why you made those decisions.
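If it helps, you can think of that justification mechanically. Here's a minimal sketch assuming a simple qualitative likelihood and impact matrix - the levels and mappings below are illustrative, not our official methodology:

```python
# A minimal sketch of a qualitative risk matrix, assuming three levels
# each for likelihood and impact. Real methodologies vary; the point is
# that the overall rating must follow from the two ratings you justified.
LEVELS = ["low", "medium", "high"]

RISK_MATRIX = {
    # (likelihood, impact): overall risk
    ("low", "low"): "low",
    ("low", "medium"): "low",
    ("low", "high"): "medium",
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("high", "medium"): "high",
    ("high", "high"): "high",
}

def overall_risk(likelihood: str, impact: str) -> str:
    if likelihood not in LEVELS or impact not in LEVELS:
        raise ValueError("ratings must be one of: " + ", ".join(LEVELS))
    return RISK_MATRIX[(likelihood, impact)]

# Example: trivially exploitable SQL injection in an internet-facing
# application holding payment data.
print(overall_risk("high", "high"))  # -> high
```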
Description
The description needs to say what the vulnerability is and why it is a vulnerability.
You don't necessarily have to write a step-by-step procedure, but the description should have enough information that someone who is skilled can replicate your work: identify and exploit the vulnerability. You should show how it was identified, what you did to exploit it, and what the outcome was.
A screenshot is worth a thousand words, so get out your snipping tool and show it visually.
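Alongside screenshots, including the exact request or command you used helps a skilled reader replicate the work. Here's a hypothetical example for an SQL injection in the imaginary PayFast application - the endpoint, parameter, and payload are made up for illustration:

```python
import requests

# Hypothetical reproduction of a hypothetical SQL injection finding.
# The endpoint and parameter below are illustrative only.
TARGET = "https://payfast.example/api/invoices"

# The single quote breaks out of the SQL string; a database error in
# the response indicated the injection point during testing.
response = requests.get(TARGET, params={"search": "x' OR '1'='1"}, timeout=10)

print(response.status_code)
print(response.text[:500])  # keep the evidence for the report
```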
Recommendations
The recommendations are what would be used to fix the vulnerability. A few guidelines:
- You shouldn't simply provide a URL and say "The vendor has a fix here". No treasure hunts in the vulnerability writeup - it's annoying, and if someone gets a printout of the vulnerability instead of a digital copy it can be hard to follow. If there is a vendor guide, quote the relevant part in a markdown quotation and provide the URL afterwards (see the example after this list).
- Make sure your recommendations are relevant. It’s super common for testers to copy paste the fix for a Windows system when it’s a Linux system being tested. You should know better.
- Be accurate and specific, but note where you’re making assumptions. If you don’t know what is causing the specific vulnerability it’s still ok to provide general advice.
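For example, for a made-up vendor advisory (the product, guidance, and URL below are hypothetical), the recommendation might read:

The vendor provides remediation guidance for this issue:

> Upgrade ExampleServer to version 2.4.1 or later. If upgrading is not possible, disable the legacy authentication module in server.conf.

Source: https://vendor.example/advisories/2024-001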
Quality assurance
QA is our safety net for making sure reports are of high quality and high accuracy. It can be a nerve-wracking and frustrating time for both the writer and the assurer, but we have to take it seriously and help each other make the best reports we can.
The process isn't one-way; it should be collaborative. The assurer can ask the writer for more information or what they meant at a particular spot. The writer can ask the assurer for more feedback on different spots, or whether there's a particular bit that could have been improved.
Some of the issues raised by the assurer might seem small - not worth thinking about. You might think, "but the reader will know what I mean". Technically incorrect, though, is still incorrect. Remember you're not going to be able to say "you know what I meant!" to the reader.
When you recognise that templated text has been used you might be tempted to skip over it, but you should be aware of the potential for incorrect information to slip through. If you are curious as to what changes have been made, you can always ask the writer!
Writing a report with no findings
You’ve done your testing and it turns out the system is fine. What do you put in the report?
This tends to be one of the hardest reports to write. You need to justify what you did to conclude that there's nothing wrong. Some ideas include:
- What does the system do well?
- What good choices did they make that they should make again?
- What happened when you tried particular exploits?
- What are some of the activities that you performed?