How To Map A Web Application Like A Pro

Before jumping into any battle, you should know the enemy. For a pentester, mapping an application gives you the knowledge to successfully take on an application and find its weaknesses. In this post, I go into the details of how to map an application and more importantly, how to use this information to be more effective in finding vulnerabilities and in general, be more awesome as a pentester.

Unlike modern day attackers, who have almost unlimited time and resources to hack a web application, a pentester has a limited amount of time to perform the job. At the same time, the number of potential vulnerabilities and tests to perform is continually increasing. To be successful as a pentester, you have to be smart about what you choose to test. There is just not enough time during a standard pentest to test for every possible vulnerability. This is where mapping an application is essential. Mapping an application is the process of gathering as much information about the application as possible in order to identify areas where it may be vulnerable. By gathering details about the application’s content, functionality, technology, and security, you can begin to assess the application’s attack surface and more effectively choose which security tests to spend the most time on.

Initial Information Gathering

Unlike actual testing where you are actively attacking the application, most of the work you do during application mapping is passive. It starts with gathering as much initial information about the application as possible from existing sources. In a previous post I spoke about scoping an application and performing an application walkthrough. This is where the application team introduces you to the application, including giving you a demo and providing background documentation. If you haven’t gone through this process, you’d be amazed at how much insight into the application and its potential weaknesses you can discover prior to testing just by sitting down with the application team. Use the meeting to ask how security was implemented, for example how they validate uploaded files or build their database queries. You can walk away from this meeting with a much better picture of the attack surface and where you should focus your efforts. In addition, many teams will provide you with full access to their project portal where you can review architecture designs, network diagrams, business requirements, and security requirements prior to testing. This will further complement your view of possible security weaknesses.

Another important step during initial information gathering is to review prior reports. Instead of starting a pentest with a blank slate, leverage the work of prior testers in order to jumpstart your testing. The previous tester may have spent hours figuring out how to exploit a particular vulnerability, so why spend time knocking your head against the wall doing the same. You can quickly confirm whether the finding is still open by applying the testing steps provided in the previous report. Then you can add value to the specific finding by adding additional examples and then move on to test for other vulnerabilities.


Spidering The Application

Once the initial information gathering is complete, it is time to start interacting with the application as a user and building up a complete map of its structure. This can be accomplished by manually walking through the application with Burp Suite and leveraging one of its tools known as a spider. Spidering is an automated process where the tool requests a page, parses it for links, retrieves those pages, and continues recursively until all links have been visited. As forms are encountered, the spider attempts to submit them using sample data. Spiders are amazing tools that have made the lives of pentesters much easier. However, with the complexities of today’s applications, spiders sometimes have difficulty fully navigating an application. For example, a spider may face data validation issues when submitting forms that require a certain type of data, or struggle to complete multi-step workflows. In other cases a spider may get stuck spinning on a URL that is continually regenerated with random data, or end up logged out for not providing a special token with each request.
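The core loop of a spider can be sketched in a few lines of Python. This is a simplified illustration, not a replacement for Burp’s spider: it only extracts anchor links and ignores forms, JavaScript, and session handling.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collect the href targets of <a> tags from one HTML page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the current page's URL
                    self.links.add(urljoin(self.base_url, value))


def extract_links(base_url, html):
    """Return the set of absolute URLs linked from a page's HTML."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links
```

A real crawl would fetch each page, call extract_links on it, and queue any links not yet visited, taking care to stay within the target’s scope.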

As a result of the difficulties of fully spidering a web application automatically, it is important to manually walk through the entire application yourself. Consider yourself a partner with the spider. The two of you have the job of providing Burp with a complete and detailed picture of the entire application. Your goal is to get Burp as much information as possible so that it can build up a full site map to be used during testing. Navigate as much of the application as you can reach, including submitting forms and triggering functions, and when you are done, set the spider loose to identify any items you might have missed. Just be sure to exclude any dangerous pages or functions that you discover during your walkthrough, or else the spider may wreak havoc on the application.

Uncovering Additional Content

Once you have spidered the application and have a full site map stored within Burp, it is time to enhance the information with additional data. You want to uncover parts of the application that may be hidden from spidering because there are no direct links to the pages. This may include backup files, temporary files, configuration files, test files, or default content. Many third party components used to support the application may contain additional content such as sample files, management interfaces, and content management systems. There are a number of ways to discover this hidden content, one of the best being a brute-force tool such as DirBuster or Burp’s content discovery tool. These tools come preloaded with a large set of common file and directory names that can be enhanced with platform-specific word lists as needed. Just remember to fire these off early in the process because some of these lists can take a long time to process.
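The idea behind these discovery tools is simple: combine a wordlist of common names with likely extensions and request each candidate, noting which ones do not come back as 404. A minimal sketch of the candidate generation (the word and extension lists here are tiny illustrative samples; real tools ship lists with thousands of entries):

```python
from itertools import product

# Illustrative samples only -- DirBuster and similar tools ship far
# larger, platform-specific lists.
WORDS = ["admin", "backup", "test", "config"]
EXTENSIONS = ["", ".php", ".bak", ".old", ".txt"]


def candidate_urls(base):
    """Yield guessable URLs by combining common names and extensions."""
    for word, ext in product(WORDS, EXTENSIONS):
        yield f"{base.rstrip('/')}/{word}{ext}"
```

A discovery run would request each candidate and record anything that returns a status other than 404 (watch out for applications that answer every path with 200).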

While the discovery tool is running, there are other manual steps you can take to reveal hidden content. Google comes in handy here. For instance, you can put “cache:” in front of the site address to retrieve Google’s cached copy of pages that have since been changed or removed. If the application is a third-party product, sometimes you can find a vendor manual online which might reveal additional directories and files. Or the vendor may have a GitHub site, allowing you to review the application source code and user comments in order to find additional features or functionality.

Sometimes you may need to guess or deduce content from what already exists. If files and directories follow a standard naming structure such as /files/employees/paychecks/01012018, you may be able to find other files by guessing additional dates. The same applies to functions. You may have a single URL that takes the function name as a parameter, such as AddUser. It would be worth a shot to try other related functions such as EditUser and DeleteUser.
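Candidate guessing like this is easy to script. The sketch below assumes the MMDDYYYY naming pattern from the example above; adjust the template to whatever pattern the application actually uses:

```python
from datetime import date, timedelta


def guess_date_paths(template, start, days):
    """Generate candidate paths from a date-based naming pattern.

    template -- a format string with a date placeholder,
                e.g. "/files/employees/paychecks/{:%m%d%Y}"
    start    -- the first date to try
    days     -- how many consecutive days to generate
    """
    return [template.format(start + timedelta(days=i)) for i in range(days)]
```

Each generated path would then be requested, flagging any that return content.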

Additional content may be running on other ports. Very often third-party products will have several components running on different ports. You might find an admin panel, a content management system, documentation, or other functionality running on a port different from the main web application. Run an Nmap scan to identify services running on the same server and investigate whether they are related to the application you are testing. They may provide you with a wealth of information.
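Nmap is the right tool for this job, but the underlying check is just a TCP connect, which is worth understanding. A minimal Python sketch (the port list is a small illustrative sample of where secondary web components often live):

```python
import socket

# A few ports where admin panels, CMSs, and docs commonly run;
# a real scan with nmap covers far more.
COMMON_PORTS = [80, 443, 8000, 8080, 8443, 9090]


def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports accepting TCP connections on host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```

Anything this turns up should still be confirmed and fingerprinted with nmap, since a connect scan tells you nothing about the service behind the port.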


Fingerprinting The Application

Now that you have a site map in Burp and have enhanced it with additional information, use this data to start fingerprinting the application. Fingerprinting is the process of identifying the specific components that make up the application and its infrastructure, including client-side frameworks, server-side frameworks, web servers, application servers, database servers, web application firewalls, and so on. By knowing the technology in use down to specific versions of individual components, you can research existing vulnerabilities using databases such as the National Vulnerability Database, or identify end-of-life versions on vendor sites that are no longer supported. This will help you narrow down the attack surface and identify the specific types of tests you’ll be performing.

An obvious place to start is with the web server. Sometimes you’ll get lucky and the web server and specific version will be provided in the Server response header, for example Server: Microsoft-IIS/6.0.
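Whether you pull the headers from Burp or from a script, the interesting ones are easy to isolate. A small sketch that filters identification headers out of any response-header mapping (the header names listed are common examples, not an exhaustive set):

```python
# Headers that frequently leak server-side technology and versions.
# This list is illustrative; add others you encounter in the wild.
ID_HEADERS = ("Server", "X-Powered-By", "X-AspNet-Version")


def banner_from_headers(headers):
    """Return the identification headers present in a response.

    headers -- any mapping of header name to value, e.g. the
               resp.headers object from urllib.request.urlopen.
    """
    return {h: headers[h] for h in ID_HEADERS if h in headers}
```

Feeding this every response in your proxy history quickly surfaces the banners the application is leaking.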

If this header has been modified or removed, there are a number of tools that can fingerprint the web server based on unique characteristics in how it implements the HTTP protocol, such as the ordering of headers or its responses to invalid data. Tools such as httprint and WhatWeb can automate this kind of fingerprinting.

In addition to the web server, try to identify the client-side and server-side frameworks that support the application. Sometimes these can be identified by looking at file extensions (server side) or reviewing code and HTTP responses (client side). For instance, if you see a JSESSIONID cookie, you can be sure you are dealing with Java on the backend. Or if the application requests files with an .aspx extension, the backend is a .NET platform. If you are not sure about a unique-looking cookie, file extension, or directory, google it; chances are it has been used before in another application.
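These heuristics can be captured in a simple lookup table. The mappings below cover a few well-known defaults; applications can rename session cookies, so treat the output as a hint rather than proof:

```python
# Default session cookie names and file extensions mapped to the
# back-end technology that usually issues them. Hints only -- cookie
# names are configurable.
COOKIE_HINTS = {
    "JSESSIONID": "Java (servlet container)",
    "ASP.NET_SessionId": "ASP.NET",
    "PHPSESSID": "PHP",
    "CFID": "ColdFusion",
}
EXTENSION_HINTS = {
    ".jsp": "Java",
    ".aspx": "ASP.NET",
    ".php": "PHP",
    ".do": "Java (Struts)",
}


def guess_backend(cookie_names, path):
    """Return a list of likely back-end technologies for a request."""
    hints = [tech for name, tech in COOKIE_HINTS.items() if name in cookie_names]
    for ext, tech in EXTENSION_HINTS.items():
        if path.lower().endswith(ext):
            hints.append(tech)
    return hints
```

Running this over every request in the site map gives a rough technology profile of the application in seconds.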

Remember, identifying the technology or product is only the start. To identify vulnerabilities, you have to track down version numbers, which can sometimes be hard to find. With client-side frameworks, an obvious place to look for version numbers is inside the source files included by the application. For server-side technology, error messages are often a source of leaked version information. Sort Burp’s proxy history by status code to find 500 server errors and review the responses. I like to use an extension called Logger++, which captures all requests and responses issued by Burp. This lets you see activity going on in parts of Burp where traffic is not otherwise visible, such as active scanning, and is normally a good source of errors that may reveal version information. In other cases, additional content you find, such as sample files, admin panels, and content management systems, can help uncover versions. There are also tools that perform this activity for you. One such tool is wig (WebApp Information Gatherer), a Python tool that identifies a website’s content by comparing hashes of files and extracting version numbers from known files.
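A quick way to mine included source files for versions is a regular expression over their contents and filenames. This pattern is deliberately loose and will produce false positives, so review its matches by hand:

```python
import re

# Matches dotted version numbers like 1.11.3, optionally prefixed with
# "v" or "version", as commonly seen in library banner comments and
# bundle filenames (e.g. jquery-1.11.3.min.js).
VERSION_RE = re.compile(
    r"\b(?:v|version[ :=]*)?(\d+\.\d+(?:\.\d+)?)\b", re.IGNORECASE
)


def find_versions(text):
    """Return all version-like strings found in a blob of text."""
    return VERSION_RE.findall(text)
```

Run it over each JavaScript and CSS file in the site map and note which library each match belongs to.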

Once you have identified technologies and versions, you can use this information to search for vulnerabilities. The site CVE Details allows searches by a number of parameters including vendor and product version. In the example below, a search was conducted for IIS 6.0.
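If you prefer to script the lookup, NVD also exposes a REST API that accepts CPE names. The sketch below only builds the query URL; the CPE 2.3 string format is standard, but verify the exact vendor and product values (assumed here for illustration) against the official CPE dictionary before relying on them:

```python
from urllib.parse import urlencode

# NVD's CVE API, version 2.0. Querying it requires network access and
# is subject to rate limits; this sketch only constructs the URL.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def nvd_query_url(vendor, product, version):
    """Build an NVD API query URL for a specific product version.

    The vendor/product values must match the official CPE dictionary;
    the ones you guess from a banner may need adjusting.
    """
    cpe = f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"
    return f"{NVD_API}?{urlencode({'cpeName': cpe})}"
```

Fetching the resulting URL returns JSON listing the CVEs recorded against that CPE, which you can then cross-reference with Exploit Database.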

The first result looks interesting: a remote code execution vulnerability identified in 2017. Once you have a CVE-classified vulnerability, you can research whether there are active exploits for it. In the next example, a search of Exploit Database reveals two active exploits for the CVE, including one for Metasploit.

Before executing an active exploit such as this against the web server, be sure that it is in your testing scope. Such an attack may interfere with the operation of the web server, may set off alarms within the organization, and may leave behind code that other attackers can use. If you are unsure, talk with the client and receive permission before conducting the attack. At a minimum, you will want to disclose the CVE and available exploit in the report so that the client can understand the seriousness of the issue.

Identifying Entry Points

Entry points are all the locations within a web application where data is submitted to the web server. These are the primary places where the web application may be vulnerable to attack and they should be the focus of your testing. Entry points can be in any number of places within an HTTP request, including query strings, POST parameters, REST parameters, and headers such as Cookie, User-Agent, and Referer. To uncover entry points, you need to interact with the application. The perfect time to do this is during your manual walkthrough of the application. As you walk through the application, Burp records all the HTTP requests and associated parameters in its proxy history. To see the parameters associated with a request, click on a specific HTTP request in Burp Proxy and then click on the “Params” tab under the Request tab. Parameters are organized by URL, Cookie, and Body. If you want a full list of parameters that Burp has identified across the application, go to Burp Target, right click on a domain, and choose Analyze Target. Under the Parameters tab you will see a full list.
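If you want to pull parameter names out of captured requests yourself, the standard library does most of the work. A sketch that collects entry points from a URL, a form-encoded body, and a list of cookie names:

```python
from urllib.parse import urlparse, parse_qs


def entry_points(url, body="", cookie_names=None):
    """Collect parameter names from the query string, POST body, and cookies.

    body is assumed to be form-encoded (key=value&...); JSON or XML
    bodies would need their own parsing.
    """
    params = set(parse_qs(urlparse(url).query).keys())
    params |= set(parse_qs(body).keys())
    params |= set(cookie_names or [])
    return params
```

Applied across an exported proxy history, this yields the deduplicated parameter list you would otherwise copy out of Burp’s Analyze Target view.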

You may want to keep track of all parameters in a spreadsheet so you can track your testing to ensure you provide coverage across all parameters. It is easy enough to copy and paste the parameters from the Params tab into a spreadsheet for each request and then check them off as you test each one and add notes if you find something interesting. Sort by the parameter to remove any duplicates.

At a minimum, take notes on any interesting parameters you see. One way to do this is to highlight requests with colors in Burp and then add a comment. To do this right click on a request in Burp Proxy and choose highlight. Choose colors to represent a scheme such as red for further investigation and green for no issues. To add comments, right click on the request and choose Add Comments. Burp will add your comments in the Comments column for each request in Burp Proxy.

Identify Existing Security Measures

Another important part of mapping an application is identifying existing security measures that are already in place. Knowing what security is built into the application will help you better decide which tests to conduct and how much time to invest in any one test. For example, if you see CSRF tokens being issued, then you can quickly check that they are working as expected and cross CSRF testing off your list without spending a lot of time on the issue. Or if you see that output encoding is being used for reflected user input, you can spend less time on XSS testing and more on other parts of your test. As you walk through the application, look for common signs of defense measures across categories of vulnerabilities such as authentication, authorization, session management, configuration management, etc. Keep a list of what you find and use it to influence how you approach testing the application.
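As an example of a quick defensive check, you can scan captured forms for hidden fields that look like anti-CSRF tokens. The field-name patterns below are guesses based on common framework conventions, not an exhaustive list, and a token’s presence still says nothing about whether it is actually validated server-side:

```python
import re

# Common substrings in anti-CSRF token field names (e.g. csrf_token,
# __RequestVerificationToken, authenticity_token). Heuristic only.
TOKEN_NAMES = re.compile(r"csrf|xsrf|token|authenticity|verification",
                         re.IGNORECASE)

HIDDEN_INPUT = re.compile(
    r'<input[^>]*type=["\']hidden["\'][^>]*name=["\']([^"\']+)["\']',
    re.IGNORECASE,
)


def has_csrf_token(form_html):
    """Heuristically check whether a form contains an anti-CSRF token."""
    return any(TOKEN_NAMES.search(name)
               for name in HIDDEN_INPUT.findall(form_html))
```

Forms flagged as missing a token are candidates for deeper CSRF testing; forms that have one still deserve a quick check that the token is actually enforced.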

Understanding how the application implements security also gives you a general sense of how vulnerable an application might be. If you see examples of a failure to implement the most basic security practices such as complex passwords or server-side validation, then you are probably dealing with an application that is going to have multiple vulnerabilities and a long final report. In this case you’ll want to spread your testing far and wide to cover as much as possible.



Mapping The Attack Surface

The final step of mapping the application is mapping the attack surface. At this point you should have a wealth of knowledge about the application. You have walked through the application, built up a map of pages in Burp, catalogued entry points of data, identified potentially vulnerable components, and recognized the existing security mechanisms that are in place. Now the goal is to take all of that information and use it to map the attack surface, which includes all the areas of the application that may be vulnerable to attack. The attack surface can be determined by looking at existing functionality, parameter types, vulnerable components, and security mechanisms, and judging whether certain types of vulnerabilities are likely to exist in each part of the application.

This is important because it helps you prioritize your testing in order to spend more time on the tests that are most likely to uncover vulnerabilities. For example, if you see an SQL keyword in a parameter and a search function that returns data from a database, it makes sense to spend a sufficient amount of time testing for SQL injection. However, if most pages in an application are static and there is a robust web application firewall standing guard in front of the application returning SQL injection specific warnings, then testing for SQL injection would be a waste of time.

Here are some examples of attack surfaces that should guide your testing:

  • File upload functionality – test for malicious file upload
  • Database – test for SQL injection
  • Reflected data – test for XSS
  • Lack of unique tokens – test for CSRF
  • Lack of session token update – test for session fixation
  • Authentication functionality – test for authentication issues
  • Multiple roles – test for authorization issues
  • Local storage – test for sensitive data
  • Predictable naming patterns – test for insecure direct object references
  • API calls – test for CORS issues
  • LDAP integration – test for LDAP injection
  • XML use – test for XXE and XPath injection
  • URLs submitted to the server – test for server-side request forgery (SSRF)
  • Workflows and business logic – test for parameter and business logic tampering
  • End of life software – test for vulnerable third party components

Mapping an application is an essential part of web application security testing. You may be tempted to jump immediately into testing at the start of a web application security test, because of course, finding vulnerabilities is the exciting and sexy part of being a tester. But if you take your time and invest in understanding and mapping the application properly, the chances are you’ll end up with more findings in your report and help your client be more secure in the long run.

Happy application mapping!

Check out the other articles in the application security testing series:

How To Scope A Web Application Security Test
How To Write An Application Security Finding
How To Create An Awesome Application Security Report