Endpoint Discovery
Identifies active endpoints and services in a web application, helping map the environment and find potential targets.
dirsearch
Repo: https://github.com/maurosoria/dirsearch
Installation & Usage
git clone https://github.com/maurosoria/dirsearch.git
cd dirsearch
python3 -m venv myenv; source myenv/bin/activate
pip3 install -r requirements.txt
python3 dirsearch.py -u <URL> -e <EXTENSIONS>
Prerequisites
apt install seclists dirsearch
Usage
Combined command
domain=domain.com; dirsearch -u https://"${domain}" -r -o ./"${domain}".dirsearch -w /usr/share/seclists/Discovery/Web-Content/directory-list-lowercase-2.3-big.txt,/usr/share/seclists/Discovery/Web-Content/common.txt,</PATH/DICTIONARY.txt> -x 403,400,429,500 --full-url --auth-type=bearer --auth=<eyJh...> --proxy 127.0.0.1:8080
domain=domain.com; dirsearch -u https://"${domain}" -r -o ./"${domain}".dirsearch --log="${domain}".dirsearch.log -w /usr/share/seclists/Discovery/Web-Content/directory-list-lowercase-2.3-big.txt,/usr/share/seclists/Discovery/Web-Content/common.txt -x 403,400,429,500 --full-url --cookie="AW..."
Simple
python3 dirsearch.py -u "https://target"
Specifying extensions
python3 dirsearch.py -e php,html,js -u https://target
Recursive scan
By using the -r | --recursive argument, dirsearch will automatically brute-force inside the directories that it finds.
python3 dirsearch.py -e php,html,js -u https://target -r
You can set the max recursion depth with -R or --recursion-depth:
python3 dirsearch.py -e php,html,js -u https://target -r -R 3

dirb
Repo: http://dirb.sourceforge.net/
Info: DIRB is a Web Content Scanner. It looks for existing (and/or hidden) web objects. It basically works by launching a dictionary-based attack against a web server and analyzing the response.
DIRB comes with a set of preconfigured attack wordlists for easy usage, but you can also use your own custom wordlists. DIRB can sometimes be used as a classic CGI scanner, but remember that it is a content scanner, not a vulnerability scanner.
DIRB's main purpose is to help in professional web application auditing, especially in security-related testing. It covers some holes not covered by classic web vulnerability scanners. DIRB looks for specific web objects that other generic CGI scanners can't look for. It doesn't search for vulnerabilities, nor does it look for web content that may be vulnerable.
Examples
domain=domain.com; dirb https://"${domain}" /dir/wordlist.txt -w -o "${domain}".dirb -l -i
dirb https://domain.com /dir/wordlist.txt -w -o output.txt -l -i
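If the wordlist argument is omitted, dirb falls back to its bundled default list (on Kali this is usually /usr/share/dirb/wordlists/common.txt), so a quick first pass can be as simple as:
domain=domain.com; dirb https://"${domain}" -o "${domain}".dirb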
# Ignore 403 error code
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/big.txt -w -o "${domain}".dirb -l -i -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" -N 403
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/merged.txt -w -o "${domain}".dirb -l -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0"
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/big.txt -w -o "${domain}".dirb -l -i -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
# With Cookies
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/big.txt -w -o "${domain}".dirb -l -i -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" -c "name1=value1; name2=value2" -N 403
wfuzz
Repo: https://github.com/xmendez/wfuzz
Info: Wfuzz has been created to facilitate the task in web application assessments and it is based on a simple concept: it replaces any reference to the FUZZ keyword with the value of a given payload.
Examples
# With recursion
wfuzz -c -R 10 -w /usr/share/wfuzz/wordlist/general/megabeast.txt http://www.domain.com/FUZZ
# Discover content and hide the 400, 403 and 404 HTTP error codes
wfuzz -L -A --hc 404,403,400 --oF domain.com.wfuzz.output -R 10 -w /usr/share/wfuzz/wordlist/general/megabeast.txt -u https://domain.com/FUZZ
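Because FUZZ is substituted wherever it appears, the same approach works for guessing GET parameter names; a sketch assuming the SecLists burp-parameter-names.txt wordlist is installed:
wfuzz -c --hc 404 -w /usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt "https://domain.com/page.php?FUZZ=test"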
Dirhunt
Repository: https://github.com/Nekmo/dirhunt
Info: Find web directories without brute force. Dirhunt is a web crawler optimized to search and analyze directories. This tool can find interesting things if the server has the "index of" mode enabled. Dirhunt is also useful if directory listing is not enabled. It detects directories with false 404 errors, directories where an empty index file has been created to hide things, and much more.
Installation
python -m venv venv; source venv/bin/activate
pip3 install dirhunt
Examples
dirhunt http://website.com/
dirhunt http://website.com/ > directories.txt
With credentials and reporting
dirhunt https://domain.com --to-file dirhunt.json -h "Cookie: OAUTH2_STATE=eyJyZWRpcmVj..." --progress-enabled
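Dirhunt can also filter out uninteresting responses by status flags; a sketch assuming the --exclude-flags option available in recent versions:
dirhunt http://website.com/ --exclude-flags 300-500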
XSStrike
Repository: https://github.com/s0md3v/XSStrike/
Usage: https://github.com/s0md3v/XSStrike/wiki/Usage
Installation
git clone https://github.com/s0md3v/XSStrike && cd XSStrike && python -m venv venv && source venv/bin/activate && pip install -r requirements.txt
Crawling
Start crawling from the target webpage for targets and test them.
python xsstrike.py -u "http://example.com/page.php" --crawl
Crawling depth
Option: -l or --level | Default: 2. This option lets you specify the depth of crawling.
python xsstrike.py -u "http://example.com/page.php" --crawl -l 3
Authenticated and reporting
python xsstrike.py -u "https://DOMAIN.COM/" --crawl -l 3 --headers "Cookie: OAUTH2_STATE=eyJyZ..." --log-file xsstrike.output
Using Proxies
Option: --proxy | Default: 0.0.0.0:8080
You have to set up your prox(y|ies) in core/config.py and then you can use the --proxy switch to use them whenever you want. More information on setting up proxies can be found in the XSStrike usage wiki linked above.
python xsstrike.py -u "http://example.com/search.php?q=query" --proxy
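To confirm which proxy XSStrike will use, you can inspect the entry in core/config.py before running with --proxy (the variable name is an assumption; check the file for the exact format):
grep -n "prox" core/config.py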

Gobuster
Repository: https://github.com/OJ/gobuster
Info: Directory/file & DNS busting tool written in Go. Gobuster is a tool used to brute-force:
URIs (directories and files) in web sites.
DNS subdomains (with wildcard support).
Installation
apt-get install gobuster
Examples
gobuster dir -u "https://DOMAIN.COM" -c "Cookie: OAUTH2_STATE=eyJyZW..." -d -e -r -k --random-agent -o gobuster.report -v -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt
# With Proxy
gobuster -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0" -e -fw -r -u domain.com -p http://127.0.0.1:8081 -v -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt
# Without Proxy
gobuster -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0" -e -fw -r -u domain.com -v -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt
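For the DNS subdomain mode mentioned above, a minimal sketch (the wordlist path is assumed from a standard SecLists install):
gobuster dns -d domain.com -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -o gobuster.dns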
subbrute
Repo: https://github.com/TheRook/subbrute
Info: SubBrute is a community driven project with the goal of creating the fastest, and most accurate subdomain enumeration tool. Some of the magic behind SubBrute is that it uses open resolvers as a kind of proxy to circumvent DNS rate-limiting (https://www.us-cert.gov/ncas/alerts/TA13-088A). This design also provides a layer of anonymity, as SubBrute does not send traffic directly to the target's name servers.
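Examples
A minimal run only needs the target domain; a sketch based on the repository's documented usage, assuming its requirements are met:
git clone https://github.com/TheRook/subbrute.git
cd subbrute
./subbrute.py domain.com > domain.com.subbrute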
LinkFinder
A Python script that finds endpoints in JavaScript files.
Search in a URL, file or folder
python /LinkFinder/linkfinder.py -i https://example.com/1.js -o results.html
python /LinkFinder/linkfinder.py -i file.txt -o results.html
Analyzing an entire domain
python /LinkFinder/linkfinder.py -i https://example.com -d -o results.html
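LinkFinder can also print results straight to the terminal instead of generating an HTML report (the cli output mode, per the project's documentation):
python /LinkFinder/linkfinder.py -i https://example.com/1.js -o cli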
psychoPATH
Repo: https://github.com/ewilded/psychopath
Info: This tool is a highly configurable payload generator for detecting LFI and web root file uploads. It involves advanced path traversal evasion techniques, dynamic web root list generation, output encoding, a site map-searching payload generator, an LFI mode, *nix & Windows support, plus a single-byte generator.
SVNDigger
Repo: https://www.netsparker.com/s/research/SVNDigger.zip
Info: Initially we needed to find lots of public SVN/CVS repositories. So far we only used Google Code and Sourceforge. We did filtered searches such as "Only PHP" or "Only ASP" projects. After this we used FSF (Freakin' Simple Fuzzer) to scrape them; it was a one-liner.
After we had the list of all open source projects, we wrote a couple of simple batch files to start getting lists of files via SVN and CVS clients.
When all finished, we coded a small client to analyse all the repository outputs and load them into an SQL Server database. Later on we applied many filters with yet another small script and generated all these different wordlists to use in different scenarios.
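The resulting wordlists are also bundled with SecLists; a sketch feeding one of them to dirsearch (the path is assumed from a standard SecLists install):
dirsearch -u https://domain.com -w /usr/share/seclists/Discovery/Web-Content/SVNDigger/all.txt -x 403,404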
RobotsDisallowed
Repo: https://github.com/danielmiessler/RobotsDisallowed
Info: The RobotsDisallowed project is a harvest of the Disallowed directories from the robots.txt files of the world's top websites, specifically the Alexa 100K.
This list of Disallowed directories is a great way to supplement content discovery during a web security assessment, since the website owner is basically saying "Don't go here; there's sensitive stuff in there!".
It's basically a list of potential high-value targets.
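SecLists ships a curated version of this list; a sketch using it with dirb (the path is assumed from a standard SecLists install):
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/RobotsDisallowed-Top1000.txt -w -o "${domain}".robots.dirb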
Parameth
Repo: https://github.com/maK-/parameth
Info: This tool can be used to brute-force the discovery of GET and POST parameters.
Often when you are busting a directory for common files, you can identify scripts (for example test.php) that look like they need to be passed an unknown parameter. This can hopefully help find them.
The -off flag allows you to specify an offset (helps with dynamic pages), so, for example, if you were getting alternating response sizes of 4444 and 4448, set the offset to 5 and it will only show results outside that norm.
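A sketch of a basic run against a script discovered during directory busting (the -u flag and bundled default parameter wordlist are assumed from the repository's README; -off is described above):
git clone https://github.com/maK-/parameth.git
cd parameth
python parameth.py -u https://domain.com/test.php -off 5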
