Endpoint Discovery

Identifies the directories, files, and endpoints exposed by a web application, helping map the environment and find potential targets.

dirsearch

Repo: https://github.com/maurosoria/dirsearch

Installation & Usage

git clone https://github.com/maurosoria/dirsearch.git
cd dirsearch
python3 -m venv myenv; source myenv/bin/activate
pip3 install -r requirements.txt
python3 dirsearch.py -u <URL> -e <EXTENSIONS>

Prerequisites

apt install seclists dirsearch

Usage

Combined command (authenticating with a Bearer token)

domain=domain.com; dirsearch -u https://"${domain}" -r -o ./"${domain}".dirsearch -w /usr/share/seclists/Discovery/Web-Content/directory-list-lowercase-2.3-big.txt,/usr/share/seclists/Discovery/Web-Content/common.txt,</PATH/DICTIONARY.txt> -x 403,400,429,500 --full-url --auth-type=bearer --auth=<eyJh...> --proxy 127.0.0.1:8080

Authenticating with a Cookie

domain=domain.com; dirsearch -u https://"${domain}" -r -o ./"${domain}".dirsearch --log="${domain}".dirsearch.log -w /usr/share/seclists/Discovery/Web-Content/directory-list-lowercase-2.3-big.txt,/usr/share/seclists/Discovery/Web-Content/common.txt -x 403,400,429,500 --full-url --cookie="AW..."

Simple

python3 dirsearch.py -u "https://target"

Specifying extensions

python3 dirsearch.py -e php,html,js -u https://target

Recursive scan

By using the -r | --recursive argument, dirsearch will automatically brute-force the inside of any directories that it finds.

python3 dirsearch.py -e php,html,js -u https://target -r

You can set the maximum recursion depth with -R or --recursion-depth.

python3 dirsearch.py -e php,html,js -u https://target -r -R 3

dirb

Info: DIRB is a Web Content Scanner. It looks for existing (and/or hidden) Web Objects. It basically works by launching a dictionary-based attack against a web server and analyzing the response.

DIRB comes with a set of preconfigured attack wordlists for easy usage, but you can also use your own custom wordlists. DIRB can sometimes be used as a classic CGI scanner, but remember that it is a content scanner, not a vulnerability scanner.

DIRB's main purpose is to help in professional web application auditing, especially in security-related testing. It covers some holes not covered by classic web vulnerability scanners: DIRB looks for specific web objects that other generic CGI scanners can't look for. It doesn't search for vulnerabilities, nor does it look for web content that may be vulnerable.

Examples

domain=domain.com; dirb https://"${domain}" /dir/wordlist.txt -w -o "${domain}".dirb -l -i
dirb https://domain.com /dir/wordlist.txt -w -o output.txt -l -i

# Ignore 403 error code
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/big.txt -w -o "${domain}".dirb -l -i -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" -N 403
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/merged.txt -w -o "${domain}".dirb -l -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0"
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/big.txt -w -o "${domain}".dirb -l -i -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"

# With Cookies
domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/big.txt -w -o "${domain}".dirb -l -i -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36" -c "name1=value1; name2=value2" -N 403

wfuzz

Info: Wfuzz has been created to facilitate the task in web applications assessments and it is based on a simple concept: it replaces any reference to the FUZZ keyword by the value of a given payload.

Examples

# With recursion 
wfuzz -c -R 10 -w /usr/share/wfuzz/wordlist/general/megabeast.txt http://www.domain.com/FUZZ

# Content discovery, hiding the 400, 403 and 404 HTTP error codes
wfuzz -L -A --hc 404,403,400 --oF domain.com.wfuzz.output -R 10 -w /usr/share/wfuzz/wordlist/general/megabeast.txt -u https://domain.com/FUZZ
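
Because wfuzz substitutes the FUZZ keyword wherever it appears, fuzzing is not limited to paths. A minimal sketch fuzzing a GET parameter value and a header value (the domain and wordlist path are placeholders):

# Fuzz a GET parameter value
wfuzz -c --hc 404 -w /usr/share/wfuzz/wordlist/general/common.txt "https://domain.com/search.php?q=FUZZ"
# Fuzz a request header value
wfuzz -c --hc 404 -w /usr/share/wfuzz/wordlist/general/common.txt -H "X-Forwarded-For: FUZZ" https://domain.com/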

Dirhunt

Info: Find web directories without brute force. Dirhunt is a web crawler optimized for searching and analyzing directories. This tool can find interesting things if the server has the "index of" mode enabled, and it is also useful when directory listing is disabled. It detects directories with fake 404 errors, directories where an empty index file has been created to hide things, and much more.

Installation

python -m venv venv; source venv/bin/activate
pip3 install dirhunt

Examples

dirhunt http://website.com/
dirhunt http://website.com/ > directories.txt

Usage: http://docs.nekmo.org/dirhunt/usage.html

With credentials and reporting

dirhunt <https://domain.com> --to-file dirhunt.json -h "Cookie: OAUTH2_STATE=eyJyZWRpcmVj..." --progress-enabled

XSStrike

Installation

git clone https://github.com/s0md3v/XSStrike && cd XSStrike && python -m venv venv && source venv/bin/activate && pip install -r requirements.txt

Crawling

Start crawling from the target webpage, gathering further targets to test.

python xsstrike.py -u "http://example.com/page.php" --crawl
#Crawling depth default: 2

Crawling depth Option: -l or --level | Default: 2. This option lets you specify the depth of crawling.

python xsstrike.py -u "http://example.com/page.php" --crawl -l 3

Authenticated and reporting

python xsstrike.py -u "https://DOMAIN.COM/" --crawl -l 3 --headers "Cookie: OAUTH2_STATE=eyJyZ..." --log-file xsstrike.output

Using Proxies

Option: --proxy (Default 0.0.0.0:8080)

You have to set up your prox(y|ies) in core/config.py, and then you can use the --proxy switch to use them whenever you want. More information on setting up proxies can be found in the XSStrike usage wiki: https://github.com/s0md3v/XSStrike/wiki/Usage
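
A sketch of what that entry in core/config.py typically looks like; it is a requests-style proxies dictionary (the exact layout in your checkout may differ):

# core/config.py (excerpt); a requests-style proxies dictionary
proxies = {
    'http': 'http://127.0.0.1:8080',
    'https': 'http://127.0.0.1:8080'
}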

python xsstrike.py -u "http://example.com/search.php?q=query" --proxy

Gobuster

Info: Directory/file & DNS busting tool written in Go. Gobuster is a tool used to brute-force:

  • URIs (directories and files) in web sites.

  • DNS subdomains (with wildcard support); see the dns-mode example at the end of the Examples below.

Installation

apt-get install gobuster

Examples

gobuster dir -u "https://DOMAIN.COM" -c "OAUTH2_STATE=eyJyZW..." -d -e -r -k --random-agent -o gobuster.report -v -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt

  • -d: Also search for backup files by appending multiple backup extensions

  • -e: Expanded mode, print full URLs

  • -r: Follow redirects

  • -k: Skip TLS certificate verification

  • --random-agent: Use a random User-Agent string

  • -o: Output file to write results to (defaults to stdout)

  • -v: Verbose output (errors)

# With Proxy
gobuster dir -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0" -e -r -u https://domain.com --proxy http://127.0.0.1:8081 -v -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt

# Without Proxy
gobuster dir -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0" -e -r -u https://domain.com -v -w /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-big.txt
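
As noted in the list above, gobuster also brute-forces DNS subdomains. A minimal sketch of dns mode (the seclists wordlist path is an assumption):

# DNS subdomain brute force, with wildcard detection
gobuster dns -d domain.com -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -o gobuster.dns.report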

subbrute

Info: SubBrute is a community-driven project with the goal of creating the fastest and most accurate subdomain enumeration tool. Some of the magic behind SubBrute is that it uses open resolvers as a kind of proxy to circumvent DNS rate-limiting (https://www.us-cert.gov/ncas/alerts/TA13-088A). This design also provides a layer of anonymity, as SubBrute does not send traffic directly to the target's name servers.
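
A minimal usage sketch from a fresh clone (names.txt is the subdomain wordlist bundled with the repo; the output path is a placeholder):

git clone https://github.com/TheRook/subbrute && cd subbrute
# Enumerate subdomains using the bundled resolver and name lists
./subbrute.py domain.com
# Use a specific subdomain wordlist and save the results
./subbrute.py -s names.txt domain.com > domain.com.subbrute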


LinkFinder

A Python script that finds endpoints in JavaScript files.

Search in a URL, file or folder

python /LinkFinder/linkfinder.py -i https://example.com/1.js -o results.html
python /LinkFinder/linkfinder.py -i file.txt -o results.html

For folders a wildcard can be used (e.g. '/*.js').
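
For instance, a sketch of running LinkFinder over a folder of locally downloaded JS files (the folder path is a placeholder):

python /LinkFinder/linkfinder.py -i '/path/to/js/*.js' -o results.html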

Analyzing an entire domain

python /LinkFinder/linkfinder.py -i https://example.com -d -o results.html

Enumerates over all found JS files.


psychoPATH

Info: This tool is a highly configurable payload generator for detecting LFI and web root file uploads. It involves advanced path traversal evasion techniques, dynamic web root list generation, output encoding, a site map-searching payload generator, an LFI mode, *nix & Windows support, plus a single-byte generator.
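
The repository documents the full CLI; purely as an illustration of the class of payloads such a generator emits (generic traversal payloads, not actual psychoPATH output):

../../../../etc/passwd
..%2F..%2F..%2Fetc%2Fpasswd
....//....//....//etc/passwd
..\..\..\..\windows\win.ini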


SVNDigger

Info: Initially we needed to find lots of public SVN/CVS repositories. So far we only used Google Code and Sourceforge. We did filtered searches such as "Only PHP" or "Only ASP" projects. After this we used FSF (Freakin' Simple Fuzzer) to scrape; it was a one-liner.

After we had the list of all open source projects, we wrote a couple of simple batch files to start getting the lists of files via SVN and CVS clients.

When all was finished, we coded a small client to analyse all the repository outputs and load them into an SQL Server database. Later on we applied many filters with yet another small script and generated all these different wordlists to use in different scenarios.
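
The resulting wordlists plug straight into the brute-forcers above. A sketch using the SVNDigger copy shipped with seclists (the path is an assumption):

domain=domain.com; dirb https://"${domain}" /usr/share/seclists/Discovery/Web-Content/SVNDigger/all.txt -w -o "${domain}".dirb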


RobotsDisallowed

Info: The RobotsDisallowed project is a harvest of the Disallowed directories from the robots.txt files of the world's top websites, specifically the Alexa 100K.

This list of Disallowed directories is a great way to supplement content discovery during a web security assessment, since the website owner is basically saying "Don't go here; there's sensitive stuff in there!"

It's essentially a list of potential high-value targets.
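
Like SVNDigger, these lists feed straight into the tools above. A sketch, assuming the top10000.txt file from the repository:

git clone https://github.com/danielmiessler/RobotsDisallowed
gobuster dir -u https://domain.com -w RobotsDisallowed/top10000.txt -o gobuster.robots.report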


Parameth

Info: This tool can be used to brute-force discovery of GET and POST parameters.

Often when you are busting a directory for common files, you can identify scripts (for example test.php) that look like they need to be passed an unknown parameter. This tool can hopefully help you find them.

The -off flag allows you to specify an offset (this helps with dynamic pages): for example, if you were getting alternating response sizes of 4444 and 4448, set the offset to 5 and it will only show responses outside that norm.
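
A minimal usage sketch from a fresh clone (the target script is a placeholder; -off is the flag described above):

git clone https://github.com/maK-/parameth && cd parameth
# Brute-force GET/POST parameter names against a discovered script
python parameth.py -u https://domain.com/test.php
# Suppress dynamic-page noise with the response-size offset
python parameth.py -u https://domain.com/test.php -off 5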

Repositories

  • dirsearch: https://github.com/maurosoria/dirsearch

  • dirb: http://dirb.sourceforge.net/

  • wfuzz: https://github.com/xmendez/wfuzz

  • Dirhunt: https://github.com/Nekmo/dirhunt

  • XSStrike: https://github.com/s0md3v/XSStrike/

  • XSStrike usage: https://github.com/s0md3v/XSStrike/wiki/Usage

  • Gobuster: https://github.com/OJ/gobuster

  • subbrute: https://github.com/TheRook/subbrute

  • LinkFinder: https://github.com/GerbenJavado/LinkFinder

  • psychoPATH: https://github.com/ewilded/psychopath

  • SVNDigger: https://www.netsparker.com/s/research/SVNDigger.zip

  • RobotsDisallowed: https://github.com/danielmiessler/RobotsDisallowed

  • Parameth: https://github.com/maK-/parameth