Endpoint Discovery
Identifies active endpoints and services in a web application, helping map the environment and find potential targets.
By using the -r or --recursive argument, dirsearch will automatically brute-force the contents of the directories it finds. You can set the maximum recursion depth with -R or --recursion-depth.
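For example, a recursive run using the flags documented above might look like this (the URL, extension list, and depth are placeholders):

    python3 dirsearch.py -u https://target.example -e php,txt -r -R 3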
Info: DIRB is a web content scanner. It looks for existing (and/or hidden) web objects by launching a dictionary-based attack against a web server and analyzing the responses.
DIRB comes with a set of preconfigured attack wordlists for easy use, but you can also supply your own. DIRB can sometimes be used as a classic CGI scanner, but remember it is a content scanner, not a vulnerability scanner.
DIRB's main purpose is to help in professional web application auditing, especially in security-related testing. It covers some holes not covered by classic web vulnerability scanners: it looks for specific web objects that generic CGI scanners can't. It doesn't search for vulnerabilities, nor does it look for web content that may be vulnerable.
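A typical run points DIRB at a target with one of its bundled wordlists (the URL is a placeholder; the wordlist path is the usual install location on many distributions):

    dirb http://target.example/ /usr/share/dirb/wordlists/common.txt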
Info: Wfuzz was created to facilitate the task of web application assessments. It is based on a simple concept: it replaces any reference to the FUZZ keyword with the value of a given payload.
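For instance, to brute-force paths while hiding 404 responses (the URL and wordlist are placeholders):

    wfuzz -c -w /usr/share/wordlists/dirb/common.txt --hc 404 http://target.example/FUZZ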
Info: Find web directories without brute force. Dirhunt is a web crawler optimized to search for and analyze directories. It can find interesting things if the server has "index of" mode enabled, and it is also useful when directory listing is not enabled: it detects directories with false 404 errors, directories where an empty index file has been created to hide things, and much more.
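Basic usage is just the target URL (placeholder below):

    dirhunt http://target.example/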
Start crawling from the target webpage to find endpoints and test them.
Crawling depth option: -l or --level (default: 2). This option lets you specify the depth of crawling.
Proxy option: --proxy (default: 0.0.0.0:8080). You have to set up your proxy (or proxies) in core/config.py, and then you can use the --proxy switch whenever you want. More information on setting up proxies can be found in the project's documentation.
Info: Directory/file and DNS busting tool written in Go. Gobuster is a tool used to brute-force:
URIs (directories and files) in web sites.
DNS subdomains (with wildcard support).
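With recent Gobuster (3.x) syntax, the two modes look roughly like this (URLs, domain, and wordlists are placeholders):

    gobuster dir -u http://target.example -w /usr/share/wordlists/dirb/common.txt
    gobuster dns -d target.example -w subdomains.txt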
Info: SubBrute is a community-driven project with the goal of creating the fastest and most accurate subdomain enumeration tool. Some of the magic behind SubBrute is that it uses open resolvers as a kind of proxy to circumvent DNS rate-limiting (https://www.us-cert.gov/ncas/alerts/TA13-088A). This design also provides a layer of anonymity, as SubBrute does not send traffic directly to the target's name servers.
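Typical usage takes the target domain as its only required argument (the domain is a placeholder):

    python subbrute.py target.example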
A Python script that finds endpoints in JavaScript files.
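The script itself isn't named here, but the same idea can be roughly approximated with a regex over a downloaded JavaScript file (the URL and pattern are placeholders, not the script's actual logic):

    curl -s https://target.example/static/app.js | grep -oE '"(/[A-Za-z0-9_?&=./-]+)"' | sort -u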
Info: This tool is a highly configurable payload generator for detecting LFI and web root file uploads. It includes advanced path traversal evasion techniques, dynamic web root list generation, output encoding, a site-map-searching payload generator, an LFI mode, *nix and Windows support, and a single-byte generator.
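As a minimal illustration of the path traversal idea (not this tool's actual output), a payload list of increasing depth can be generated like this; the target file is a placeholder:

    # print ../etc/passwd, ../../etc/passwd, ... up to six levels deep
    for depth in $(seq 1 6); do printf '../%.0s' $(seq 1 $depth); echo 'etc/passwd'; done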
Info: Initially we needed to find lots of public SVN/CVS repositories. So far we only used Google Code and SourceForge. We did filtered searches such as "Only PHP" or "Only ASP" projects. After this we used FSF (Freakin' Simple Fuzzer) to scrape; it was a one-liner.
After we had the list of all open source projects, we wrote a couple of simple batch files to start getting lists of files via SVN and CVS clients.
When all finished, we coded a small client to analyse all the repository outputs and load them into an SQL Server database. Later on we applied many filters with yet another small script and generated all these different wordlists to use in different scenarios.
Info: The RobotsDisallowed project is a harvest of the Disallowed directories from the robots.txt files of the world's top websites, specifically the Alexa 100K.
This list of Disallowed directories is a great way to supplement content discovery during a web security assessment, since the website owner is basically saying "Don't go here; there's sensitive stuff in there!".
It's basically a list of potential high-value targets.
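The same kind of data can be pulled from a single target's robots.txt with a quick one-liner (the URL is a placeholder):

    curl -s https://target.example/robots.txt | grep -i '^Disallow:' | awk '{print $2}' | sort -u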
Info: This tool can be used to brute-force the discovery of GET and POST parameters.
Often when you are busting a directory for common files, you can identify scripts (for example test.php) that look like they need to be passed an unknown parameter. This tool can hopefully help find them.
The -off flag allows you to specify an offset (which helps with dynamic pages). For example, if you were getting alternating response sizes of 4444 and 4448, set the offset to 5 and it will only show responses outside that range.
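The underlying technique can be sketched with a plain shell loop that compares response sizes against a baseline (the URL, wordlist, and offset of 5 are placeholders, not this tool's internals):

    # record the baseline response size, then flag parameters whose response deviates by more than 5 bytes
    base=$(curl -s -o /dev/null -w '%{size_download}' 'http://target.example/test.php')
    while read -r p; do
      size=$(curl -s -o /dev/null -w '%{size_download}' "http://target.example/test.php?${p}=1")
      diff=$((size - base)); [ "${diff#-}" -gt 5 ] && echo "$p -> $size bytes"
    done < params.txt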