Crawling/Spidering & Fuzzing
Crawling/Spidering
Crawling/spidering is the automated process of exploring a website (following the links on each page) in order to enumerate every resource encountered and build a map of the site.
Katana
katana -u <SITE1>,<SITE2>,...
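A slightly fuller sketch of a katana run, using its standard flags for crawl depth, JavaScript parsing, and saved output (the depth value and output filename are arbitrary choices, not recommendations):

```shell
# Crawl the target to depth 3, parse JavaScript files for extra endpoints
# (-jc), and save the discovered URLs to a file (-o).
katana -u <SITE> -d 3 -jc -o katana_urls.txt
```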
HTB's Custom Scrapy Spider
pip3 install scrapy
python3 ReconSpider.py <SITE>
(results are saved to results.json)
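Since the results land in JSON, they can be sliced with jq. The key name `emails` below is an assumption about ReconSpider's output schema, so list the real keys first and adapt:

```shell
# List the top-level keys of the crawl results, then extract one of them.
# "emails" is an assumed key name -- verify it against the keys[] output.
jq -r 'keys[]' results.json
jq -r '.emails[]?' results.json
```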
Fuzzing
Fuzzing attempts to discover, through brute force, vhosts, subdomains, files, and hidden paths that are not directly linked from the site.
ffuf -ic -w <WORDLIST>:X -u <URL>/X
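As a concrete use of the custom keyword syntax, a sketch of vhost fuzzing: the keyword is injected into the Host header rather than the path. The `-fs` size filter is an assumption; determine the size of the default response first and filter it out:

```shell
# Each wordlist entry replaces X inside the Host header, fuzzing virtual
# hosts; -fs hides responses matching the default page's size.
ffuf -ic -w <WORDLIST>:X -u http://<TARGET>/ -H "Host: X.<DOMAIN>" -fs <DEFAULT_SIZE>
```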
ffuf -ic -v -recursion -recursion-depth <N> -w <WORDLIST> -u <URL>/FUZZ
With -recursion you cannot use a custom keyword name; you must use the default FUZZ.
Other
dirsearch -u <SITE>
gobuster [dir/dns/fuzz/vhost/...] -h
feroxbuster -u <SITE>
Scanner for the IIS short filename (8.3) disclosure vulnerability (requires Oracle Java). A short filename consists of up to eight characters for the file name, a period, and up to three characters for the extension.
java -jar iis_shortname_scanner.jar 0 5 http://<TARGET>/
If the server does not permit GET access, brute-force the rest of the filename:
egrep -r ^<START_STRING> /usr/share/wordlists/* | sed 's/^[^:]*://' > /tmp/list.txt
gobuster dir -u http://<TARGET>/ -w /tmp/list.txt -x .aspx,.asp
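A minimal illustration of the wordlist-filtering step above, using a throwaway scratch directory instead of /usr/share/wordlists; the prefix `trans` plays the role of the disclosed 8.3 short name:

```shell
# Build two tiny sample wordlists in a scratch directory.
mkdir -p /tmp/sfn_words
printf 'transfer\ntranslate\nadmin\n' > /tmp/sfn_words/a.txt
printf 'transaction\nlogin\n' > /tmp/sfn_words/b.txt
# Keep only entries starting with the disclosed prefix; the sed expression
# strips the "filename:" prefix that egrep -r adds to each match.
egrep -r '^trans' /tmp/sfn_words | sed 's/^[^:]*://' | sort > /tmp/list.txt
cat /tmp/list.txt
```

The resulting /tmp/list.txt contains only candidates consistent with the disclosed prefix, which keeps the follow-up gobuster run small.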