Crawling/Spidering & Fuzzing

Crawling/Spidering

Crawling/Spidering is the automated process of exploring a website (following the links on each page) to enumerate every resource encountered along the way and build a map of them.
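The idea behind tools like katana can be sketched as a breadth-first walk over links. A minimal illustration, using a hypothetical in-memory "site" (a dict of path → HTML) in place of real HTTP requests:

```python
from collections import deque
from html.parser import HTMLParser

# Hypothetical in-memory site: page path -> HTML body. A real crawler
# would fetch each URL over HTTP instead of reading this dict.
SITE = {
    "/":         '<a href="/about">about</a> <a href="/docs">docs</a>',
    "/about":    '<a href="/">home</a>',
    "/docs":     '<a href="/docs/api">api</a>',
    "/docs/api": "",
}

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start="/"):
    """Breadth-first crawl: visit each page once, record every path seen."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        parser = LinkExtractor()
        parser.feed(SITE.get(page, ""))
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return sorted(seen)

print(crawl())  # the resulting site map: ['/', '/about', '/docs', '/docs/api']
```

Real crawlers add scope rules, rate limiting, and JavaScript rendering on top of this same visit-once queue.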

| Tools | Details |
| --- | --- |
| Crawling and Spidering | `katana -u <SITE1>,<SITE2>,...` |
| HTB's Custom Scrapy Spider | `pip3 install scrapy`, then `python3 ReconSpider.py <SITE>` (output in `results.json`) |

Fuzzing

Fuzzing attempts to discover, through brute force, vhosts, subdomains, files, and hidden paths that are not directly linked from the site.
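The core loop is simple: try every word from a wordlist as a path and keep the ones the server does not reject. A minimal sketch, with a hypothetical wordlist and a stub `probe()` standing in for a real HTTP request:

```python
# Hypothetical wordlist and target. A real fuzzer (ffuf, gobuster, ...)
# would send an HTTP request per candidate and filter out 404 responses.
WORDLIST = ["admin", "backup", "images", "login", "uploads"]
EXISTING = {"/admin", "/login"}  # paths that "exist" on the stub server

def probe(path):
    """Stand-in for an HTTP request: 200 if the path exists, else 404."""
    return 200 if path in EXISTING else 404

def fuzz(base=""):
    """Brute-force paths: return every candidate not answering 404."""
    hits = []
    for word in WORDLIST:
        path = f"{base}/{word}"
        if probe(path) != 404:
            hits.append(path)
    return hits

print(fuzz())  # hidden paths found by brute force: ['/admin', '/login']
```

Real tools add the details that matter in practice: concurrency, response-size filtering (servers that return 200 for everything), and recursion into discovered directories.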

```
ffuf -ic -w <WORDLIST>:X -u <URL>/X
ffuf -ic -v -recursion -recursion-depth <N> -w <WORDLIST> -u <URL>/FUZZ
```

With `-recursion` you cannot use a custom keyword name (like `:X` above); you must use the default `FUZZ` keyword.

Other

| Tools | Details |
| --- | --- |
| dirsearch | `dirsearch -u <SITE>` |
| gobuster | `gobuster [dir/dns/fuzz/vhost/...] -h` |
| feroxbuster | `feroxbuster -u <SITE>` |

Scanners for the IIS short filename (8.3) disclosure vulnerability (requires Oracle Java). A short filename consists of up to eight characters for the file name, a period, and up to three characters for the extension.

Example

In `SecretDocuments/` there are `somefile.txt` and `somefile1.txt`:

- `/secret~1/somefi~1.txt` → `somefile.txt`
- `/secret~1/somefi~2.txt` → `somefile1.txt`
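How those `~1` names are derived can be sketched in a few lines. This assumes the simple truncation scheme (first six characters of the name plus `~N`); real NTFS switches to a hash-based form after a few collisions, so treat this as an approximation:

```python
def short_name(filename, index=1):
    """Approximate the 8.3 short name Windows generates for a long filename.

    Assumption: simple truncation (six chars + ~N). NTFS uses a hash-based
    variant once several names share the same prefix.
    """
    name, _, ext = filename.rpartition(".")
    if not name:                 # no dot: the whole string is the name
        name, ext = filename, ""
    base = name.replace(" ", "").upper()[:6] + f"~{index}"
    return f"{base}.{ext.upper()[:3]}" if ext else base

# The two files from the example above:
print(short_name("somefile.txt", 1))   # SOMEFI~1.TXT
print(short_name("somefile1.txt", 2))  # SOMEFI~2.TXT
```

This is why leaking `somefi~1.txt` tells an attacker the first six characters of the real filename, leaving only the tail to brute-force.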

```
java -jar iis_shortname_scanner.jar 0 5 http://<TARGET>/

# If the server does not permit GET access to the short names,
# brute-force the rest of the filename from the leaked prefix:
egrep -r ^<START_STRING> /usr/share/wordlists/* | sed 's/^[^:]*://' > /tmp/list.txt
gobuster dir -u http://<TARGET>/ -w /tmp/list.txt -x .aspx,.asp
```
