Crawling/Spidering & Fuzzing
Crawling/Spidering is the automated process of exploring a website by following the links on each page, listing every resource encountered along the way, and building a map of the site.
Crawling and Spidering
katana -u <SITE1>,<SITE2>,...
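A slightly fuller katana invocation, sketched with commonly used flags (crawl depth, JavaScript parsing, output file); the site and filename are placeholders:
katana -u https://<SITE> -d 3 -jc -o katana_results.txt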
HTB's Custom Scrapy Spider
pip3 install scrapy
python3 ReconSpider.py <SITE>
Results are saved to results.json.
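One way to sift the crawl results is with jq; this sketch assumes results.json uses top-level keys such as emails and comments (verify the key names against your copy of ReconSpider's output):
jq '.emails, .comments' results.json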
Fuzzing attempts to discover, through brute force, vhosts, subdomains, files, and hidden paths that are not directly linked from the site.
With ffuf's -recursion option, only the FUZZ keyword is supported and the URL must end in it; you cannot specify a custom keyword name.
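A sketch of the corresponding ffuf runs, one for recursive directory fuzzing and one for vhost fuzzing (wordlist paths and the baseline response size for -fs are placeholders to adjust per target):
ffuf -u http://<TARGET>/FUZZ -w /usr/share/wordlists/dirb/common.txt -recursion -recursion-depth 2
ffuf -u http://<TARGET>/ -H 'Host: FUZZ.<DOMAIN>' -w subdomains.txt -fs <BASELINE_SIZE>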
dirsearch -u <SITE>
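dirsearch can also filter by extension and status code; a sketch (flag names as in current dirsearch releases):
dirsearch -u http://<TARGET>/ -e php,aspx -x 404,403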
gobuster [dir/dns/fuzz/vhost/...] -h
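Two common gobuster modes, sketched (wordlist paths are placeholders; --append-domain is needed on gobuster v3.2+ so wordlist entries are suffixed with the domain):
gobuster dir -u http://<TARGET>/ -w /usr/share/wordlists/dirb/common.txt -x php,txt
gobuster vhost -u http://<TARGET>/ -w subdomains.txt --append-domain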
feroxbuster -u <SITE>
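feroxbuster recurses into discovered directories by default; a sketch capping the recursion depth and adding extensions and threads:
feroxbuster -u http://<TARGET>/ -w /usr/share/wordlists/dirb/common.txt -x php --depth 2 -t 50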
Scanner for the IIS short filename (8.3) disclosure vulnerability. A short filename consists of up to eight characters for the file name, a period, and up to three characters for the extension.
java -jar iis_shortname_scanner.jar 0 5 http://<TARGET>/
If the server does not permit GET access to the disclosed short names, brute-force the remaining part of the filename:
egrep -r ^<START_STRING> /usr/share/wordlists/* | sed 's/^[^:]*://' > /tmp/list.txt
gobuster dir -u http://<TARGET>/ -w /tmp/list.txt -x .aspx,.asp
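As a hypothetical worked example: if the scanner reports a short name such as TRANSF~1.ASP, the known prefix is transf, so the candidate wordlist is built with:
egrep -r ^transf /usr/share/wordlists/* | sed 's/^[^:]*://' > /tmp/list.txt
gobuster then brute-forces the full name with the .asp/.aspx extensions as shown above.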