Arachni

Description

Arachni is a Web Application Security Scanner Framework developed in Ruby and compatible with all *nix systems and Cygwin. It has been developed and maintained by Tasos "Zapotek" Laskos.

Some of Arachni's characteristics:

  • Smart: it trains itself by learning from the HTTP responses it receives during the audit process
  • Easy: straightforward to use thanks to a simple CLI
  • Fast: Arachni yields great performance due to its asynchronous HTTP model (courtesy of Typhoeus). Thus, you'll only be limited by the responsiveness of the server under audit and your available bandwidth.

This tutorial has been tested on Ubuntu 10.10 and Kubuntu 10.04.

Warning
For *ubuntu 10.04, you will need to upgrade rubygems to version 1.3.7. Please refer to this help page: http://rubygems.org/pages/download.
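
One common way to upgrade RubyGems (a suggestion, not part of the original procedure; if your distribution's packages disable self-updates, follow the page linked above instead) is:

$ sudo gem update --system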

Installation

Prerequisites

  • Ruby Packages
$ sudo apt-get install libxml2-dev libxslt1-dev libcurl4-openssl-dev \
ruby1.9.1-full rubygems1.9.1
  • Ruby extensions
$ sudo gem1.9.1 install nokogiri anemone typhoeus socksify awesome_print liquid yard
  • You will also need git to be able to download Arachni:
$ sudo apt-get install git-core
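
Before proceeding, you can optionally confirm that the interpreter and the gems are in place (a hypothetical sanity check, not required by the installation):

$ ruby1.9.1 --version
$ gem1.9.1 list | grep -E 'nokogiri|typhoeus'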

Arachni

The following procedure will install Arachni v0.2.1 in /pentest/scanners/arachni/:

$ mkdir -p /pentest/scanners/
$ cd /pentest/scanners/
$ git clone https://github.com/Zapotek/arachni.git
$ cd arachni/

Usage

$ ruby1.9.1 arachni.rb [options] url
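
For example, a minimal run that only audits links and forms could look like this (the target URL is illustrative):

$ ruby1.9.1 arachni.rb --audit-links --audit-forms http://test.com/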

Options

General

-h, --help
show help and exit
-v
be verbose
--debug
show debug information
--only-positives
echo positive results *only*
--http-req-limit
concurrent HTTP requests limit
(Be careful not to kill your server.)
(Default: 200)
(NOTE: If your scan seems unresponsive try lowering the limit.)
--http-harvest-last
build up the HTTP request queue of the audit for the whole site and harvest the HTTP responses at the end of the crawl.
(Default: responses will be harvested for each page)
(*NOTE*: If you are scanning a high-end server and you are using a powerful machine with enough bandwidth *and* you feel dangerous, you can use this flag with an increased '--http-req-limit' to get maximum performance out of your scan.)
(*WARNING*: When scanning large websites with hundreds of pages this could eat up all your memory pretty quickly.)
--cookie-jar=<cookiejar>
Netscape HTTP cookie file; use curl to create it (see the example after this options list)
--user-agent=<user agent>
specify user agent
--authed-by=<who>
who authorized the scan, include name and e-mail address
(It'll make it easier on the sys-admins during log reviews.)
(Will be appended to the user-agent string.)
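
As an illustration of the options above, a cookie jar can be created with curl's -c switch and then passed to Arachni along with a lower request limit and an --authed-by note (the login URL and credentials are hypothetical):

$ curl -c cookies.txt -d "username=admin&password=secret" http://test.com/login.php
$ ./arachni.rb --cookie-jar=cookies.txt --http-req-limit=50 \
  --authed-by="John Doe <john@example.com>" http://test.com/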

Profiles

--save-profile=<file>
save the current run profile/options to <file>
(The file will be saved with an extension of .afp)
--load-profile=<file>
load a run profile from <file>
(Can be used multiple times.)
(You can complement it with more options, except for --mods and --redundant)
--show-profile
will output the running profile as CLI arguments
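
For instance, a set of options can be saved once and reused in later runs (the profile name is arbitrary):

$ ./arachni.rb -gpc --save-profile=my_scan http://test.com/
$ ./arachni.rb --load-profile=my_scan.afp http://test.com/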

Crawler

-e <regex>, --exclude=<regex>
exclude urls matching regex
(Can be used multiple times.)
-i <regex>, --include=<regex>
include urls matching this regex only
(Can be used multiple times.)
--redundant=<regex>:<count>
limit crawl on redundant pages like galleries or catalogs
(URLs matching <regex> will be crawled <count> links deep.)
(Can be used multiple times.)
-f, --follow-subdomains
follow links to subdomains
(default: off)
--obey-robots-txt
obey robots.txt file
(default: off)
--depth=<number>
depth limit
(default: inf)
(How deep Arachni should go into the site structure.)
--link-count=<number>
how many links to follow
(default: inf)
--redirect-limit=<number>
how many redirects to follow
(default: inf)
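
As a sketch, a crawl that excludes logout links, follows subdomains and stops three levels deep could combine these options as follows (target and regex are illustrative):

$ ./arachni.rb -e logout -f --depth=3 --link-count=500 http://test.com/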

Auditor

-g, --audit-links
audit link variables (GET)
-p, --audit-forms
audit form variables (usually POST, can also be GET)
-c, --audit-cookies
audit cookies (COOKIE)
--exclude-cookie=<name>
cookies not to audit
(You should exclude session cookies.)
(Can be used multiple times.)
--audit-headers
audit HTTP headers
(*NOTE*: Header audits use brute force. Almost all valid HTTP request headers will be audited even if there's no indication that the web app uses them.)
(*WARNING*: Enabling this option will result in increased requests, maybe by an order of magnitude.)
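
For example, to audit links, forms and cookies while leaving the session cookie alone (the cookie name below is hypothetical; use the one your application actually sets):

$ ./arachni.rb -g -p -c --exclude-cookie=session_id http://test.com/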

Modules

--lsmod=<regexp>
list available modules based on the provided regular expression
(If no regexp is provided all modules will be listed.)
(Can be used multiple times.)
-m <modname,modname..>, --mods=<modname,modname..>
comma separated list of modules to deploy
(Use '*' to deploy all modules)
(You can exclude modules by prefixing their name with a dash: --mods=*,-backup_files,-xss
The above will load all modules except for the 'backup_files' and 'xss' modules.)
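
For example, to list the XSS-related modules and then run every module except 'backup_files' (module names as reported by --lsmod):

$ ./arachni.rb --lsmod=xss
$ ./arachni.rb -gpc --mods=*,-backup_files http://test.com/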

Reports

--lsrep
list available reports
--repsave=<file>
save the audit results in <file>
(The file will be saved with an extension of .afr)
--repload=<file>
load audit results from <file>
(Allows you to create new reports from old/finished scans.)
--repopts=<option1>:<value>,<option2>:<value>,...
Set options for the selected reports
(One invocation only, options will be applied to all loaded reports.)
--report=<repname>
<repname> represents the name of the report as displayed by '--lsrep'
(Default: stdout)
(Can be used multiple times.)

Proxy

--proxy=<server:port>
specify proxy
--proxy-auth=<user:passwd>
specify proxy auth credentials
--proxy-type=<type>
proxy type can be either socks or http
(Default: http)
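
For example, to route the scan through a local intercepting proxy (address and type are illustrative):

$ ./arachni.rb --proxy=127.0.0.1:8080 --proxy-type=http -gpc http://test.com/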

Example

Complete audit

In the following example all modules will be run against http://test.com, auditing links/forms/cookies and following subdomains, with verbose output enabled. The results of the audit will be saved in the file test.com.afr.

$ ./arachni.rb -gpcfv --mods=* http://test.com --repsave=test.com

Read report

The Arachni Framework Report (.afr) file can later be loaded by Arachni to create a report, like so:

$ ./arachni.rb --report=html --repload=test.com.afr --repsave=my_report

Here is an example of the output (audit against a fresh install of phpBB 3.0.7-PL1):

[+] Unencrypted password form.
[~] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[~] URL:      http://localhost/forum/ucp.php
[~] Elements: form
[~] Variable: password
[~] Description:
[~] Transmission of password does not use an encrypted channel.

[~] Requires manual verification?: false

[~] References:
[~]   OWASP Top 10 2010 - http://www.owasp.org/index.php/Top_10_2010-A9-Insufficient_Transport_Layer_Protection

[*] Variations
[~] ----------
[~] Variation 1:
[~] URL: http://localhost/forum/ucp.php?mode=login&sid=f682fafcbb0ad49a35dd7732407ee46f
[~] ID:  n/a
[~] Injected value:     n/a
[~] Regular expression: n/a
[~] Matched string:     n/a

[+] Cross-Site Request Forgery
[~] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[~] URL:      http://localhost/forum/search.php
[~] Elements: form
[~] Variable: n/a
[~] Description:
[~] The web application does not, or can not,
    sufficiently verify whether a well-formed, valid, consistent
    request was intentionally provided by the user who submitted the request.

[~] Requires manual verification?: false

[~] References:
[~]   Wikipedia - http://en.wikipedia.org/wiki/Cross-site_request_forgery
[~]   OWASP - http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)
[~]   CGI Security - http://www.cgisecurity.com/csrf-faq.html

List available reports

To list available reports, type:

$ ./arachni.rb --lsrep
