Commit 282f2f8d authored by Devon Kearns

Imported Upstream version 1.0

06/21/2008
- rewrote BASIC AUTHENTICATION and COOKIE support - working & tested
09/06/2008
- applied various bugfixes for UNICODE/ASCII encoding and HTTP 500 reporting (lswww.py.patch, powerfuzzer-HTTPError-500-take2.patch, powerfuzzer.py.patch). Thanks for submitting your patches.
Use getcookie.py.
Usage: python getcookie.py <cookie_file> <url_with_form>
It will dump the cookie to the file. After getting the cookie, set Powerfuzzer to use it (Cookie button in the GUI).
Cookies are saved in LWP format (LWPCookieJar).
See SAMPLE_COOKIE.txt
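For reference, loading the saved cookie back in Python 2 looks roughly like this (a minimal sketch using only the standard cookielib/urllib2 modules; the filename is just an example):

import cookielib, urllib2

# Load the LWP-format cookie file written by getcookie.py
cj = cookielib.LWPCookieJar()
cj.load("cookie.txt", ignore_discard=True)

# Requests made through this opener will now send the saved session cookie
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)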
Powerfuzzer is a highly automated web fuzzer based on many other Open Source fuzzers available (incl. cfuzzer, fuzzled, fuzzer.pl, jbrofuzz, webscarab, wapiti, Socket Fuzzer) and information gathered from numerous security resources and websites. It is capable of spidering a website and identifying its inputs.
Currently, it is capable of identifying these problems:
- Cross Site Scripting (XSS) - see the reflection-check sketch after this list
- Injections (SQL, LDAP, code, commands, and XPATH)
- CRLF injection
- HTTP 500 status codes (usually indicative of a possible misconfiguration/security flaw, incl. buffer overflow)
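To illustrate the idea behind the reflection check used for XSS (a sketch only, not the actual Powerfuzzer detection code; reflects_payload and the single payload string are made up for illustration):

import urllib, urllib2

def reflects_payload(base_url, param, payload="<script>alert(31337)</script>"):
    # Inject a marker payload into one GET parameter and check whether the
    # response echoes it back unescaped - the simplest sign of reflected XSS.
    target = base_url + "?" + urllib.urlencode({param: payload})
    try:
        page = urllib2.urlopen(target).read()
    except IOError:
        return False
    return payload in page

A real scanner tries many payload variants (attribute, script, and URL contexts) and also checks POST parameters, but the pass/fail logic is the same.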
#LWP-Cookies-2.0
Set-Cookie3: SID=a0b498e88f488dd8a48baf6778da85b9; path="/"; domain="test.com"; path_spec; discard; version=0
TODO:
-add GUI to getcookie.py (incorporate into pf GUI?)
-add custom check field to GUI (you can specify parameters that should be passed to fuzzer module in the GUI interface)
-modularize checks performed by the scanning engine, so that users can add their customized checks/modules/plugins
-add threading to scanning engine (for super fast scanning; see the sketch after this list)
-improve GUI/reporting
-documentation/tutorials
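A minimal sketch of what the threading TODO could look like (illustrative only; scan_url and urls_to_scan are hypothetical names, not part of the current code base):

import threading, Queue

def worker(q):
    # Pull URLs off the shared queue until it is empty
    while True:
        try:
            target = q.get_nowait()
        except Queue.Empty:
            return
        scan_url(target)  # hypothetical per-URL check routine
        q.task_done()

q = Queue.Queue()
for u in urls_to_scan:  # assumed to be produced by the spider
    q.put(u)
for n in range(10):  # ten worker threads
    t = threading.Thread(target=worker, args=(q,))
    t.daemon = True
    t.start()
q.join()  # block until every queued URL has been scanned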
#!/usr/bin/env python
# Powerfuzzer
# Copyright (C) 2008 Marcin Kozlowski
# Parts taken from get_cookie.py from Wapiti by Nicolas Surribas
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 US
import urllib, urllib2, urlparse, cookielib
import sys, socket, lswww, HTMLParser
try:
    import tidy
except ImportError:
    tidyhere = 0
else:
    tidyhere = 1
if len(sys.argv)!=3:
    sys.stderr.write("Usage: python getcookie.py <cookie_file> <url_with_form>\n")
    sys.exit(1)
COOKIEFILE = sys.argv[1]
url=sys.argv[2]
# Some websites/webapps like Webmin send a first cookie to see if the browser
# supports them, so we must collect these test cookies during authentication.
cj = cookielib.LWPCookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
current=url.split("#")[0]
current=current.split("?")[0]
currentdir="/".join(current.split("/")[:-1])+"/"
proto=url.split("://")[0]
agent = {'User-agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
req=urllib2.Request(url)
socket.setdefaulttimeout(6)
try:
    fd=urllib2.urlopen(req)
except IOError:
    print "Error getting url"
    sys.exit(1)
try:
    htmlSource=fd.read()
except socket.timeout:
    print "Error fetching page"
    sys.exit(1)
p=lswww.linkParser()
try:
    p.feed(htmlSource)
except HTMLParser.HTMLParseError,err:
    # Malformed HTML: run it through tidy (if available) and retry the parse
    if tidyhere==1:
        options = dict(output_xhtml=1, add_xml_decl=1, indent=1, tidy_mark=0)
        htmlSource=str(tidy.parseString(htmlSource,**options))
        try:
            p.reset()
            p.feed(htmlSource)
        except HTMLParser.HTMLParseError,err:
            pass
if len(p.forms)==0:
    print "No forms found in this page !"
    sys.exit(1)
myls=lswww.lswww(url,box=0,timeToQuit=0)
i=0
nchoice=0
if len(p.forms)>1:
    print "Choose the form you want to use :"
    for form in p.forms:
        print
        print "%d) %s" % (i,myls.correctlink(form[0],current,currentdir,proto))
        for field,value in form[1].items():
            print "\t"+field+" ("+value+")"
        i=i+1
    ok=False
    while ok==False:
        choice=raw_input("Enter a number : ")
        if choice.isdigit():
            nchoice=int(choice)
            if nchoice<i and nchoice>=0:
                ok=True
form=p.forms[nchoice]
print "Please enter values for the following form :"
print "url = "+myls.correctlink(form[0],current,currentdir,proto)
d={}
for field,value in form[1].items():
    # renamed from "str" to avoid shadowing the builtin
    answer=raw_input(field+" ("+value+") : ")
    d[field]=answer
form[1].update(d)
url=myls.correctlink(form[0],current,currentdir,proto)
server=urlparse.urlparse(url)[1]
script=urlparse.urlparse(url)[2]
if urlparse.urlparse(url)[4]!="":
script+="?"+urlparse.urlparse(url)[4]
params=urllib.urlencode(form[1])
txheaders = {'User-agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
             'Referer' : sys.argv[2]}
try:
    req = urllib2.Request(url, params, txheaders)
    handle = urllib2.urlopen(req)
except IOError, e:
    print "Error getting URL:",url
    sys.exit(1)
for index, cookie in enumerate(cj):
    print index,':',cookie
cj.save(COOKIEFILE,ignore_discard=True)