Tag: programming

  • Python Web Crawler Script

    Here’s a simple web crawling script that will start from one URL and find all the pages it links to, up to a pre-defined depth. Web crawling is, of course, the lowest-level tool Google uses to build its multi-billion dollar business. You may not be able to compete with Google’s search technology, but being able to crawl your own sites, or those of your competitors, can be very valuable.

    You could, for instance, routinely check your websites to make sure they are live and all the links are working, and have the script notify you of any 404 errors (a small link-checking sketch follows the script below). By adding in a PageRank check you could identify better linking strategies to boost your scores. And you could identify possible leaks: paths a user could take that lead them away from where you want them to go.

    Here’s the script:

    # -*- coding: utf-8 -*-
    from HTMLParser import HTMLParser
    from urllib2 import urlopen
    
    class Spider(HTMLParser):
        def __init__(self, starting_url, depth, max_span):
            HTMLParser.__init__(self)
            self.url = starting_url
            self.db = {self.url: 1}
            self.node = [self.url]
    
            self.depth = depth # recursion depth max
            self.max_span = max_span # max links obtained per url
            self.links_found = 0
    
        def handle_starttag(self, tag, attrs):
            if self.links_found < self.max_span and tag == 'a':
                # look up href explicitly rather than assuming it is the first attribute
                link = dict(attrs).get('href')
                if not link:
                    return
                if link[:4] != "http":
                    # resolve relative links against the current site's root
                    link = '/'.join(self.url.split('/')[:3])+('/'+link).replace('//','/')

                if link not in self.db:
                    print "new link ---> %s" % link
                    self.links_found += 1
                    self.node.append(link)
                self.db[link] = (self.db.get(link) or 0) + 1
    
        def crawl(self):
            for depth in xrange(self.depth):
                print "*"*70+("\nScanning depth %d web\n" % (depth+1))+"*"*70
                context_node = self.node[:]
                self.node = []
                for self.url in context_node:
                    self.links_found = 0
                try:
                    req = urlopen(self.url)
                    res = req.read()
                    self.feed(res)
                except Exception:
                    # a failed fetch or parse of one URL shouldn't kill the whole crawl
                    self.reset()
            print "*"*40 + "\nRESULTS\n" + "*"*40
            zorted = [(v,k) for (k,v) in self.db.items()]
            zorted.sort(reverse = True)
            return zorted
    
    if __name__ == "__main__":
        spidey = Spider(starting_url = 'http://www.7cerebros.com.ar', depth = 5, max_span = 10)
        result = spidey.crawl()
        for (n,link) in result:
            print "%s was found %d time%s." %(link,n, "s" if n is not 1 else "")
    
  • Amazon Product Advertising API From Python

    Amazon has a very comprehensive associate program that allows you to promote just about anything imaginable for any niche and earn a commission on anything you refer. The size of the catalog is what makes Amazon such a great program. People make good money promoting Amazon products.

    There is a great Python library called boto for accessing other Amazon web services such as S3 and EC2. However, it doesn’t support the Product Advertising API.

    With the Product Advertising API you have access to everything that you can read on the Amazon site about each product. This includes the product description, images, editor reviews, customer reviews and ratings. This is a lot of great information that you could easily find a good use for with your websites.

    So how do you get at this information from within a Python program? Well, the complicated part is dealing with the authentication that Amazon has put in place. To make that a bit easier I used the connection component from boto.
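
    For context, here’s a minimal, standalone sketch of what Signature Version 2 signing involves (this is the part boto handles for us; the sign_request_v2 name and parameter handling are my own simplification): sort the query parameters, build a canonical string to sign, and HMAC-SHA256 it with your secret key.

    # Hypothetical illustration of AWS Signature Version 2 signing.
    import base64, hashlib, hmac, urllib

    def sign_request_v2(params, secret_key, host='ecs.amazonaws.com', path='/onca/xml'):
        # Canonical query string: parameters sorted by name, values URL-encoded
        qs = '&'.join('%s=%s' % (k, urllib.quote(str(params[k]), safe='-_.~'))
                      for k in sorted(params))
        # String to sign: HTTP verb, host, path and query string joined by newlines
        string_to_sign = '\n'.join(['GET', host, path, qs])
        signature = base64.b64encode(
            hmac.new(secret_key, string_to_sign, hashlib.sha256).digest())
        return qs + '&Signature=' + urllib.quote(signature)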

    Here’s a demonstration snippet of code that will print out the top 10 best selling books on Amazon right now.

    Example Usage:

    $ python AmazonExample.py
    Glenn Beck's Common Sense: The Case Against an Out-of-Control Government, Inspired by Thomas Paine by Glenn Beck
    Culture of Corruption: Obama and His Team of Tax Cheats, Crooks, and Cronies by Michelle Malkin
    The Angel Experiment (Maximum Ride, Book 1) by James Patterson
    The Time Traveler's Wife by Audrey Niffenegger
    The Help by Kathryn Stockett
    South of Broad by Pat Conroy
    Paranoia by Joseph Finder
    The Girl Who Played with Fire by Stieg Larsson
    The Shack [With Headphones] (Playaway Adult Nonfiction) by William P. Young
    The Girl with the Dragon Tattoo by Stieg Larsson
    

    To use this code you’ll need an Amazon associate account, and you’ll have to fill in the keys and tag needed for authentication.

    Product Advertising API Python code:

    #!/usr/bin/env python
    # encoding: utf-8
    """
    AmazonExample.py
    
    Created by Matt Warren on 2009-08-17.
    Copyright (c) 2009 HalOtis.com. All rights reserved.
    """
    
    import time    # needed for the request Timestamp
    import urllib
    try:
        from xml.etree import ElementTree as ET    # Python 2.5+
    except ImportError:
        from elementtree import ElementTree as ET  # standalone package on older Pythons
        
    from boto.connection import AWSQueryConnection
    
    AWS_ACCESS_KEY_ID = 'YOUR ACCESS KEY'
    AWS_ASSOCIATE_TAG = 'YOUR TAG'
    AWS_SECRET_ACCESS_KEY = 'YOUR SECRET KEY'
    
    def amazon_top_for_category(browseNodeId):
        aws_conn = AWSQueryConnection(
            aws_access_key_id=AWS_ACCESS_KEY_ID,
            aws_secret_access_key=AWS_SECRET_ACCESS_KEY, is_secure=False,
            host='ecs.amazonaws.com')
        aws_conn.SignatureVersion = '2'
        params = dict(
            Service='AWSECommerceService',
            Version='2009-07-01',
            SignatureVersion=aws_conn.SignatureVersion,
            AWSAccessKeyId=AWS_ACCESS_KEY_ID,
            AssociateTag=AWS_ASSOCIATE_TAG,
            Operation='ItemSearch',
            BrowseNode=browseNodeId,
            SearchIndex='Books',
            ResponseGroup='ItemAttributes,EditorialReview',
            Order='salesrank',
            Timestamp=time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime()))
        verb = 'GET'
        path = '/onca/xml'
        qs, signature = aws_conn.get_signature(params, verb, path)
        qs = path + '?' + qs + '&Signature=' + urllib.quote(signature)
        response = aws_conn._mexe(verb, qs, None, headers={})
        tree = ET.fromstring(response.read())
        
        NS = tree.tag.split('}')[0][1:]
    
        for item in tree.find('{%s}Items'%NS).findall('{%s}Item'%NS):
            title = item.find('{%s}ItemAttributes'%NS).find('{%s}Title'%NS).text
            author = item.find('{%s}ItemAttributes'%NS).find('{%s}Author'%NS).text
            print title, 'by', author
    
    if __name__ == '__main__':
        amazon_top_for_category(1000) #Amazon category number for US Books
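
    Since the request already asks for the EditorialReview response group, the same tree also contains review text. Here’s a hedged sketch of pulling it out per item (the element names follow the same namespaced pattern as Title and Author; the editorial_review helper is my own):

    # Hypothetical helper: editorial review text for an item, if one is present.
    def editorial_review(item, NS):
        reviews = item.find('{%s}EditorialReviews' % NS)
        if reviews is not None:
            review = reviews.find('{%s}EditorialReview' % NS)
            if review is not None:
                content = review.find('{%s}Content' % NS)
                if content is not None:
                    return content.text
        return None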
    
  • Scrape Google Search Results Page

    Here’s a short script that will scrape the first 100 listings from the Google organic results.

    You might want to use this to find the positions of your sites and track them for certain target keyword phrases over time. That could be a very good way to determine, for example, whether your SEO efforts are working. Or you could use the list of URLs as a starting point for some other web crawling activity (a small rank-tracking sketch follows the script below).

    As written, the script will just dump the list of URLs to a text file.

    It uses the BeautifulSoup library to help with parsing the HTML page.

    Example Usage:

    $ python GoogleScrape.py
    $ cat links.txt
    http://www.halotis.com/
    http://www.halotis.com/2009/07/01/rss-twitter-bot-in-python/
    http://www.blogcatalog.com/blogs/halotis.html
    http://www.blogcatalog.com/topic/sqlite/
    http://ieeexplore.ieee.org/iel5/10358/32956/01543043.pdf?arnumber=1543043
    http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1543043
    http://doi.ieeecomputersociety.org/10.1109/DATE.2001.915065
    http://rapidlibrary.com/index.php?q=hal+otis
    http://www.tagza.com/Software/Video_tutorial_-_URL_re-directing_software-___HalOtis/
    http://portal.acm.org/citation.cfm?id=367328
    http://ag.arizona.edu/herbarium/db/get_taxon.php?id=20605&show_desc=1
    http://www.plantsystematics.org/taxpage/0/genus/Halotis.html
    http://www.mattwarren.name/
    http://www.mattwarren.name/2009/07/31/net-worth-update-3-5/
    http://newweightlossdiet.com/privacy.php
    http://www.ingentaconnect.com/content/nisc/sajms/1988/00000006/00000001/art00002?crawler=true
    http://www.ingentaconnect.com/content/nisc/sajms/2000/00000022/00000001/art00013?crawler=true
    

    ...... $

    Here’s the script:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # (C) 2009 HalOtis Marketing
    # written by Matt Warren
    # http://halotis.com/
    
    import urllib, urllib2
    
    from BeautifulSoup import BeautifulSoup
    
    def google_grab(query):
    
        address = "http://www.google.com/search?q=%s&num=100&hl=en&start=0" % (urllib.quote_plus(query))
        request = urllib2.Request(address, None, {'User-Agent':'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)'} )
        urlfile = urllib2.urlopen(request)
        page = urlfile.read(200000)
        urlfile.close()
        
        soup = BeautifulSoup(page)
        links =   [x['href'] for x in soup.findAll('a', attrs={'class':'l'})]
        
        return links
    
    if __name__=='__main__':
        # Example: search results written to file, one URL per line
        links = google_grab('halotis')
        open("links.txt", "w").write("\n".join(links))
    
  • Targeting Twitter Trends Script

    I noticed that several accounts are spamming the Twitter trends. Go to twitter.com and select one of the trends in the right column; you’ll undoubtedly see some tweets that blatantly insert words from the trending topics list into unrelated ads.

    I was curious just how easy it would be to get the trending topics to target them with tweets. Turns out it is amazingly simple and shows off some of the beauty of Python.

    This script doesn’t actually do anything with the trend information; it simply downloads and prints out the list. But combine this code with the sample code from RSS Twitter Bot in Python and you’ll have a recipe for some seriously powerful promotion (a sketch of that combination follows the script).

    import simplejson  # http://undefined.org/python/#simplejson
    import urllib
    
    # trends.json returns the current top trending topics as JSON
    result = simplejson.load(urllib.urlopen('http://search.twitter.com/trends.json'))
    
    print [trend['name'] for trend in result['trends']]
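
    As a starting point for that combination, here’s a hypothetical sketch that just formats a promotional tweet around the top trend; actually posting it is left to the bot code from the RSS Twitter Bot post (the message template is my own):

    # Hypothetical follow-up: build a tweet around the current top trend.
    trends = [trend['name'] for trend in result['trends']]
    if trends:
        message = '%s - check out http://www.halotis.com/' % trends[0]
        print message  # hand this string off to the posting code from the bot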