Learning Selenium – Part 1

I had been through accessing web services and extracting information with Python. Later it occurred to me that if I can extract information, there must also be a way to push some information to a website. For a robot or script this becomes difficult, as most of today's websites automatically detect robots and scripts and block them from using up their resources. But what if a browser is automated to fill forms or click links on its own, as if a user were surfing the web? With this wish I paved my way to the holy and supreme deity "Google". And then I came across the so-called 'Selenium' module, which satiated me. It is available in many languages and is compatible with many browsers. And yeah, as I always work with things that are well documented, this module too has got pretty detailed documentation and a whole lot of community support.

It is actually a need of the day when you end up with a lack of resources, such as unavailability of web service APIs, IMAP support for websites, or hardware modules like GSM and GPS. You can actually watch links being clicked, forms being filled and files being downloaded right in front of your eyes, with you relaxing in your chair while the script works hard to do all that for you. I came up with a great idea that actually saved me the burden of buying a GSM module to get connected to an SMS service for communication. Since my Raspberry Pi [I often use it in my small projects] already has an internet connection, I can let a website providing a free SMS service get my task done, rather than using a module costing more than 1500 INR. Now, these websites are username/password protected and also have captcha authentication. Moreover, no API is available. Here, Selenium can actually take over such a website by downloading the captcha image, retrieving the text by OCR, and also filling out the user input on the website. A rough sketch of this idea follows below.
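To give a flavour of where this is heading, here is a minimal sketch of that captcha idea (the Selenium setup itself is covered further below). The URL and the element ids ("captcha-img", "username", "password", "captcha-text") are hypothetical placeholders for the actual site, and it assumes Tesseract plus the pytesseract and Pillow packages are installed:

from selenium import webdriver
from PIL import Image
import pytesseract

browser = webdriver.Chrome()
browser.get("http://example.com/login")  # hypothetical free-SMS website

# locate the captcha image and crop it out of a full-page screenshot
captcha = browser.find_element_by_id("captcha-img")
browser.save_screenshot("page.png")
loc, size = captcha.location, captcha.size
box = (loc['x'], loc['y'], loc['x'] + size['width'], loc['y'] + size['height'])
Image.open("page.png").crop(box).save("captcha.png")

# retrieve the captcha text by OCR
text = pytesseract.image_to_string(Image.open("captcha.png")).strip()

# fill out the user input information like a human would
browser.find_element_by_id("username").send_keys("myuser")
browser.find_element_by_id("password").send_keys("mypassword")
browser.find_element_by_id("captcha-text").send_keys(text)
browser.find_element_by_id("captcha-text").submit()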

The selenium module lets Python directly control the browser by automation, clicking links and filling in login information, almost as though there is a human user interacting with the page. Selenium allows you to interact with web pages in a much more advanced way than Requests and Beautiful Soup; but because it launches a web browser, it is a bit slower and hard to run in the background if, say, you just need to download some files from the Web.
In case you haven't come across Requests or Beautiful Soup, here is a brief overview: the Requests module lets you do a GET request to a URL, or in other words download a file. Beautiful Soup is merely an HTML parser, analogous to an XML parser for XML files.
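If you are curious, here is a minimal sketch of the two working together: it fetches a page with Requests and prints every link that Beautiful Soup finds in it. The URL is just my homepage, and both packages are assumed to be installed (pip install requests beautifulsoup4):

import requests
from bs4 import BeautifulSoup

# download the raw HTML of a page with a plain GET request
response = requests.get("http://shubhagrawal.in")

# parse the HTML and print the target of every link on the page
soup = BeautifulSoup(response.text, "html.parser")
for link in soup.find_all("a"):
    print(link.get("href"))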

I have recently been through Selenium and I am still learning more about it each day. In the tutorials section I will constantly update and edit posts to add important stuff and features as I go through them myself. I will start with setting up Selenium, then simple code for automating the browser, and finally a project involving a whole lot of automation of your browser and computer GUI as well.

I recommend that you follow this documentation for Selenium with Python.

Setting up Selenium in Linux : Using Chrome as web driver

I am using Ubuntu 14.04 and Chrome as my daily web browser. It is totally fine for Selenium even if you work with a different OS and web browser; most of the code remains the same. For Linux users:

$ sudo pip install selenium

This will install all the necessary modules for Selenium. In case you don't have pip installed on your system, then:

$ sudo apt-get install python-pip

However, the above commands may end in an error like 'connection refused' or 'permission denied'. This problem is most frequent for those who use a proxy to access the web. Adding '-E' after "sudo" will take care of this, as it preserves your environment variables (including the proxy settings) so that pip can be fetched.
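For example, behind a proxy the same two commands become:

$ sudo -E apt-get install python-pip
$ sudo -E pip install selenium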

Writing first Selenium code: Opening a url into browser

You will come across many websites with code like:

from selenium import webdriver

browser = webdriver.Chrome()
browser.get("http://shubhagrawal.in")

It's pretty simple: you just need to import the right module and call the correct functions with the correct parameters. Apparently this code should work, but there's a catch. Most of the time it will end up with an error like:

selenium.common.exceptions.WebDriverException: Message: 'Can not connect to the ChromeDriver'

I spent more than a couple of hours googling this error and ended up with a downvoted solution on Stack Overflow. It suggested keeping the versions of Selenium and ChromeDriver in sync. I went through the documentation and found that the latest version of Selenium must correspond with ChromeDriver version 2.20. Hence, I updated my Selenium using:

$ sudo pip install -U selenium

And then I downloaded ChromeDriver 2.20 from here. The file must be extracted and then made executable using:

$ chmod +x chromedriver

Now go edit your first code for selenium in this way:

from selenium import webdriver
import os

# point Selenium at the extracted ChromeDriver executable
chromedriver = "/path/to/chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
browser = webdriver.Chrome(chromedriver)

browser.get("http://shubhagrawal.in")

Give the path wherever your chromedriver executable was extracted. Now execute this file, and voila, you will see Chrome popping up with my homepage loading. Isn't that great?
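As a side note: instead of hard-coding the path, you can also drop the chromedriver executable somewhere on your PATH, for example:

$ sudo mv chromedriver /usr/local/bin/

With that in place, a plain webdriver.Chrome() with no arguments will find the driver by itself.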

In the next tutorial, I will explain how to make your own social networking site crawler, along with a general method of writing Selenium code.

2 Responses to “Learning Selenium – Part 1”

  • Abinash
    8 years ago

If you are going to run WebDriver on a Raspberry Pi and the sites that you deal with do not use any dynamic content (like JavaScript on the front-end), it would be better to use 'requests' and 'beautiful-soup' over Selenium. Why use something resource-heavy unless you are sure it can't be done without it, right?

    • shubh
      8 years ago

Yes @Abinash, it's absolutely true. urllib and Beautiful Soup are way better than Selenium for web scraping and crawling on a Raspberry Pi and other small processors. But one advantage Selenium gives is that it can automate the browser the way you want to visualize it, plus it can take care of robot security checks more like an actual user would. But from my point of view, urllib wins!
