Webscraper Python



Recently I came across a tool that takes care of many of the issues you usually face while scraping websites. The tool is called Scraper API, and it provides an easy-to-use REST API to scrape different kinds of websites (simple, JS-enabled, CAPTCHA-protected, etc.) with ease. Before I proceed further, allow me to introduce Scraper API.

What is Scraper API?

If you visit their website, you will find their mission statement:

Scraper API handles proxies, browsers, and CAPTCHAs, so you can get the HTML from any web page with a simple API call!

As the statement suggests, it offers everything you need to deal with the issues you usually come across while writing your scrapers.

Development

Library

Scraper API provides a REST API that can be consumed in any language. Since this post is related to Python, I will mainly focus on using the requests library with this tool.

You must first sign up with them; in return, they will provide you with an API key to use their platform. They offer 1,000 free API calls, which is enough to test the platform. Beyond that, they offer different plans, from starter to enterprise, which you can view here.

Let’s try a simple example, which is also given in the documentation.

import requests

API_KEY = '<YOUR API KEY>'
# http://httpbin.org/ip simply echoes the IP address the request came from
payload = {'api_key': API_KEY, 'url': 'http://httpbin.org/ip'}
r = requests.get('http://api.scraperapi.com', params=payload, timeout=60)
print(r.text)

Assuming you have registered and obtained an API key, which you can find on the dashboard, you can start working right away. When you run this program, it prints the IP address your request came from.

Do you see? Every time you run it, it returns a new IP address. Cool, isn’t it?
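To see the rotation in action, here is a minimal sketch (it assumes the same httpbin.org/ip target used above) that fires a few calls in a row and prints which IP each response came from:

import requests

API_KEY = '<YOUR API KEY>'
payload = {'api_key': API_KEY, 'url': 'http://httpbin.org/ip'}

# Each call should be routed through a different proxy,
# so the echoed IP changes from request to request.
for _ in range(3):
    r = requests.get('http://api.scraperapi.com', params=payload, timeout=60)
    print(r.text)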

There are some scenarios where you would like to keep using the same proxy, to give the impression that a single user is visiting different parts of the website. For that, you can pass the session_number parameter in the payload variable above.

payload = {'api_key': API_KEY, 'url': URL_TO_SCRAPE, 'session_number': '123'}
r = requests.get('http://api.scraperapi.com', params=payload, timeout=60)

Run it a few times and it produces the same proxy IP in every response, because the session number pins your requests to a single proxy.

Creating an OLX Scraper

Like previous scraping-related posts, I am going to pick OLX again for this post. I will iterate over the listing page first and then scrape the individual items. Below is the complete code.

from time import sleep

import requests
from bs4 import BeautifulSoup

API_KEY = '<YOUR API KEY>'
URL_TO_SCRAPE = '<OLX LISTING PAGE URL>'

all_links = []

# Fetch the listing page via Scraper API, pinned to a single proxy session
payload = {'api_key': API_KEY, 'url': URL_TO_SCRAPE, 'session_number': '123'}
r = requests.get('http://api.scraperapi.com', params=payload, timeout=60)
if r.status_code == 200:
    html = r.text
    soup = BeautifulSoup(html, 'lxml')
    # Collect the individual item links; this selector is an approximation
    # of OLX's markup and may need adjusting against the live page
    for l in soup.select('li[data-aut-id="itemBox"] a[href]'):
        all_links.append('https://www.olx.com.pk' + l['href'])

idx = 0
if len(all_links) > 0:
    for link in all_links:
        sleep(5)
        payload = {'api_key': API_KEY, 'url': link, 'session_number': '123'}
        if idx > 1:
            break
        r = requests.get('http://api.scraperapi.com', params=payload, timeout=60)
        if r.status_code == 200:
            html = r.text
            soup = BeautifulSoup(html, 'lxml')
            price_section = soup.find('span', {'data-aut-id': 'itemPrice'})
            if price_section:
                print(price_section.text)
        idx += 1

I am using BeautifulSoup to parse the HTML. I have only extracted the price here, because the purpose is to demonstrate the API itself rather than BeautifulSoup. You should see my post here in case you are new to scraping and Python.
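If you want to pull more fields, the same pattern applies. For instance, here is a hedged sketch that grabs the item title as well; the itemTitle attribute value is an assumption about OLX's markup, so verify it against the live page:

# Hypothetical: assumes OLX tags the title with data-aut-id="itemTitle"
title_section = soup.find('h1', {'data-aut-id': 'itemTitle'})
if title_section:
    print(title_section.text.strip())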

Conclusion

In this post, you learned how to use Scraper API for scraping purposes. Whatever you can do with this API, you can do by other means as well; however, this API gives you everything under one umbrella, especially the rendering of pages via JavaScript, for which you would otherwise need headless browsers, which can become cumbersome to set up on remote machines. Scraper API takes care of that and charges nominal fees to individuals and enterprises. The company I work with spends hundreds of dollars every month just on proxy IPs.
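For instance, instead of running a headless browser yourself, here is a minimal sketch of JS rendering through the API; the render parameter is taken from Scraper API's documentation, but treat the exact spelling as something to verify there, and the target URL is a placeholder:

# Ask Scraper API to render the page with a headless browser before returning it
payload = {'api_key': API_KEY, 'url': '<JS-HEAVY PAGE URL>', 'render': 'true'}
r = requests.get('http://api.scraperapi.com', params=payload, timeout=60)
print(r.text)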

Oh, and if you sign up here with my referral link or enter the promo code adnan10, you will get a 10% discount. In case you do not get the discount, just let me know via email on my site and I will be sure to help you out.

In the coming days, I will be writing more posts about Scraper API, discussing further features.

I am planning to write a book about web scraping in Python. Click here to give your feedback.



Web scraping, also called web data mining or web harvesting, is the process of constructing an agent which can extract, parse, download and organize useful information from the web automatically.
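As a tiny illustration of the extract-and-parse steps, here is a minimal sketch using the requests and BeautifulSoup libraries (example.com is just a placeholder target):

import requests
from bs4 import BeautifulSoup

# Download a page, parse it, and pull out one piece of information
r = requests.get('https://example.com', timeout=30)
soup = BeautifulSoup(r.text, 'lxml')
print(soup.title.text)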

This tutorial will teach you the various concepts of web scraping and make you comfortable with scraping various types of websites and their data.

This tutorial will be useful for graduates, postgraduates, and research students who either have an interest in this subject or have this subject as a part of their curriculum. The tutorial suits the learning needs of both beginners and advanced learners.


The reader must have basic knowledge of HTML, CSS, and JavaScript. He/she should also be aware of the basic terminology used in web technology, along with Python programming concepts. If you do not have knowledge of these concepts, we suggest you go through tutorials on them first.




