
How to scrape SERPs and websites

Three methods for three different data locations

June 25, 2025

1. Scrape SERP

Use case:
Extract search engine result pages (SERPs) for a given set of keywords — including titles, URLs, meta descriptions, featured snippets, People Also Ask, and other SERP features — to analyse competitors, spot trends, or monitor rankings.

Tools/Methods:

  • Use Python with SerpAPI, or scrape Google directly with BeautifulSoup plus Selenium/Playwright
  • Use SEO tools' APIs (e.g., SEMrush, Ahrefs, SISTRIX) for structured access
  • Export data to Google Sheets or CSV for further analysis
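Whichever tool fetches the SERP HTML, the parsing step looks roughly the same. The sketch below uses BeautifulSoup on a simplified, inline HTML sample; the class names (`div.result`, `span.snippet`) are illustrative assumptions, since Google's real markup differs and changes frequently:

```python
from bs4 import BeautifulSoup

# Simplified SERP-like HTML. Real Google markup uses different,
# frequently changing class names -- these are assumptions for the sketch.
SAMPLE_HTML = """
<div class="result">
  <a href="https://example.com/page"><h3>Example title</h3></a>
  <span class="snippet">Example meta description.</span>
</div>
<div class="result">
  <a href="https://example.org/other"><h3>Other title</h3></a>
  <span class="snippet">Another description.</span>
</div>
"""

def parse_serp(html: str) -> list[dict]:
    """Extract title, URL and snippet from each organic result block."""
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for block in soup.select("div.result"):
        link = block.select_one("a")
        title = block.select_one("h3")
        snippet = block.select_one("span.snippet")
        results.append({
            "title": title.get_text(strip=True) if title else None,
            "url": link["href"] if link else None,
            "snippet": snippet.get_text(strip=True) if snippet else None,
        })
    return results

rows = parse_serp(SAMPLE_HTML)
```

The resulting list of dicts drops straight into `csv.DictWriter` or a Google Sheets import for the analysis step.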

2. Scrape Page Elements through Screaming Frog

Use case:
Extract specific HTML elements (e.g., H1s, meta tags, schema, structured data, Open Graph tags, product info, or custom CSS selectors) from a website's pages to audit or aggregate content.

Tools/Methods:

  • Screaming Frog Custom Extraction (Configuration → Custom → Extraction tab)
  • Use XPath or CSSPath to extract elements
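A few typical expressions to paste into the Custom Extraction fields (one per extractor; which attribute or text mode to select depends on the element):

```
XPath examples:
//h1                                          (page H1)
//meta[@name='description']/@content          (meta description)
//meta[@property='og:title']/@content         (Open Graph title)
//script[@type='application/ld+json']         (JSON-LD structured data)

CSSPath examples:
h1
meta[property='og:title']
```

Screaming Frog then adds one column per extractor to the crawl export, so the audit data lands alongside the standard crawl fields.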

3. Scrape Website Database

Use case:
Extract structured data stored in a web application's backend (e.g., product listings, booking systems, or filterable search results) that dynamically loads content via JavaScript/AJAX — especially when there's no accessible API.

Tools/Methods:

  • Use browser automation tools like Selenium or Playwright to render pages and extract JavaScript-loaded content
  • Use network inspector in Chrome DevTools to reverse-engineer API endpoints and access JSON data directly
  • Check the site's terms of service and robots.txt first, and confirm that scraping is permitted before proceeding
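Once the DevTools Network tab has revealed the underlying endpoint, you can request it directly (e.g. with `requests` or `urllib`) and skip browser rendering entirely. The sketch below shows the flattening step on a hypothetical payload; the endpoint shape and field names (`items`, `price.amount`) are assumptions, so adapt them to whatever the real response returns:

```python
import json

# A response body as a hypothetical JSON endpoint (found via the
# DevTools Network tab) might return it; field names are assumptions.
RAW_RESPONSE = json.dumps({
    "items": [
        {"id": 1, "name": "Widget A", "price": {"amount": 9.99, "currency": "EUR"}},
        {"id": 2, "name": "Widget B", "price": {"amount": 14.5, "currency": "EUR"}},
    ],
    "total": 2,
})

def flatten_items(raw: str) -> list[dict]:
    """Turn the nested JSON payload into flat rows ready for CSV export."""
    payload = json.loads(raw)
    return [
        {
            "id": item["id"],
            "name": item["name"],
            "price": item["price"]["amount"],
            "currency": item["price"]["currency"],
        }
        for item in payload.get("items", [])
    ]

product_rows = flatten_items(RAW_RESPONSE)
```

Hitting the JSON endpoint directly is usually faster and more stable than scraping the rendered page, since the data arrives already structured.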

...