What ZynU Amazon Scraper does
ZAS reads ASIN lists from plain text files, opens each Amazon product page in a real Chrome browser (using undetected-chromedriver to avoid bot detection), extracts the full product record, and writes it to three output formats simultaneously. Progress is tracked in SQLite, so you can stop and resume at any time without re-scraping what you already have.
It runs entirely on your own machine — no cloud, no API quotas, no monthly fee. Configure your Amazon Associates affiliate tag once in settings, and every exported URL is automatically tagged and ready to share.
What's inside the download
- Loads ASINs from text files, opens each Amazon product page via undetected ChromeDriver, extracts all product fields, and saves incrementally to CSV + JSON + SQLite after every product — no data loss on crash.
- Dark-themed GUI with sidebar navigation, live log output, non-blocking QThread workers, and a settings panel for all scraper parameters — no command-line knowledge required.
- One-command Python installer for Linux, macOS, and Windows. Detects the OS, creates a virtualenv, installs all packages, checks for Chrome/Chromium, and generates a platform-specific launcher and desktop shortcut.
- Alternative Bash installer for Ubuntu, Debian, Mint, Arch, and Fedora. Handles APT/DNF/Pacman dependencies, creates a .desktop entry, and adds a symlink to ~/.local/bin.
- Double-click .bat launcher or PowerShell script for Windows. Checks for Python on PATH, creates a venv, installs requirements, and places a .lnk shortcut on the Desktop.
- Drop one or more .txt files into the asin/ folder — each filename becomes the product category label. One ASIN per line. The scraper loads all files on startup and tracks progress per file.
From ASIN list to data export
Place .txt files in the asin/ folder — one ASIN per line. The filename (without .txt) becomes the product category in the output. Multiple files can be loaded at once. Already-scraped ASINs are automatically skipped via the SQLite database.
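The loading-and-skipping step can be sketched roughly like this (the table name and column are assumptions for illustration, not necessarily ZAS's internal schema):

```python
import sqlite3
from pathlib import Path

def load_pending_asins(asin_dir: str, db_path: str) -> list[tuple[str, str]]:
    """Read every .txt file in asin_dir and return (asin, category) pairs
    that are not yet recorded in the SQLite progress database."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS products (asin TEXT PRIMARY KEY)")
    done = {row[0] for row in conn.execute("SELECT asin FROM products")}
    conn.close()

    pending = []
    for txt in sorted(Path(asin_dir).glob("*.txt")):
        category = txt.stem  # filename without .txt becomes the category label
        for line in txt.read_text().splitlines():
            asin = line.strip()
            if asin and asin not in done:
                pending.append((asin, category))
    return pending
```

Because already-scraped ASINs are filtered out before the browser ever launches, stopping and restarting the scraper costs nothing.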
For each ASIN, the scraper opens Amazon's homepage first, then navigates to the product URL — mimicking real user behaviour. Random delays (5–10s), human-like page scrolling, and browser restart every 100 products reduce detection probability.
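The pacing logic behind those numbers is simple; a minimal sketch (function names are assumptions, and the real scraper applies these around actual browser actions):

```python
import random

RESTART_EVERY = 100  # restart the browser every N products, per the docs

def next_delay(delay_min: float = 5.0, delay_max: float = 10.0) -> float:
    """Pick a human-like random pause (seconds) between page loads."""
    return random.uniform(delay_min, delay_max)

def should_restart_browser(products_scraped: int) -> bool:
    """True when the browser should be torn down and relaunched."""
    return products_scraped > 0 and products_scraped % RESTART_EVERY == 0
```

Randomizing the delay rather than sleeping a fixed interval avoids the perfectly regular request timing that bot-detection systems look for.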
Title, price, star rating, review count, brand, category, main image (HD), gallery images, product specifications table, variant options, and affiliate-tagged URL are extracted from the product page DOM.
Every successfully scraped product is immediately written to amazon_databank.csv (Excel-friendly), amazon_databank.json (API-ready), and amazon_databank.db (SQLite for querying). Saves happen per product — a crash never loses more than one record.
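A per-product triple write can be sketched as follows — the field list is truncated for brevity, and rewriting the whole JSON array on each save is an assumption about how ZAS keeps the file valid, not a confirmed detail:

```python
import csv, json, sqlite3, os

FIELDS = ["asin", "title", "price", "category"]  # subset of the real field list

def save_product(product: dict, basename: str = "amazon_databank") -> None:
    """Write one product to the CSV, JSON, and SQLite outputs immediately,
    so a crash can never lose more than the record in flight."""
    # CSV: append a row, writing the header only when the file is new
    new_file = not os.path.exists(basename + ".csv")
    with open(basename + ".csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(product)

    # JSON: rewrite the full array so the file is always valid JSON
    records = []
    if os.path.exists(basename + ".json"):
        with open(basename + ".json") as f:
            records = json.load(f)
    records.append(product)
    with open(basename + ".json", "w") as f:
        json.dump(records, f, indent=2)

    # SQLite: upsert keyed on ASIN, so re-saves never duplicate rows
    conn = sqlite3.connect(basename + ".db")
    conn.execute("CREATE TABLE IF NOT EXISTS products "
                 "(asin TEXT PRIMARY KEY, title TEXT, price TEXT, category TEXT)")
    conn.execute("INSERT OR REPLACE INTO products VALUES "
                 "(:asin, :title, :price, :category)", product)
    conn.commit()
    conn.close()
```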
The scraper runs in configurable batches (default 500 ASINs). After each batch it pauses for a cooldown period (default 30 min). A daily limit (default 2,000) protects your IP. All counters reset at midnight.
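The batch, cooldown, and daily-limit counters amount to a small state machine; a sketch under the defaults above (class and method names are assumptions):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RateLimiter:
    batch_size: int = 500      # products per batch before a cooldown
    daily_limit: int = 2000    # hard cap per calendar day
    scraped_today: int = 0
    day: date = field(default_factory=date.today)

    def record(self) -> str:
        """Call after each product; returns 'ok', 'cooldown', or 'stop'."""
        if date.today() != self.day:           # counters reset at midnight
            self.day, self.scraped_today = date.today(), 0
        self.scraped_today += 1
        if self.scraped_today >= self.daily_limit:
            return "stop"                      # daily limit reached, protect the IP
        if self.scraped_today % self.batch_size == 0:
            return "cooldown"                  # pause (default 30 min) between batches
        return "ok"
```

A caller would sleep through the cooldown on "cooldown" and shut the browser down entirely on "stop".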
Fields exported for every product
Every scraped product is saved with the following fields across all three output formats (CSV, JSON, SQLite). Structured fields like gallery images, product specs, and variants are stored as JSON strings in CSV and as native arrays/objects in JSON.
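The CSV-vs-JSON split works because structured fields are serialized before they hit a CSV cell; a sketch (the exact field names are assumptions):

```python
import json

STRUCTURED = {"gallery_images", "specs", "variants"}  # assumed field names

def to_csv_row(product: dict) -> dict:
    """Flatten structured fields to JSON strings so each fits in one CSV cell;
    the JSON export keeps them as native lists/objects."""
    return {k: (json.dumps(v) if k in STRUCTURED else v)
            for k, v in product.items()}
```

Reading the CSV back, a single json.loads() on those columns recovers the original structure.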
Install in one command
Download the zip, extract it, and run the installer for your platform. The installer handles everything: virtual environment creation, Python package installation, Chrome detection, launcher script, and desktop shortcut.
Prerequisites
Linux / macOS:
  bash install.sh
  python3 install.py
Windows:
  install.bat
  powershell -File install.ps1
  python install.py
Launch manually after install
Linux / macOS:
  bash run_gui.sh
Windows:
  run_gui.bat
Settings you can tune in the GUI
All parameters are configurable from the Settings panel inside the app. They are saved to scraper_gui_config.json in the project directory and persist between sessions.
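Persistence of this kind usually reduces to merging the saved file over built-in defaults; a sketch (the default values mirror those quoted elsewhere in this page, but the key names are assumptions about the config file's schema):

```python
import json, os

DEFAULTS = {"delay_min": 5, "delay_max": 10,
            "batch_size": 500, "daily_limit": 2000}

def load_config(path: str = "scraper_gui_config.json") -> dict:
    """Merge saved settings over the defaults; a missing file means defaults."""
    cfg = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            cfg.update(json.load(f))
    return cfg

def save_config(cfg: dict, path: str = "scraper_gui_config.json") -> None:
    """Write the current settings back so they persist between sessions."""
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
```

Merging over defaults means a config file written by an older version never breaks a newer one: unknown-to-the-file keys simply keep their default values.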
Common issues and fixes
- PySide6 install fails on Linux
Install the required system libraries first: sudo apt install python3-dev libgl1 libglib2.0-0 — then re-run the installer.
- Chrome not detected on Linux
Download and install the google-chrome-stable .deb package from dl.google.com, then run sudo apt-get install -f to resolve dependencies.
- "python is not recognized" on Windows
Reinstall Python from python.org and check the ✅ "Add Python to PATH" checkbox during setup. Restart your terminal afterward.
- Permission denied on macOS (install.sh)
Run chmod +x install.sh first, then bash install.sh. macOS may also show a Gatekeeper warning — open System Settings › Privacy & Security to allow it.
- Scraper hits CAPTCHA immediately
Increase delay_min / delay_max in settings and reduce batch_size. Consider running at off-peak hours and restarting your router to rotate your IP.
- Products show no price
Some Amazon pages load pricing dynamically via JavaScript. The scraper scrolls the page and waits, but heavily dynamic listings may still come back with an empty price field. These cases are logged to scraper_debug.log.
Start scraping Amazon products today
Free, open-source, and runs entirely on your own hardware. No API key, no monthly fee, no data leaving your machine.