SuckIT allows you to recursively visit and download a website's content to your disk.

CLI arguments for suckit 0.1.0:

USAGE:
    suckit [FLAGS] [OPTIONS] <url>

FLAGS:
    -c, --continue-on-error    Flag to enable or disable exit on error
    -h, --help                 Prints help information
    -V, --version              Prints version information
    -v, --verbose              Enable more information regarding the scraping process

OPTIONS:
        --delay <delay>                  Add a delay in seconds between downloads to reduce the likelihood of getting banned [default: 0]
    -d, --depth <depth>                  Maximum recursion depth to reach when visiting. -1 is the default and will go as far as it can [default: -1]
    -e, --exclude <exclude>              Regex filter to exclude saving pages that match this expression [default: $^]
    -i, --include <include>              Regex filter to limit to only saving pages that match this expression [default: .*]
    -j, --jobs <jobs>                    Maximum number of threads to use concurrently [default: 1]
    -o, --output <output>                Output directory
        --random-range <random-range>    Generate an extra random delay between downloads, from 0 to this number. This is added to the base delay seconds [default: 0]
    -t, --tries <tries>                  Maximum amount of retries on download failure [default: 20]
    -u, --user-agent <user-agent>        User agent to be used for sending requests [default: suckit]

ARGS:
    <url>    Entry point of the scraping
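For example, an invocation along these lines (example.com and ./mirror are placeholders) mirrors a site into a local directory using 4 concurrent threads, a base delay of 1 second plus up to 2 extra random seconds between downloads, and a maximum recursion depth of 2:

    suckit https://example.com -o ./mirror -j 4 --delay 1 --random-range 2 -d 2

The include and exclude filters can narrow what gets saved; this illustrative pair keeps only pages whose URLs end in .html while skipping anything under /private/ (the patterns are examples, not recommendations):

    suckit https://example.com -o ./mirror -i "\.html$" -e "/private/"

Combining --delay with --random-range jitters the interval between requests, which, as the option descriptions note, reduces the likelihood of getting banned by the target server.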

This is an unofficial package of the SuckIT source, which can be found at https://github.com/Skallwar/suckit. Made using the snap configuration found at
