BASH
Robust File Download with Curl and Retries
Master downloading files in Bash using 'curl', featuring error handling, retries, and timeouts for resilient asset fetching in web development workflows.
```bash
#!/bin/bash

# URL of the file to download (replace with a real URL for testing)
FILE_URL="https://speed.hetzner.de/100MB.bin"
DEST_PATH="/tmp/downloaded_file.bin"

# Check if curl is installed
if ! command -v curl &> /dev/null; then
    echo "Error: 'curl' command not found. Please install it."
    exit 1
fi

echo "Attempting to download $FILE_URL to $DEST_PATH..."

# Use curl with robust options:
# -f: Fail (exit with an error) on HTTP errors (4xx/5xx) instead of saving the error page.
# -s: Silent mode. Don't show the progress meter or error messages.
# -S: Show error. When used with -s, curl still prints an error message if it fails.
# -L: Follow HTTP redirects.
# -o <file>: Write output to <file> instead of stdout.
# --retry <num>: Retry up to <num> times if the connection fails or the transfer is incomplete.
# --retry-delay <seconds>: Wait <seconds> between retries.
# --connect-timeout <seconds>: Maximum time allowed for the connection phase.
# --max-time <seconds>: Maximum time allowed for the entire operation.
curl -fsSL --retry 5 --retry-delay 10 --connect-timeout 10 --max-time 60 -o "$DEST_PATH" "$FILE_URL"

# Check the exit code of curl
if [ $? -eq 0 ]; then
    echo "Download successful."
    # Verify the downloaded file (optional but recommended)
    if [ -f "$DEST_PATH" ]; then
        echo "File downloaded successfully to $DEST_PATH"
        # Add further processing here, e.g., unzip, verification of hash
        # unzip "$DEST_PATH"
    else
        echo "Error: Download reported success, but file not found at $DEST_PATH"
        exit 1
    fi
else
    echo "Error: Download failed after multiple retries. Check URL, network, or permissions."
    exit 1
fi
```
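The "verification of hash" mentioned in the comments above could be sketched as follows. `EXPECTED_SHA256` is a placeholder (here it holds the SHA-256 of an empty file so the sketch runs self-contained); in practice you would replace it with the checksum published alongside the real file:

```bash
#!/bin/bash

# Placeholder: replace with the checksum published alongside your file.
# (This value is the SHA-256 of an empty file, so the example is self-checking.)
EXPECTED_SHA256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
DEST_PATH="/tmp/downloaded_file.bin"

# Stand-in for a completed download so the example runs on its own.
: > "$DEST_PATH"

# sha256sum prints "<hash>  <filename>"; keep only the hash field.
ACTUAL_SHA256=$(sha256sum "$DEST_PATH" | awk '{print $1}')

if [ "$ACTUAL_SHA256" = "$EXPECTED_SHA256" ]; then
    echo "Checksum OK"
else
    echo "Checksum mismatch: expected $EXPECTED_SHA256, got $ACTUAL_SHA256" >&2
    exit 1
fi
```

Comparing against a known-good checksum catches both truncated transfers and tampered files, which a simple `[ -f "$DEST_PATH" ]` existence check cannot.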
How it works: This script uses `curl` to download a file robustly. The combined flags `-fsSL` make curl fail on HTTP errors (`-f`), suppress the progress meter (`-s`) while still printing an error message on failure (`-S`), and follow redirects (`-L`). Crucially, it leverages `curl`'s built-in retry mechanism (`--retry 5 --retry-delay 10`) to ride out transient network issues, and bounds each attempt with timeouts (`--connect-timeout 10 --max-time 60`). After the download attempt, it checks `curl`'s exit code (`$?`) to determine success or failure, providing reliable file retrieval for dependencies, build assets, or configuration files in web development environments.
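One further hardening step worth considering, sketched here as an addition rather than part of the original script: download to a temporary file and rename it into place only on success, so `DEST_PATH` never holds a partial file even if curl is interrupted mid-transfer. The `download_atomic` name is illustrative:

```bash
#!/bin/bash

# Fetch to a temporary file first, then mv into place only on success,
# so readers of the destination path never observe a half-written file.
download_atomic() {
    local url="$1" dest="$2"
    local tmp
    tmp=$(mktemp "${dest}.XXXXXX") || return 1

    if curl -fsSL --retry 5 --retry-delay 10 \
            --connect-timeout 10 --max-time 60 \
            -o "$tmp" "$url"; then
        mv "$tmp" "$dest"   # rename is atomic on the same filesystem
    else
        rm -f "$tmp"        # clean up the partial temp file
        return 1
    fi
}
```

Because `mv` within one filesystem is a single rename syscall, any process reading `DEST_PATH` sees either the old file or the complete new one, never a mix.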