BASH
Robust File Download with Progress and Retries
Implement robust file downloads in Bash scripts using `curl` with progress indicators, timeout handling, and automatic retries for reliable fetching of external resources.
#!/bin/bash
URL="https://speed.hetzner.de/100MB.bin" # Example URL for a large file
OUTPUT_FILE="downloaded_file.bin"
MAX_RETRIES=5
RETRY_DELAY=10 # seconds

echo "Attempting to download $URL to $OUTPUT_FILE"

for (( i=1; i<=MAX_RETRIES; i++ )); do
    echo "Download attempt $i of $MAX_RETRIES..."
    # Use curl for downloading:
    # -L: Follow redirects
    # -f: Fail with a non-zero exit code on HTTP errors (4xx/5xx)
    #     instead of saving the server's error page to the output file
    # -o: Write output to the given file
    # -# / --progress-bar: Show a progress bar
    # --connect-timeout: Maximum time allowed for the connection phase
    # --max-time: Maximum time allowed for the whole transfer
    #             (300 s here, sized for a ~100 MB file)
    # curl's built-in --retry is not used; retries are handled
    # manually below to allow a custom delay
    if curl -L -f -o "$OUTPUT_FILE" --progress-bar --connect-timeout 10 --max-time 300 "$URL"; then
        echo "Download successful!"
        break
    else
        ERROR_CODE=$?
        echo "Download failed with curl exit code: $ERROR_CODE"
        if [ "$i" -lt "$MAX_RETRIES" ]; then
            echo "Retrying in $RETRY_DELAY seconds..."
            sleep "$RETRY_DELAY"
        else
            echo "Maximum retries reached. Download failed after $MAX_RETRIES attempts."
            rm -f "$OUTPUT_FILE" # discard any partial download
            exit 1
        fi
    fi
done

if [ -f "$OUTPUT_FILE" ]; then
    echo "Downloaded file size: $(du -h "$OUTPUT_FILE" | awk '{print $1}')"
    # Optional: Verify checksum here (refer to previous snippet)
    rm "$OUTPUT_FILE"
    echo "Cleaned up $OUTPUT_FILE"
fi
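The checksum verification left as a comment in the script could look like the sketch below. The function name `verify_sha256` and the demo file are illustrative, not part of the original snippet; in practice the expected hash would come from the file's publisher. The demo hashes a locally created file (the expected value is simply the SHA-256 of the string "hello" plus a newline), so it runs without network access. It assumes `sha256sum` from GNU coreutils is available.

```shell
#!/bin/bash
# Hypothetical checksum helper: compares a file's SHA-256 against
# an expected value and fails loudly on a mismatch.
verify_sha256() {
    local file=$1 expected=$2
    local actual
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "Checksum OK: $file"
    else
        echo "Checksum mismatch for $file" >&2
        return 1
    fi
}

# Demo on a locally created file with a known hash.
printf 'hello\n' > /tmp/demo.bin
verify_sha256 /tmp/demo.bin \
    5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
rm -f /tmp/demo.bin
```

In the script above, a failed verification would be a good reason to delete the file and retry, since a timeout can leave a truncated download on disk.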
How it works: This script performs a robust download with `curl`: it follows redirects, shows a progress bar, enforces connection and transfer timeouts, and implements custom retry logic with a fixed delay. The `for` loop attempts the download up to a specified number of times, breaking out on the first success and exiting with a non-zero status once the retry limit is exhausted. This makes the script reliable when fetching external resources that may be temporarily unavailable or affected by transient network issues, a common need in automated deployment and setup scripts.
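The manual retry pattern above can also be factored into a reusable function. The sketch below uses hypothetical names (`retry`, `flaky`); the flaky command simulates a transfer that fails twice before succeeding, so the retry logic can be exercised without network access. For simple cases, curl's built-in `--retry N` and `--retry-delay S` options cover similar ground without a loop.

```shell
#!/bin/bash
# Generic retry wrapper: runs a command up to $1 times with a
# delay of $2 seconds between attempts, returning the command's
# final exit status.
retry() {
    local max=$1 delay=$2; shift 2
    local attempt status
    for (( attempt=1; attempt<=max; attempt++ )); do
        "$@" && return 0
        status=$?
        echo "Attempt $attempt failed (exit $status)" >&2
        (( attempt < max )) && sleep "$delay"
    done
    return "$status"
}

# Demo: a command that fails twice, then succeeds on the third call.
tries_file=$(mktemp)
echo 0 > "$tries_file"
flaky() {
    local n=$(( $(cat "$tries_file") + 1 ))
    echo "$n" > "$tries_file"
    [ "$n" -ge 3 ] # succeed on the third call
}
retry 5 0 flaky && echo "succeeded after $(cat "$tries_file") attempts"
rm -f "$tries_file"
```

A real download would then be a one-liner such as `retry "$MAX_RETRIES" "$RETRY_DELAY" curl -L -f -o "$OUTPUT_FILE" "$URL"`, keeping the policy (how often, how long to wait) separate from the command being retried.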