I recently found myself at the wrong end of a crappy internet connection, needing to download a 97MB file. Safari and Chrome would both choke partway through and leave me with an unusable, truncated 4MB file, with no way to resume the download. The OS X command-line download utility, curl, can resume downloads, but I had to check manually each time the transfer was interrupted and re-run the command.
After typing "up, enter" more times than I care to admit, I decided to figure out the automatic way. Here's my bash one-liner to get automatic download resume in curl:
ec=18; while [ $ec -eq 18 ]; do /usr/bin/curl -O -C - "http://www.example.com/a-big-archive.zip"; ec=$?; done
Explanation: the exit code curl chucks when a download is interrupted is 18 (CURLE_PARTIAL_FILE, "only a part of the file was transferred"), and $? gives you the exit code of the last command in bash. So, while the exit code is 18, keep re-running curl, saving under the remote filename (-O) and resuming where the previous attempt left off (-C -, where the lone dash tells curl to work out the resume offset itself).
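Spelled out over multiple lines, the loop looks like this. To demonstrate the retry logic without a flaky connection, I've swapped the real curl call for a stand-in function that exits with code 18 twice before succeeding (the function and its behavior are mine, not part of the original one-liner):

```shell
#!/bin/sh
# Stand-in for curl: exits 18 ("partial file") on the first two runs,
# then 0 on the third, mimicking an interrupted-then-completed download.
attempts=0
fake_curl() {
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then return 0; else return 18; fi
}

ec=18
while [ "$ec" -eq 18 ]; do
  fake_curl            # in real use: /usr/bin/curl -O -C - "http://www.example.com/a-big-archive.zip"
  ec=$?
done

echo "attempts: $attempts, final exit code: $ec"
# → attempts: 3, final exit code: 0
```

Seeding ec with 18 before the loop is what gets the first iteration to run at all; after that, each pass overwrites it with curl's real exit code.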
Update: As Jan points out in the comments, depending on what's going wrong with the download, your exit code may be different. Just change "18" to whatever code you're seeing to get it to work for you! (If you're feeling adventurous, you could change the condition to
while [ $ec -ne 0 ], but that feels like using a bare except in Python, which is bad. ;)
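One concrete hazard of the catch-everything condition: a permanent failure (bad URL, DNS error) would loop forever. A retry cap fixes that; here's a sketch, again using a stand-in function of my own invention (one that always exits 7, curl's "couldn't connect" code) so the cap is visible:

```shell
#!/bin/sh
# Stand-in that always fails with exit 7 ("couldn't connect"),
# to show the retry cap kicking in on a permanent error.
always_fail() { return 7; }

tries=0
max=10
ec=1
while [ "$ec" -ne 0 ] && [ "$tries" -lt "$max" ]; do
  always_fail          # in real use: /usr/bin/curl -O -C - "http://www.example.com/a-big-archive.zip"
  ec=$?
  tries=$((tries + 1))
done

echo "gave up after $tries tries with exit code $ec"
# → gave up after 10 tries with exit code 7
```

Still a bare-except at heart, but at least one that gives up eventually. (Newer versions of curl also ship a --retry flag that handles transient failures natively.)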