Handle large files correctly
I have never downloaded files of >= 2^31 or >= 2^32 bytes - we should set up a test script in the contrib/ directory and for Travis. Having this in 'make check' would take too long, I guess.
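A minimal sketch of a fixture such a contrib/ test script could build on: create sparse files just past the 2^31 and 2^32 boundaries (which a local test server could then serve) and verify the sizes survive a stat() round trip with 64-bit off_t. The file names and the one-byte-past-boundary sizes are only illustrative, not part of any existing test.

```c
/* Create sparse files just past 2^31 and 2^32 bytes and verify their
 * sizes with stat(). Names and sizes are illustrative only. */
#define _FILE_OFFSET_BITS 64	/* 64-bit off_t also on 32-bit systems */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static int make_sparse(const char *name, off_t size)
{
	int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd < 0 || ftruncate(fd, size) < 0)
		return -1;
	close(fd);

	/* check that the size did not get truncated to 32 bits anywhere */
	struct stat st;
	if (stat(name, &st) < 0 || st.st_size != size)
		return -1;
	return 0;
}

int main(void)
{
	/* one byte past 2^31 and one byte past 2^32 */
	if (make_sparse("big-2g.bin", ((off_t) 1 << 31) + 1))
		return EXIT_FAILURE;
	if (make_sparse("big-4g.bin", ((off_t) 1 << 32) + 1))
		return EXIT_FAILURE;

	puts("sparse test files created");
	return EXIT_SUCCESS;
}
```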
Wget2 currently downloads into memory and saves the file to disk only if requested. It parses files (HTML, CSS, robots.txt, sitemaps with -r) in memory. I see two points that have to be addressed:
- files that are not to be parsed could go directly (or with a small memory cache) to disk.
- files to be parsed should have an upper size limit to avoid DoS attacks, e.g. if a CSS file grows beyond 10 MB, we save the file to disk without parsing (rough sketch below). WDYT?
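A hedged sketch of that policy, independent of wget2's real internals (all names and the 10 MiB cap are hypothetical): body chunks for non-parsed files go straight to disk, while parseable files are buffered in memory only up to a cap and spill to disk unparsed once the cap is exceeded.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PARSE_CAP (10u * 1024 * 1024)	/* hypothetical 10 MiB upper limit */

typedef struct {
	int parseable;		/* content type we would normally parse */
	FILE *file;		/* disk destination, opened lazily */
	char *buf;		/* in-memory buffer for parseable bodies */
	size_t len;
} body_sink;

/* Feed one received body chunk into the sink. */
static int sink_write(body_sink *s, const char *data, size_t n, const char *path)
{
	if (s->parseable && !s->file) {
		if (s->len + n <= PARSE_CAP) {
			/* still under the cap: keep it in memory for parsing */
			char *p = realloc(s->buf, s->len + n);
			if (!p)
				return -1;
			memcpy(p + s->len, data, n);
			s->buf = p;
			s->len += n;
			return 0;
		}
		/* cap exceeded: give up on parsing, flush buffer to disk */
		s->parseable = 0;
		s->file = fopen(path, "wb");
		if (!s->file || fwrite(s->buf, 1, s->len, s->file) != s->len)
			return -1;
		free(s->buf);
		s->buf = NULL;
	}

	/* non-parsed (or no-longer-parsed) content streams to disk */
	if (!s->file && !(s->file = fopen(path, "wb")))
		return -1;

	return fwrite(data, 1, n, s->file) == n ? 0 : -1;
}
```

The caller would then either hand s->buf to the HTML/CSS/robots.txt parser or just close s->file; the cap keeps a hostile server from forcing unbounded memory use during -r.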