Why use iter_content and chunk_size in Python requests?


Why should I use iter_content, and in particular, what is the purpose of chunk_size? I have tried using it in every way, and the file seems to be saved successfully after downloading either way.

    import requests

    g = requests.get(url, stream=True)
    with open('c:/users/andriken/desktop/tiger.jpg', 'wb') as sav:
        for chunk in g.iter_content(chunk_size=1000000):
            print(chunk)
            sav.write(chunk)

Help me understand the use of iter_content and what happens when I use a chunk_size of 1000000 bytes. What is the purpose, and what are the results?

This prevents loading the entire response into memory at once (it also lets you implement concurrency, since you can work on the stream while waiting for the request to finish).

The typical purpose of a streaming request is media. Try to download a 500 MB .mp4 file using requests: you want to stream the response (and write the stream to disk in chunks of chunk_size) instead of waiting for all 500 MB to be loaded into Python at once.

If you want to implement any UI feedback (such as download progress, e.g. "downloaded <chunk_size> bytes..."), you will need to stream and chunk. If the response contains a Content-Length header, you can also calculate the percentage of completion on every chunk you save.
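Progress reporting then only needs the Content-Length header plus a running byte count. A sketch, assuming the server sends Content-Length (the percentage helper and function names are illustrative):

```python
import requests

def percent_done(downloaded, total):
    """Completion percentage; returns None when the total size is unknown."""
    if not total:
        return None
    return 100.0 * downloaded / total

def download_with_progress(url, dest, chunk_size=1024 * 1024):
    """Stream url to dest, printing progress after every chunk."""
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        total = int(resp.headers.get("Content-Length", 0))
        downloaded = 0
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                fh.write(chunk)
                downloaded += len(chunk)
                pct = percent_done(downloaded, total)
                if pct is not None:
                    print(f"downloaded {downloaded} bytes ({pct:.1f}%)")
                else:
                    print(f"downloaded {downloaded} bytes")
```

Note that servers using chunked transfer encoding may omit Content-Length, which is why the helper tolerates an unknown total.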

