Hello,
I am trying to move data from our Hadoop cluster to an AWS S3 bucket, but even a 4 MB file takes a long time and I am unable to understand where the bottleneck is.
The command I am running is: aws s3 cp <source_folder> s3://<path>/ --recursive
The file size is 4 MB and the observed upload speed is only 50-60 KiB/s.
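In case it is relevant, these are the AWS CLI transfer settings I understand can be tuned (I have not changed them from the defaults yet, and the values below are only examples):

aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.multipart_chunksize 16MB

From what I have read these mainly help when many files are uploaded in parallel, so I am not sure they explain the slow speed on a single 4 MB file.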
I talked to the AWS side as well; according to them, the issue may be due to networking at the client end, i.e. the Cloudera Hadoop cluster.
Can anyone help me understand how I can move my data from Hadoop HDFS to AWS S3 efficiently?
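For reference, I have also come across hadoop distcp with the S3A connector as a possible alternative. This is only a rough sketch of what I was thinking of trying; the bucket name, paths, and credential properties are placeholders and I have not confirmed the right settings for my cluster:

# copy directly from HDFS to S3 over the S3A connector,
# passing credentials as Hadoop properties and using 10 parallel map tasks
hadoop distcp \
  -Dfs.s3a.access.key=<access_key> \
  -Dfs.s3a.secret.key=<secret_key> \
  -m 10 \
  hdfs:///<source_folder> \
  s3a://<bucket>/<path>/

My understanding is that the -m flag controls how many map tasks copy files in parallel, but I would appreciate guidance on whether this is the right approach at all.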