08-17-2017 01:43 PM
Hi @pavan p,

The existing file1 will retain its 128 MB block size. If you want that data stored with a 64 MB block size instead, you can use the Hadoop copy command as shown below, which creates a copy of file1 as file3 using the block size defined by the dfs.blocksize property:

$ hadoop fs -cp /user/ubuntu/file1 /user/ubuntu/file3

You can also specify the block size directly on the command line (the value is given in bytes, e.g. 67108864 for 64 MB):

$ hadoop fs -D dfs.blocksize=xx -cp /user/ubuntu/file1 /user/ubuntu/file3

The only thing you then need to do is manually delete the old copy of the file if it is no longer needed!
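As a quick sanity check, here is a minimal sketch of how you could confirm the block size after the copy. The file names follow the example above, 67108864 is simply 64 MB expressed in bytes, and the -stat "%o" format option prints a file's block size:

$ hadoop fs -D dfs.blocksize=67108864 -cp /user/ubuntu/file1 /user/ubuntu/file3   # 67108864 bytes = 64 MB
$ hadoop fs -stat "Block size: %o" /user/ubuntu/file3   # should report 67108864
$ hadoop fs -stat "Block size: %o" /user/ubuntu/file1   # original file still reports 134217728 (128 MB)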
07-08-2017 06:21 PM
Hi @Venu Shanmukappa,

You can also use the Hadoop 'cp' command after following the steps below:

1) Configure core-site.xml with the following AWS properties:

<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>AWS access key ID. Omit for Role-based authentication.</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>AWS secret key. Omit for Role-based authentication.</value>
</property>

2) Add the AWS SDK JAR (aws-java-sdk-1.7.4.jar) shipped under the Hadoop tools directory to the HADOOP_CLASSPATH environment variable:

$ export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/tools/lib/*

3) Use the Hadoop 'cp' command to copy the source data (local HDFS) to the destination (AWS S3 bucket):

$ hadoop fs -cp /user/ubuntu/filename.txt s3n://S3-Bucket-Name/filename.txt
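If you would rather not store the keys in core-site.xml, the same s3n properties can be passed on the command line with -D. This is only a sketch assuming the setup above; YOUR_ACCESS_KEY and YOUR_SECRET_KEY are placeholders for your own credentials:

$ hadoop fs -D fs.s3n.awsAccessKeyId=YOUR_ACCESS_KEY \
            -D fs.s3n.awsSecretAccessKey=YOUR_SECRET_KEY \
            -cp /user/ubuntu/filename.txt s3n://S3-Bucket-Name/filename.txt

For larger data sets, hadoop distcp accepts the same s3n:// URI and copies files in parallel as a MapReduce job, e.g.:

$ hadoop distcp /user/ubuntu/dir s3n://S3-Bucket-Name/dir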