Member since: 07-14-2017
Posts: 12
Kudos Received: 4
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3513 | 02-14-2018 07:15 PM |
12-21-2018 04:31 AM
1 Kudo
Install the AWS CLI by following the steps in the link below:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

Verify the AWS CLI installation:
$ aws --version

Configure AWS credentials:
$ aws configure

Download the Ozone 0.3.0-alpha tarball from here and untar it. Go to the $PWD/ozone-0.3.0-alpha/compose/ozones3 directory and start the server:
$ docker-compose up -d

Create an alias for the Ozone S3 endpoint:
$ alias ozones3api='aws s3api --endpoint http://localhost:9878'

Create a bucket:
$ ozones3api create-bucket --bucket documents

Put objects into the bucket:
$ ozones3api put-object --bucket documents --key S3Doc --body ./S3.md
$ ozones3api put-object --bucket documents --key hddsDoc --body ./Hdds.md
$ ozones3api put-object --bucket documents --key javaDoc --body ./JavaApi.md

List the objects in a bucket:
$ ozones3api list-objects --bucket documents
{
  "Contents": [
    {"LastModified": "2018-11-02T21:57:40.875Z", "ETag": "1541195860875", "StorageClass": "STANDARD", "Key": "hddsDoc", "Size": 2845},
    {"LastModified": "2018-11-02T22:36:23.358Z", "ETag": "1541198183358", "StorageClass": "STANDARD", "Key": "javaDoc", "Size": 5615},
    {"LastModified": "2018-11-02T21:56:47.370Z", "ETag": "1541195807370", "StorageClass": "STANDARD", "Key": "s3doc", "Size": 1780}
  ]
}
{"ContentType": "application/octet-stream","ContentLength": 2845,"Expires": "Fri, 02 Nov 2018 22:39:00 GMT","CacheControl": "no-cache","Metadata": {}} Head Bucket: $ ozones3api head-bucket --bucket documents Head Object: $ ozones3api head-object --bucket documents --key hddsDoc
{"ContentType": "binary/octet-stream","LastModified": "Fri, 2 Nov 2018 21:57:40 GMT","ContentLength": 2845,"Expires": "Fri, 02 Nov 2018 22:41:55 GMT","ETag": "1541195860875","CacheControl": "no-cache","Metadata": {}} Copy Object: This is used to create a copy object which already exists in Ozone. Suppose, we want to take a backup of keys in documents bucket in to a new bucket, we can use this. 1. Create a destination bucket. $ ozones3api create-bucket --bucket documentsbackup
{"Location": "http://localhost:9878/documentsbackup"} 2. Copy object from source to destination bucket $ ozones3api copy-object --bucket documentsbackup --key s3doc --copy-source documents/s3doc
{"CopyObjectResult": {"LastModified": "2018-11-02T22:49:20.061Z","ETag": "21df0aee-26a9-464c-9a81-620f7cd1fc13"}} 3. List objects in destination bucket. $ ozones3api list-objects --bucket documentsbackup
{"Contents": [{"LastModified": "2018-11-02T22:49:20.061Z","ETag": "1541198960061","StorageClass": "STANDARD","Key": "s3doc","Size": 1780}]} Delete Object: We have 2 ways to delete.
Delete one object at a time Delete multiple objects at a time. Ozone over S3 supports both of them. Delete Object: $ ozones3api delete-object --bucket documents --key hddsDoc Multi Delete: $ ozones3api delete-objects --bucket documents --delete 'Objects=[{Key=javaDoc},{Key=s3Doc}]'
{"Deleted": [{"Key": "javaDoc"},{"Key": "s3Doc"}]}
11-28-2017 09:00 PM
2 Kudos
This article describes how to perform distcp between two clusters.
Here each cluster is Kerberized with a different KDC server, and cross-realm trust is set up between the two MIT KDC servers.
Follow this blog to set up the Kerberos cross-realm trust:
https://community.hortonworks.com/articles/18686/kerberos-cross-realm-trust-for-distcp.html
Once the above setup is complete, proceed further.
Add the below property to mapred-site.xml and restart all affected components.

<property>
  <name>mapreduce.job.send-token-conf</name>
  <value>yarn.http.policy|^yarn.timeline-service.webapp.*$|^yarn.timeline-service.client.*$|hadoop.security.key.provider.path|hadoop.rpc.protection|dfs.nameservices|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$|^dfs.client.failover.proxy.provider.*$|dfs.namenode.kerberos.principal|dfs.namenode.kerberos.principal.pattern</value>
</property>
Assume there are two clusters, cluster1 and cluster2.
Now run hadoop distcp from cluster1 as below:

$ hadoop distcp \
  -Ddfs.client.failover.proxy.provider.cluster2=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
  -Ddfs.namenode.rpc-address.cluster2.nn2=<<nnrpcaddress>> \
  -Ddfs.namenode.rpc-address.cluster2.nn1=<<nnrpcaddress>> \
  -Ddfs.ha.namenodes.cluster2=nn1,nn2 \
  -Ddfs.nameservices=cluster1,cluster2 \
  hdfs://cluster1/tmp/test hdfs://cluster2/tmp/test
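Before running the distcp, it can help to confirm that cluster1 users can actually reach cluster2 through the cross-realm trust, and afterwards to confirm the data landed. A minimal sketch, assuming a hypothetical principal user1@CLUSTER1REALM and the <<nnrpcaddress>> placeholder standing in for cluster2's active NameNode RPC address:

# Authenticate against cluster1's KDC (principal name is only an example)
$ kinit user1@CLUSTER1REALM

# If the cross-realm trust is working, listing cluster2 directly should succeed
$ hdfs dfs -ls hdfs://<<nnrpcaddress>>/tmp

# After the distcp completes, verify the copied data on the destination
$ hdfs dfs -ls hdfs://<<nnrpcaddress>>/tmp/test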
11-21-2017 04:58 PM
@Michael Bronson "rm -rf" is a Linux/Unix command; it only deletes directories created in the local Linux/Unix file system. "hdfs dfs -rmr /DirectoryPath" is for deleting files and directories in the HDFS filesystem. In case I misinterpreted your question and you meant the difference between "hdfs dfs -rmr" and "hdfs dfs -rm -rf": "-rmr" is the older, now-deprecated shorthand for "-rm -r", and HDFS's rm also accepts "-f" as a separate flag (it suppresses the error when the path does not exist), so the equivalent current form is "hdfs dfs -rm -r -f /DirectoryPath".
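A quick illustration of the difference, using a hypothetical /data/logs path that exists both on the local disk and in HDFS:

# Deletes the directory from the local Linux filesystem only
$ rm -rf /data/logs

# Deletes the directory from HDFS (old shorthand, deprecated but still works)
$ hdfs dfs -rmr /data/logs

# Equivalent current form: -r is recursive, -f suppresses the "no such file" error
$ hdfs dfs -rm -r -f /data/logs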
12-02-2017 11:04 PM
Thank you for the answer, but we created another new worker machine instead of that machine. I think it was a waste of time to find the problem on that machine, and better to create a new one.