Member since: 07-14-2017
Posts: 12
Kudos Received: 4
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3513 | 02-14-2018 07:15 PM |
12-21-2018
04:31 AM
1 Kudo
Install the AWS CLI by following the steps in https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

Verify the AWS CLI installation:
$ aws --version

Configure AWS credentials:
$ aws configure

Download the Ozone 0.3.0-alpha tarball and untar it. Go to the $PWD/ozone-0.3.0-alpha/compose/ozones3 directory and start the server:
$ docker-compose up -d

Create an alias command:
$ alias ozones3api='aws s3api --endpoint http://localhost:9878'

Create a bucket:
$ ozones3api create-bucket --bucket documents

Put objects to the bucket:
$ ozones3api put-object --bucket documents --key S3Doc --body ./S3.md
$ ozones3api put-object --bucket documents --key hddsDoc --body ./Hdds.md
$ ozones3api put-object --bucket documents --key javaDoc --body ./JavaApi.md

List objects in a bucket:
$ ozones3api list-objects --bucket documents
{"Contents": [{"LastModified": "2018-11-02T21:57:40.875Z","ETag": "1541195860875","StorageClass": "STANDARD","Key": "hddsDoc","Size": 2845},{"LastModified": "2018-11-02T22:36:23.358Z","ETag": "1541198183358","StorageClass": "STANDARD","Key": "javaDoc","Size": 5615},{"LastModified": "2018-11-02T21:56:47.370Z","ETag": "1541195807370","StorageClass": "STANDARD","Key": "s3doc","Size": 1780}]} Get Object from a Bucket: $ ozones3api get-object --bucket documents --key hddsDoc /tmp/hddsDoc
{"ContentType": "application/octet-stream","ContentLength": 2845,"Expires": "Fri, 02 Nov 2018 22:39:00 GMT","CacheControl": "no-cache","Metadata": {}} Head Bucket: $ ozones3api head-bucket --bucket documents Head Object: $ ozones3api head-object --bucket documents --key hddsDoc
{"ContentType": "binary/octet-stream","LastModified": "Fri, 2 Nov 2018 21:57:40 GMT","ContentLength": 2845,"Expires": "Fri, 02 Nov 2018 22:41:55 GMT","ETag": "1541195860875","CacheControl": "no-cache","Metadata": {}} Copy Object: This is used to create a copy object which already exists in Ozone. Suppose, we want to take a backup of keys in documents bucket in to a new bucket, we can use this. 1. Create a destination bucket. $ ozones3api create-bucket --bucket documentsbackup
{"Location": "http://localhost:9878/documentsbackup"} 2. Copy object from source to destination bucket $ ozones3api copy-object --bucket documentsbackup --key s3doc --copy-source documents/s3doc
{"CopyObjectResult": {"LastModified": "2018-11-02T22:49:20.061Z","ETag": "21df0aee-26a9-464c-9a81-620f7cd1fc13"}} 3. List objects in destination bucket. $ ozones3api list-objects --bucket documentsbackup
{"Contents": [{"LastModified": "2018-11-02T22:49:20.061Z","ETag": "1541198960061","StorageClass": "STANDARD","Key": "s3doc","Size": 1780}]} Delete Object: We have 2 ways to delete.
Delete one object at a time Delete multiple objects at a time. Ozone over S3 supports both of them. Delete Object: $ ozones3api delete-object --bucket documents --key hddsDoc Multi Delete: $ ozones3api delete-objects --bucket documents --delete 'Objects=[{Key=javaDoc},{Key=s3Doc}]'
{"Deleted": [{"Key": "javaDoc"},{"Key": "s3Doc"}]}
02-14-2018
07:15 PM
Hi @Mark, HBase Indexer provides the feature you are looking for. Here is the documentation link: https://doc.lucidworks.com/lucidworks-hdpsearch/2.6/Guide-Jobs.html#_hbase-indexer There is also an HCC article with an example: https://community.hortonworks.com/articles/1181/hbase-indexing-to-solr-with-hdp-search-in-hdp-23.html
11-28-2017
09:00 PM
2 Kudos
This article describes how to run distcp between two clusters.
Here each cluster is Kerberized with its own KDC server, and cross-realm trust is set up between the two MIT KDCs.
Follow this article to set up the Kerberos cross-realm trust:
https://community.hortonworks.com/articles/18686/kerberos-cross-realm-trust-for-distcp.html Once that setup is complete, proceed with the steps below.
Add the below property to mapred-site.xml and restart all affected components:

<property>
  <name>mapreduce.job.send-token-conf</name>
  <value>yarn.http.policy|^yarn.timeline-service.webapp.*$|^yarn.timeline-service.client.*$|hadoop.security.key.provider.path|hadoop.rpc.protection|dfs.nameservices|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$|^dfs.client.failover.proxy.provider.*$|dfs.namenode.kerberos.principal|dfs.namenode.kerberos.principal.pattern</value>
</property>
Assume there are two clusters, cluster1 and cluster2.

Now run hadoop distcp from cluster1 as below:
$ hadoop distcp -Ddfs.client.failover.proxy.provider.cluster2=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider -Ddfs.namenode.rpc-address.cluster2.nn1=<<nnrpcaddress>> -Ddfs.namenode.rpc-address.cluster2.nn2=<<nnrpcaddress>> -Ddfs.ha.namenodes.cluster2=nn1,nn2 -Ddfs.nameservices=cluster1,cluster2 hdfs://cluster1/tmp/test hdfs://cluster2/tmp/test
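For example, with hypothetical NameNode RPC addresses filled in (the host names, port, and principal below are placeholders, not values from this article), and after obtaining a Kerberos ticket for a principal covered by the cross-realm trust:

# kinit with a principal that both realms trust (placeholder principal).
kinit user1@REALM1.EXAMPLE.COM

hadoop distcp \
  -Ddfs.nameservices=cluster1,cluster2 \
  -Ddfs.ha.namenodes.cluster2=nn1,nn2 \
  -Ddfs.namenode.rpc-address.cluster2.nn1=nn1.cluster2.example.com:8020 \
  -Ddfs.namenode.rpc-address.cluster2.nn2=nn2.cluster2.example.com:8020 \
  -Ddfs.client.failover.proxy.provider.cluster2=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
  hdfs://cluster1/tmp/test hdfs://cluster2/tmp/test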
11-22-2017
06:19 PM
Can you check whether /wrk/sdd/hadoop/hdfs/data/current/BP-2098469986-197.14.28.53-1497173237387 and /wrk/sde/hadoop/hdfs/data/current/BP-2098469986-197.14.28.53-1497173237387 are present on your new worker node?
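For example, something like this on the new worker node would confirm whether the block-pool directories exist (paths taken from the question above):

# Run on the new worker node; reports an error for any missing block-pool directory.
ls -ld /wrk/sdd/hadoop/hdfs/data/current/BP-2098469986-197.14.28.53-1497173237387 \
       /wrk/sde/hadoop/hdfs/data/current/BP-2098469986-197.14.28.53-1497173237387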
11-21-2017
09:55 PM
@Veerendra Nath Try SASL_PLAINTEXT. If you are using the open-source Kafka distribution rather than HDP Kafka, you need to use one of the values below. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
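Assuming the setting in question is the client-side security.protocol for a Kerberized cluster (the thread does not show the exact config), a minimal sketch of a client properties file and how to pass it to the console producer could look like this; the broker host, port, and service name are placeholders:

# Hypothetical client.properties for SASL_PLAINTEXT with Kerberos (GSSAPI).
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
EOF

# Pass the config to a client, e.g. the console producer (placeholder broker address).
kafka-console-producer.sh --broker-list broker1.example.com:6667 \
  --topic test --producer.config client.properties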
11-21-2017
08:17 PM
1 Kudo
If you use hdfs dfs -rm -r, it deletes the files from the HDFS cluster. It affects the HDFS cluster as a whole, not just a particular host.
11-21-2017
07:56 PM
@Michael Bronson hdfs dfs -rm -r deletes the path you provide recursively, which means the specified location is removed from the entire HDFS cluster. If the trash option is enabled, the deleted files are moved to the trash directory instead of being removed immediately. For more info, see the rm command usage (this link is for Hadoop 2.7.3): https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/FileSystemShell.html#rm
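For example (the path below is only a placeholder), with trash enabled the deleted data first lands under the user's .Trash directory, and -skipTrash removes it permanently:

# Recursive delete; with fs.trash.interval > 0 the files are moved to trash first.
hdfs dfs -rm -r /tmp/testdata

# The deleted path shows up under the current user's trash directory.
hdfs dfs -ls /user/$(whoami)/.Trash/Current/tmp

# Alternatively, bypass trash and delete permanently.
hdfs dfs -rm -r -skipTrash /tmp/testdata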
11-21-2017
01:17 AM
@Michael Bronson I assumed that by rm -rf you meant deleting the DataNode data directories. When you delete the DataNode directories with a normal OS delete, the block replicas stored there are lost, so the live replica count for those blocks drops by 1, and the blocks remain under-replicated if the replication factor is greater than 1.
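After such a deletion, the standard HDFS tools can show the effect; for example:

# Cluster-wide file system check; the summary includes under-replicated block counts.
hdfs fsck / | grep -i 'under.replicated'

# DataNode and capacity report for the cluster.
hdfs dfsadmin -report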
11-21-2017
12:53 AM
I meant the clusterId. Could you please provide the complete DataNode log?
11-20-2017
11:51 PM
I think what is happening here is that the clusterId of the DataNode and the NameNode do not match. Check the VERSION file on the NameNode and the DataNode: for the NameNode it is in <<dfs.namenode.name.dir>>/current/VERSION, and for the DataNode it is in <<dfs.datanode.data.dir>>/current/VERSION. The clusterIDs in both must be the same for the HDFS cluster to start.
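For example, a quick way to compare them (the directories are read from the local configuration, so nothing below is specific to your cluster; adjust if you have multiple configured directories):

# Resolve the configured directories (taking the first entry if comma-separated).
NN_DIR=$(hdfs getconf -confKey dfs.namenode.name.dir | cut -d, -f1 | sed 's|^file://||')
DN_DIR=$(hdfs getconf -confKey dfs.datanode.data.dir | cut -d, -f1 | sed 's|^file://||')

# Run the first command on the NameNode host and the second on the DataNode host;
# the clusterID values must match.
grep clusterID "$NN_DIR/current/VERSION"
grep clusterID "$DN_DIR/current/VERSION"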