Member since: 04-18-2016
Posts: 30
Kudos Received: 37
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
  | 3409 | 10-17-2017 06:56 PM
  | 5651 | 10-17-2017 05:09 PM
  | 1455 | 10-14-2016 06:14 PM
03-15-2024
10:31 AM
@Hadoop16 Welcome to the Cloudera Community! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post. Thanks.
08-10-2023
09:48 PM
Try this option:

[serviceaccount@edgenode ~]$ hdfs getconf -confKey dfs.nameservices
hadoopcdhnn
[serviceaccount@edgenode ~]$ hdfs getconf -confKey dfs.ha.namenodes.hadoopcdhnn
namenode5605,namenode5456
[serviceaccount@edgenode ~]$ hdfs haadmin -getServiceState namenode5605
active
[serviceaccount@edgenode ~]$ hdfs haadmin -getServiceState namenode5456
standby
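If you don't want to query each NameNode by hand, here is a minimal shell sketch (an illustration, not an official tool) that loops over the configured NameNodes and prints the active one. It assumes a single HA nameservice and that hdfs is on the PATH:

#!/bin/bash
# Sketch: find the active NameNode for the configured HA nameservice.
NS=$(hdfs getconf -confKey dfs.nameservices)
for nn in $(hdfs getconf -confKey "dfs.ha.namenodes.${NS}" | tr ',' ' '); do
  state=$(hdfs haadmin -getServiceState "$nn" 2>/dev/null)
  [ "$state" = "active" ] && echo "Active NameNode: $nn"
done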
10-25-2017
07:58 AM
Thanks a lot @Jay SenSharma for your help. Today when I logged in to my AWS machine, it did not start all services and gave me an error about port 8080 and an Ambari Server issue. I logged into the NameNode host, ran "service iptables stop", and both issues were solved 🙂 🙂 Hope it will work smoothly. Thanks again.
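For reference, a minimal sketch of the firewall commands involved (assuming a RHEL/CentOS 6-style init system, where iptables runs as a service):

# Stop the firewall for the current session
service iptables stop
# Optionally keep it from starting again on reboot
chkconfig iptables off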
02-14-2018
09:45 AM
You can use the snippet below, but you first need to run the stack advisor once through the normal flow on the Ambari server:

/var/lib/ambari-server/resources/scripts/stack_advisor.py recommend-configurations \
    /var/run/ambari-server/stack-recommendations/1/hosts.json \
    /var/run/ambari-server/stack-recommendations/1/services.json
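One hedged note on the paths above: the numbered directory (here "1") is, as far as I can tell, created per recommendation request from the normal Ambari flow, so check which indexes actually exist on your server before running the script:

# List previous recommendation runs; each numbered directory holds
# the hosts.json and services.json inputs from that run.
ls /var/run/ambari-server/stack-recommendations/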
03-21-2017
06:47 AM
1 Kudo
@nshetty We can check whether "security_type" is Kerberos using the following API call: http://AMBARI_HOST:8080/api/v1/clusters/CLUSTERNAME?fields=Clusters/security_type
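For example, a minimal curl sketch (the admin:admin credentials are placeholders; substitute your own Ambari user):

curl -s -u admin:admin \
  'http://AMBARI_HOST:8080/api/v1/clusters/CLUSTERNAME?fields=Clusters/security_type'
# A Kerberized cluster typically returns "security_type" : "KERBEROS" in the JSON body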
10-12-2016
06:26 PM
6 Kudos
Configuring HTTPFS

To configure HTTPFS, you can refer to the post below; it clearly explains the steps. HTTPFS can be installed on any host where the Hadoop clients are installed; we will call that host <HTTPFS_HOST>.

HTTPFS - Configure and Run with HDP

Create topology file

On the Knox Gateway host, under the /etc/knox/conf/topologies directory, create a new topology file (copy default.xml), say example.xml. After copying, the directory looks like this:

[root@test-3 topologies]# ls -l /etc/knox/conf/topologies
total 20
-rw-r--r--. 1 knox knox 89 Oct 4 11:25 README
-rw-r--r--. 1 knox knox 4422 Oct 4 11:25 admin.xml
-rw-r--r--. 1 knox knox 3026 Oct 12 10:22 default.xml
-rw-r--r--. 1 knox knox 3026 Oct 12 10:35 example.xml
In the example.xml topology file, change the host for the WEBHDFS role to <HTTPFS_HOST> and the port to 14000 (the default HTTPFS_HTTP_PORT is 14000):

<service>
<role>WEBHDFS</role>
<url>http://<HTTPFS_HOST>:14000/webhdfs</url>
</service>
Try it out!!

HTTPFS -> FileSystem

curl -i -X GET 'http://<HTTPFS_HOST>:14000/webhdfs/v1?user.name=hdfs&op=GETHOMEDIRECTORY'

Knox -> HTTPFS -> FileSystem

curl -u guest:guest-password -ik -X GET 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1?op=GETHOMEDIRECTORY'

Here example is the topology name. If the above two commands return an OK response, your setup is complete.

curl commands for Knox to access HDFS via HTTPFS

Below are curl commands for a few HDFS operations.

1. Make a directory

curl -u guest:guest-password -ik -X PUT 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase?op=MKDIRS'

2. Copy a file from the local filesystem into HDFS

curl -u guest:guest-password -ik -T input.txt -L -H "Content-Type: application/octet-stream" -X PUT 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file?op=CREATE&user.name=guest'

3. Print the contents of a file

curl -u guest:guest-password -ik 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file?op=OPEN'

4. List the directory statuses

curl -u guest:guest-password -ik 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase?op=LISTSTATUS'

5. Display the content summary

curl -u guest:guest-password -ik 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file?op=GETCONTENTSUMMARY'

6. Display the file checksum

curl -u guest:guest-password -ik 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file?op=GETFILECHECKSUM'

7. Display the file status

curl -u guest:guest-password -ik 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file?op=GETFILESTATUS'

8. Rename the file in HDFS

curl -u guest:guest-password -ik -X PUT 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file?op=RENAME&destination=/tmp/TestKnoxHDFS/testFullUseCase/data_file_new'

9. Change the replication factor of a file

curl -u guest:guest-password -ik -X PUT 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file_new?op=SETREPLICATION&replication=1'

10. Change the permissions of a file

curl -u guest:guest-password -ik -X PUT 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file_new?op=SETPERMISSION&permission=777'

11. Change the modification time of a file

curl -u guest:guest-password -ik -X PUT 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase/data_file_new?op=SETTIMES&modificationtime=1475865881000'

12. Delete a directory

curl -u guest:guest-password -ik -X DELETE 'https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1/tmp/TestKnoxHDFS/testFullUseCase?op=DELETE&recursive=true'
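Since every call shares the same gateway prefix, a small shell wrapper can cut down the repetition. This is just a sketch, assuming the same guest credentials and example topology used above:

# Sketch: factor out the common Knox gateway prefix
KNOX_BASE='https://<KNOX_GATEWAY_HOST>:8443/gateway/example/webhdfs/v1'
knox_hdfs() {
  # $1 = HTTP method, $2 = HDFS path, $3 = query string (op=... plus extras)
  curl -u guest:guest-password -ik -X "$1" "${KNOX_BASE}${2}?${3}"
}
# Usage, mirroring commands 1 and 4 above:
knox_hdfs PUT /tmp/TestKnoxHDFS/testFullUseCase 'op=MKDIRS'
knox_hdfs GET /tmp/TestKnoxHDFS/testFullUseCase 'op=LISTSTATUS'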
08-31-2016
06:24 AM
Hi @Narasimhan Kazhiyur, yes, the above change should work, and you should also format the new name directory (a sketch of that step follows).
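A hedged sketch of the formatting step, assuming a non-HA NameNode that has been stopped first. Formatting erases existing metadata, so only run it against a genuinely new name directory:

# Run on the NameNode host as the hdfs user
su - hdfs -c 'hdfs namenode -format'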
08-11-2016
01:25 PM
2 Kudos
Got the logs from @nshetty and checked: somehow the hbase:acl table was never created, because the hbase:acl znode was present while the table itself was not. After deleting the hbase:acl znode and restarting the service, it's working fine.
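For anyone hitting the same issue, a sketch of the znode cleanup. The parent znode path here is an assumption; check zookeeper.znode.parent in hbase-site.xml, which on HDP is typically /hbase-unsecure or /hbase-secure:

# Remove the stale acl znode, then restart HBase
zookeeper-client -server <ZK_HOST>:2181 rmr /hbase-unsecure/acl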
09-19-2017
07:19 AM
4 Kudos
From HDP 2.6 onwards, Hortonworks Data Platform is supported on IBM Power Systems. You can refer to the documentation below for installing/upgrading HDP on IBM Power:

https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-installation-ppc/content/ch_Getting_Ready.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-upgrade-ppc/content/ambari_upgrade_guide-ppc.html