Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2234 | 12-06-2018 12:25 PM
 | 2295 | 11-27-2018 06:00 PM
 | 1782 | 11-22-2018 03:42 PM
 | 2846 | 11-20-2018 02:00 PM
 | 5160 | 11-19-2018 03:24 PM
10-15-2017
12:34 PM
1 Kudo
@Yair Ogen, You can find the JDBC jar under the /usr/hdp/current/hive-client/jdbc folder. Thanks, Aditya
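For example, you can list the contents of that folder to confirm the jar is present; the exact jar file name depends on your HDP version:
ls -l /usr/hdp/current/hive-client/jdbc/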
10-13-2017
12:29 PM
@Sen Ke, Can you please attach the gateway.log (/var/log/knox/gateway.log)?
10-13-2017
12:25 PM
@Rupesh Agarwal, There are several ways to achieve this. Check this HCC link: https://community.hortonworks.com/questions/46500/spark-cant-connect-to-hbase-using-kerberos-in-clus.html
10-13-2017
11:53 AM
Does the user have permission to read the file? Check the permissions on the file or run the command using sudo:
ls -l /etc/security/keytabs/hbase.headless.keytab
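If the permissions turn out to be too restrictive, one option is to make the keytab readable by the service user. A minimal sketch, assuming the hbase user and hadoop group are the intended owners (adjust the owner, group and mode to your environment):
chown hbase:hadoop /etc/security/keytabs/hbase.headless.keytab
chmod 440 /etc/security/keytabs/hbase.headless.keytab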
10-13-2017
11:42 AM
@Rupesh Agarwal, <princ> is the user principal. Here is an example if you don't know the value of <princ>:
[root@xxx ~]# klist -kte /etc/security/keytabs/hbase.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hbase.headless.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 09/10/17 14:06:14 hbase@EXAMPLE.COM (aes128-cts-hmac-sha1-96)
1 09/10/17 14:06:14 hbase@EXAMPLE.COM (arcfour-hmac)
1 09/10/17 14:06:14 hbase@EXAMPLE.COM (des-cbc-md5)
1 09/10/17 14:06:14 hbase@EXAMPLE.COM (des3-cbc-sha1)
1 09/10/17 14:06:14 hbase@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
Run klist and observe the output; here hbase@EXAMPLE.COM is the principal. Now you can run kinit as below:
kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase@EXAMPLE.COM
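After running kinit, you can confirm that a ticket was actually obtained by running klist with no arguments; it should show a valid ticket for hbase@EXAMPLE.COM in the credential cache:
klist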
10-13-2017
11:28 AM
1 Kudo
@Rupesh Agarwal, Looks like your cluster is kerberized. You have to run kinit before running the hbase shell:
kinit -kt /etc/security/keytabs/hbase.headless.keytab <princ>
Also, run your hbase shell in non-interactive mode:
echo "list" | hbase shell -n
Thanks, Aditya
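Putting the two steps together, a minimal sketch assuming the hbase headless principal shown in the klist output from the earlier reply (substitute your own principal):
kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase@EXAMPLE.COM
echo "list" | hbase shell -n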
10-13-2017
10:38 AM
3 Kudos
1) Download the Ambari Integration module and HDFS transparency connector. You can get it here.
2) Stop all the services from Ambari. Ambari -> Actions -> Stop All
3) After successfully stopping all the services, go to the Spectrum Scale service and unintegrate transparency. SpectrumScale -> Service Actions -> Unintegrate Transparency. This step replaces the HDFS transparency modules with native HDFS and adds back the Secondary NameNode.
4) Delete the Spectrum Scale service. Type "delete" to confirm deletion. SpectrumScale -> Service Actions -> Delete Service
5) Extract the tar file downloaded in Step 1 on your Ambari Server node and run the mPack uninstaller: ./SpectrumScaleMPackUninstaller.py The uninstaller prompts for a few values such as the Ambari IP, username, etc. Enter them.
6) The above steps delete the service from Ambari. Restart the Ambari server and start all services. Ambari -> Actions -> Start All
If you were using an existing ESS cluster, the HDP nodes are still part of the ESS cluster, so remove them from the ESS cluster as well. To remove the nodes, perform the steps below.
1) On any of the ESS cluster nodes, run mmlscluster:
[root@xxx-ems1 ~]# mmlscluster
GPFS cluster information
========================
GPFS cluster name: xxx.yyy.net
GPFS cluster id: 123123123123132123123
GPFS UID domain: xxx.yyy.net
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR
Node Daemon node name IP address Admin node name Designation
------------------------------------------------------------------------------------------------------------------------------------------
1 xxx-yyy1-hs.gpfs.net 10.10.99.11 xxx-yyy1-hs.gpfs.net quorum-manager-perfmon
2 xxx-yyy2-hs.gpfs.net 10.10.99.12 xxx-yyy2-hs.gpfs.net quorum-manager-perfmon
3 xxx-ems1-hs.gpfs.net 10.10.99.13 xxx-ems1-hs.gpfs.net quorum-perfmon
62 hdp-gpds-node-6.openstacklocal 10.101.76.62 hdp-gpds-node-6.openstacklocal
63 hdp-gpds-node-5.openstacklocal 10.101.76.40 hdp-gpds-node-5.openstacklocal
64 hdp-gpds-node-4.openstacklocal 10.101.76.60 hdp-gpds-node-4.openstacklocal
This is a sample response for the mmlscluster command. In the above output, Nodes 1, 2 and 3 are the ESS nodes, and Nodes 62, 63 and 64 are the HDP cluster nodes which were enrolled into the ESS cluster. We will go ahead and delete the HDP nodes.
2) Delete a node by running the command below:
mmdelnode -N hdp-gpds-node-6.openstacklocal
Run this command for every HDP node. After deleting all the nodes, you can confirm that they are really gone by running "mmlscluster" again. Hope this helps 🙂
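If you have several HDP nodes to remove, a short loop saves some typing. A hedged sketch using the three node names from the sample output above; substitute your own node names:
for node in hdp-gpds-node-4 hdp-gpds-node-5 hdp-gpds-node-6; do
  mmdelnode -N ${node}.openstacklocal
done
mmlscluster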
10-13-2017
06:59 AM
@Neha G, Can you try the same command using kadmin instead of kadmin.local? Also, can you please attach your /etc/krb5.conf file? Thanks, Aditya
10-13-2017
06:15 AM
@Ashikin, Try setting the variable below instead of PYSPARK_DRIVER_PYTHON:
export PYSPARK_PYTHON=<anaconda python path>
Example: export PYSPARK_PYTHON=/home/ambari/anaconda3/bin/python
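To confirm that PySpark is actually picking up the Anaconda interpreter, one quick check is to print the executable path from a PySpark session. A hedged sketch reusing the example path above (adjust it to your Anaconda install):
export PYSPARK_PYTHON=/home/ambari/anaconda3/bin/python
echo "import sys; print(sys.executable)" | pyspark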