Member since
04-30-2019
53
Posts
5
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 4516 | 01-08-2020 07:09 AM |
05-30-2021
01:17 AM
[hdfs@c****-node* hive-testbench-hive14]$ ./tpcds-build.sh
Building TPC-DS Data Generator
make: Nothing to be done for `all'.
TPC-DS Data Generator built, you can now use tpcds-setup.sh to generate data.

[hdfs@c4237-node2 hive-testbench-hive14]$ ./tpcds-setup.sh 2
TPC-DS text data generation complete. Loading text data into external tables.
make: *** [time_dim] Error 1
make: *** Waiting for unfinished jobs....
make: *** [date_dim] Error 1
Data loaded into database tpcds_bin_partitioned_orc_2.

INFO : OK
+---------------------+
|    database_name    |
+---------------------+
| default             |
| information_schema  |
| sys                 |
+---------------------+
3 rows selected (1.955 seconds)
0: jdbc:hive2://c4237-node2.coelab.cloudera.c>

The tpcds_bin_partitioned_orc_2 database is not created, so I have some issues testing the TPC-DS queries. These are the steps I followed:

sudo -u hdfs -s
cd /home/hdfs
wget https://github.com/hortonworks/hive-testbench/archive/hive14.zip
unzip hive14.zip
export JAVA_HOME=/usr/jdk64/jdk1.8.0_77
export PATH=$JAVA_HOME/bin:$PATH
./tpcds-build.sh
beeline -i testbench.settings -u "jdbc:hive2://c****-node9.coe***.*****.com:10500/tpcds_bin_partitioned_orc_2"

I'm not able to test the TPC-DS queries; any help would be appreciated.
08-24-2020
07:07 AM
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/ambari-release-notes/content/known_issues.html
04-23-2020
03:51 AM
@TR7_BRYLE Did you restart the agents after making the changes? Can you attach the error stack trace?
04-20-2020
03:21 AM
1 Kudo
@TR7_BRYLE This issue occurs because Java (from version 1.8.0-171 onward) restricts TLSv1, which the Ambari agents use. By default, ambari-agent connects over TLSv1 unless force_https_protocol=PROTOCOL_TLSv1_2 is specified in ambari-agent.ini, so the agent is not able to connect and communicate with the Ambari server.

To resolve this issue, add the following property under the [security] section of the /etc/ambari-agent/conf/ambari-agent.ini file and restart the ambari-agent:

force_https_protocol=PROTOCOL_TLSv1_2

See also: https://community.cloudera.com/t5/Support-Questions/ambari-agents-cannot-reach-ambari-server-after-changing/td-p/193251

Thanks,
Prakash
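As a sketch of what the edit looks like, here it is applied to a throwaway copy under /tmp so nothing on a real host is touched (the section contents below are invented; the real file is /etc/ambari-agent/conf/ambari-agent.ini):

```shell
# Create a sample ambari-agent.ini (contents are illustrative only)
cat > /tmp/ambari-agent.ini <<'EOF'
[server]
hostname=ambari-server.example.com

[security]
keysdir=/var/lib/ambari-agent/keys
EOF

# Insert the property directly under the [security] section header
sed -i '/^\[security\]$/a force_https_protocol=PROTOCOL_TLSv1_2' /tmp/ambari-agent.ini

# Confirm the property landed in the file
grep force_https_protocol /tmp/ambari-agent.ini
```

After making the equivalent change to the real file on each agent host, run `ambari-agent restart` there.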
02-21-2020
01:06 AM
@Folks You can follow this approach for any other kerberos keytab related issues with LLAP.
02-20-2020
02:09 AM
The LLAP service may sometimes fail to start up after an HDP upgrade with the following error:
Error: Caused by: org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: hive/prakash.cloudera.com@CLOUDERA.COM from keytab /grid/0/yarn/local/usercache/hive/appcache/application_**********/container_***********_01_000001/keytabs/llap0.keytab javax.security.auth.login.LoginException: Pre-authentication information was invalid (24)
To resolve this issue, run the following command and start LLAP:
yarn app -destroy llap0
If the issue persists, then do the following:
Check that the hive.service.keytab file is available on all the hosts, run kinit, and test the connectivity.
If the file is missing, recreate the keytabs on the missing hosts either through Ambari or manually (see this article for manual keytab creation).
Verify the file in HDFS and copy it to the local filesystem, then perform kinit with the copied keytab:
hdfs:///user/hive/.yarn/keytabs/hive/hive.service.keytab
hdfs dfs -copyToLocal /user/hive/.yarn/keytabs/hive/hive.service.keytab /tmp/hive.keytab
kinit -kt /tmp/hive.keytab hive/prakash.cloudera.com@CLOUDERA.COM
kinit: Preauthentication failed while getting initial credentials
If you get this error, move the keytab file out of the HDFS location:
hdfs dfs -cp /user/hive/.yarn/keytabs/hive/hive.service.keytab /tmp/
hdfs dfs -rm /user/hive/.yarn/keytabs/hive/hive.service.keytab
Start the LLAP component.
01-08-2020
07:09 AM
@pratik_ I think some configuration parameters have to be set on the Hive side.

https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/securing-hive/content/hive_secure_hiveserver_using_ldap.html
https://community.cloudera.com/t5/Community-Articles/Hive-and-LDAP-integration/ta-p/245449

Please refer to these articles and make sure all the required parameters are configured properly.
12-24-2019
02:53 AM
@ypc812164921 You need a support subscription to create a technical case with Cloudera; please check with your account team regarding the subscription details. Once you have a valid subscription and are a member of a support team, you can create cases.
07-05-2019
12:33 PM
1 Kudo
@Michael Bronson Use this query against the Ambari DB and verify from the XML data whether all the component versions are updated correctly:

select version_xml,repo_version_id from repo_version where repo_version_id in (select distinct(desired_repo_version_id) from servicedesiredstate);

If you find any inconsistency in the versions, you need to update them to the correct version.
02-05-2019
12:21 PM
1 Kudo
STOP command:

curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT http://172.26.78.29:8080/api/v1/clusters/Mycluster/hosts/prakash-ambariagent-node3/host_components/FLUME_HANDLER -d '{"RequestInfo":{"context":"Stop Flume","operation_level":{"level":"HOST_COMPONENT","cluster_name":"Mycluster","host_name":"prakash-ambariagent-node3","service_name":"FLUME"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}'

START command:

curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT http://172.26.78.29:8080/api/v1/clusters/Mycluster/hosts/prakash-ambariagent-node3/host_components/FLUME_HANDLER -d '{"RequestInfo":{"context":"Start Flume","operation_level":{"level":"HOST_COMPONENT","cluster_name":"Mycluster","host_name":"prakash-ambariagent-node3","service_name":"FLUME"}},"Body":{"HostRoles":{"state":"STARTED"}}}'

RESTART command:

curl --insecure -v -u admin:admin -H "X-Requested-By:ambari" -i -X POST http://prakash-ambariserver-node1:8080/api/v1/clusters/Mycluster/requests -d '{"RequestInfo":{"command":"RESTART","context":"Restart all components for Flume","operation_level":{"level":"SERVICE","cluster_name":"Mycluster","service_name":"FLUME"}},"Requests/resource_filters":[{"service_name":"FLUME","component_name":"FLUME_HANDLER","hosts":"prakash-ambariagent-node3"}]}'

Note: Replace the Ambari server host and cluster name with your own cluster details before executing the curl commands.
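Since only the host, cluster, and component names differ between these calls, it can help to parameterize them. A minimal sketch, using the example host and cluster names from the commands above as placeholder values (nothing is sent to Ambari until you run the final curl yourself):

```shell
# Placeholder values taken from the example commands; replace with
# your own cluster details before use.
AMBARI_HOST="prakash-ambariserver-node1"
CLUSTER="Mycluster"
AGENT_HOST="prakash-ambariagent-node3"

# Build the RESTART request body once, substituting the variables
PAYLOAD=$(cat <<EOF
{"RequestInfo":{"command":"RESTART","context":"Restart all components for Flume","operation_level":{"level":"SERVICE","cluster_name":"${CLUSTER}","service_name":"FLUME"}},"Requests/resource_filters":[{"service_name":"FLUME","component_name":"FLUME_HANDLER","hosts":"${AGENT_HOST}"}]}
EOF
)

# Show the body that would be submitted
echo "$PAYLOAD"

# On a live cluster you would then submit it:
# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST \
#   "http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/requests" -d "$PAYLOAD"
```

This keeps the JSON in one place, so switching the target host or cluster means changing three variables instead of editing each command by hand.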