Member since
01-25-2016
345
Posts
86
Kudos Received
25
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4996 | 10-20-2017 06:39 PM
 | 3524 | 03-30-2017 06:03 AM
 | 2584 | 02-16-2017 04:55 PM
 | 16094 | 02-01-2017 04:38 PM
 | 1141 | 01-24-2017 08:36 PM
06-02-2022
06:59 PM
Have you tried moving out (or deleting) the folder for that partition from HDFS, then running: msck repair table <tablename>
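A minimal sketch of those two steps, assuming a hypothetical warehouse path, partition, and table name:

hdfs dfs -mv /apps/hive/warehouse/mydb.db/mytable/dt=2022-01-01 /tmp/   # move the partition folder out of HDFS (or use -rm -r to delete it)
hive -e "msck repair table mytable;"   # re-sync the metastore partitions with what is actually on HDFS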
10-11-2021
08:28 AM
1 Kudo
To resolve the issue, import the Ambari certificate into the Ambari truststore as follows:
STEP 1:
Get the certificate from the Ambari server:
echo | openssl s_client -showcerts -connect <AMBARI_HOST>:<AMBARI_HTTPS_PORT> 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/ambari_certificate.crt
STEP 2:
Get the path of the Ambari truststore and the truststore password from the Ambari properties:
cat /etc/ambari-server/conf/ambari.properties | grep truststore
As per your ambari.properties, the path and password are:
ssl.trustStore.password=<value from your ambari.properties file>
ssl.trustStore.path=/etc/ambari-server/conf/ambari-server-truststore
STEP 3:
Import the certificate into the Ambari truststore (use the truststore path from step 2):
keytool -importcert -file /tmp/ambari_certificate.crt -keystore <truststore-path>
STEP 4:
ambari-server restart
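To confirm the import worked, you can list the truststore contents; keytool will prompt for the truststore password from step 2:

keytool -list -keystore /etc/ambari-server/conf/ambari-server-truststore   # the newly imported certificate should show up in this list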
01-26-2020
11:18 AM
This worked! I already made these changes prior to running the last command.

Check the active HDP version:
hdp-select status hadoop-client

Set a couple of parameters:
export HADOOP_OPTS="-Dhdp.version=2.6.1.0-129"
export HADOOP_CONF_DIR=/etc/hadoop/conf

Source in the environment:
source ~/get_env.sh

Added the last two lines to $SPARK_HOME/conf/spark-defaults.conf:
spark.driver.extraJavaOptions -Dhdp.version=2.6.1.0-129
spark.yarn.am.extraJavaOptions -Dhdp.version=2.6.1.0-129

Added the Hadoop version under Ambari / Yarn / Advanced / Custom:
hdp.version=2.6.1.0-129

Ensure this runs okay:
yarn jar hadoop-mapreduce-examples.jar pi 5 5

Run the Spark Pi example under YARN:
cd /home/spark/spark-2.4.4-bin-hadoop2.7
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --executor-memory 2G --num-executors 5 --executor-cores 2 --conf spark.authenticate.enableSaslEncryption=true --conf spark.network.sasl.serverAlwaysEncrypt=true --conf spark.authenticate=true examples/jars/spark-examples_2.11-2.4.4.jar 100
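If you want to double-check the result, one way (the application id below is whatever spark-submit printed for your run) is to pull the driver output from the YARN logs:

yarn logs -applicationId <application_id> | grep "Pi is roughly"   # SparkPi prints this line from the driver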
01-07-2020
09:16 AM
How did you resolve the issue?
01-02-2020
07:08 AM
Hi,
I need to uninstall Ranger/Ranger KMS 1.2.0. How can I do it?
Thanks,
Neelagandan K
09-26-2019
12:03 AM
I also faced the same issue. Found the issue was with mysql-connector-java.jar. I followed the steps below:
1. Check whether you are able to connect remotely to the MySQL database.
2. If you are able to connect, then the problem is the mysql-connector-java.jar in Ambari.
3. Download the correct version of the MySQL connector jar from https://dev.mysql.com/downloads/connector/j/
4. Stop the Ambari server.
5. Remove the old MySQL connector jar from Ambari.
6. Set it up again using:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java-8.0.16.jar
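For step 1, a quick remote connectivity check from the Ambari host (hostname, user, and database name here are hypothetical):

mysql -h <mysql-host> -u ambari -p ambari   # you should land at a mysql> prompt if remote access is allowed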
06-10-2018
06:11 PM
@sunile.manjee Thank you:)
11-07-2017
12:38 AM
Thanks. Glad to know that it helped.
10-25-2017
03:41 AM
@PJ Yeah, it might be that case. If you have a large number of records, converting the ORC data to CSV format takes a lot of time. If you compare the two approaches, running the query with INSERT OVERWRITE DIRECTORY performs much faster with no issues, lets us keep whatever delimiter we need, and we don't need to worry about the size of the data.
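A minimal sketch of that approach, with a hypothetical output directory, delimiter, and table name:

hive -e "INSERT OVERWRITE DIRECTORY '/tmp/mytable_csv' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' SELECT * FROM mytable;"   # writes delimited files directly to HDFS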
10-21-2017
05:48 AM
ORC is the best option within Hive, and Parquet is the best option across the Hadoop ecosystem.
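For reference, a minimal sketch of declaring each format in Hive DDL (table and column names are hypothetical):

hive -e "CREATE TABLE events_orc (id INT, name STRING) STORED AS ORC;"
hive -e "CREATE TABLE events_parquet (id INT, name STRING) STORED AS PARQUET;"   # Parquet is also readable by Spark, Impala, etc.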