Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2024 | 04-27-2020 03:48 AM |
|  | 4009 | 04-26-2020 06:18 PM |
|  | 3231 | 04-26-2020 06:05 PM |
|  | 2593 | 04-13-2020 08:53 PM |
|  | 3847 | 03-31-2020 02:10 AM |
02-23-2020 04:25 PM

@Prabhu_Muppala Can you please try specifying the "--driver" param in your Sqoop command as follows: --driver oracle.jdbc.driver.OracleDriver

Also, can you please verify that the DB credentials are entered correctly and that the DB is accessible on localhost port 1521?

# netstat -tnlpa | grep 1521

Also, does the user "hr" have enough privileges to list tables in the Oracle DB?
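For illustration, a minimal sketch of what the full Sqoop invocation might look like with the driver specified; the JDBC URL, the SID "ORCL", and the password prompt are placeholders, not values from the original question:

# sqoop list-tables \
    --connect jdbc:oracle:thin:@localhost:1521:ORCL \
    --driver oracle.jdbc.driver.OracleDriver \
    --username hr -P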
02-20-2020 08:29 PM
2 Kudos

@mike_bronson7 We see the error "Table Namespace Manager not fully initialized":

2020-02-21 03:33:49,284 INFO org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception, tries=15, retries=35, started=629725 ms ago, cancelled=false, msg=java.io.IOException: Table Namespace Manager not fully initialized, try again later
at org.apache.hadoop.hbase.master.HMaster.checkNamespaceManagerReady(HMaster.java:2693)
at org.apache.hadoop.hbase.master.HMaster.ensureNamespaceExists(HMaster.java:2915)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1686)

This indicates the AMS HBase master might have an issue. Can you please let us know when "distributed mode" AMS was last running fine? Or has it failed to start ever since you enabled AMS distributed mode? Is this a Kerberos-enabled environment?

Can you please check the permissions on the HDFS directories (to verify that the ownership of these HDFS dirs is set up correctly as "ams:hdfs")?

# su - hdfs -c 'hdfs dfs -ls /user/ams'
# su - hdfs -c 'hdfs dfs -ls /user/ams/hbase'

If you still face the issue, then you can try changing the ZooKeeper znode for AMS and restarting AMS freshly. To change the "ZooKeeper Znode Parent" property of AMS, go to Ambari UI --> Ambari Metrics --> Configs --> "Advanced ams-hbase-site" --> "ZooKeeper Znode Parent", change the value of the znode to something slightly different (for example from "/ams-hbase-unsecure" to "/ams-hbase-unsecure1"), then restart AMS and let us know if you still see any error.
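If you want to see what is registered under the old and new znodes, a quick sketch with the ZooKeeper CLI; the client path is the typical HDP location and <ZK_HOST> is a placeholder for one of your ZooKeeper quorum hosts:

# /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <ZK_HOST>:2181
# then, inside the CLI:
ls /ams-hbase-unsecure
ls /ams-hbase-unsecure1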
02-20-2020 08:08 PM

@Ravirakunapu Can you check the "/etc/krb5.conf" file present on the host that is showing the error "kdc host is not reachable on port 88"?

Checks from the client machine: First verify which hostname is listed for the KDC in that file. Suppose the KDC hostname is "kdc.example.com"; then check whether you can reach that hostname and port from the problematic machine:

# telnet kdc.example.com 88
(OR)
# nc -v kdc.example.com 88

Also please verify that the "/etc/hosts" file maps kdc.example.com to the correct IP address:

# cat /etc/hosts

Checks on the KDC host: Verify on the KDC side that port 88 is listening and that iptables/firewalld is disabled:

# netstat -tnlpa | grep $PID_Of_KDC
# service iptables status
# systemctl status firewalld
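For reference, the relevant section of "/etc/krb5.conf" usually looks like the following sketch; the realm name and KDC hostname are placeholders, not values from your environment:

[realms]
  EXAMPLE.COM = {
    kdc = kdc.example.com:88
    admin_server = kdc.example.com
  }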
02-20-2020 07:28 PM
2 Kudos

@mike_bronson7 As the Metrics Service operation mode is already set to "distributed", Ambari will make AMS aware that it needs to look for the hbase.rootdir on HDFS. The following should be fine:

hbase.rootdir=/user/ams/hbase
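To spell out what that resolves to: a path without a scheme is resolved against fs.defaultFS, so assuming the NameService name "hdfsha" discussed elsewhere in this thread, the effective location would be:

# effective rootdir (assuming dfs.nameservices=hdfsha)
hdfs://hdfsha/user/ams/hbase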
02-20-2020 02:47 PM
2 Kudos

@mike_bronson7 This does not look right. Ideally, with an HDFS HA NameService we do not use a port number, because "hdfsha" is not a hostname but just a logical name:

hbase.rootdir=hdfs://hdfsha:8020/user/ams/hbase

If your NameService name is "hdfsha" (defined in "Custom core-site" as "dfs.nameservices=hdfsha"), then ideally you should use the following in your AMS configuration in "Advanced ams-hbase-site":

hbase.rootdir=/user/ams/hbase

As your AMS mode is "distributed", AMS will automatically assume that the data is on HDFS and will figure out the actual NameService name dynamically, so we do not even need to specify "hdfs://hdfsha" there.

After fixing the "hbase.rootdir" in the AMS configs, please kill and restart the AMS processes. Then check the AMS logs, especially the following, and please share the full stack trace if you notice any error:

/var/log/ambari-metrics-collector/hbase-ams-master-*.log
/var/log/ambari-metrics-collector/hbase-ams-region-*.log
/var/log/ambari-metrics-collector/ambari-metrics-collector.log
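A quick way to scan those logs for failures before pulling the full stack trace (a sketch over the same log paths listed above):

# grep -iE 'error|exception|fatal' \
    /var/log/ambari-metrics-collector/hbase-ams-master-*.log \
    /var/log/ambari-metrics-collector/hbase-ams-region-*.log \
    /var/log/ambari-metrics-collector/ambari-metrics-collector.log | tail -50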
02-18-2020 10:19 PM
1 Kudo

@mark-gg The empty Base URL issue mentioned in the other thread you referred to ("AMBARI-25069") is already resolved in Ambari 2.7.4 and later, so it would be a good idea to first upgrade Ambari to 2.7.4 (or Ambari 2.7.5, which is the latest) and then try to register the desired version.

Ambari Upgrade Guide: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.4.0/bk_ambari-upgrade-major/content/upgrade_ambari.html
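At a high level, the server-side upgrade boils down to the following commands (a sketch assuming a yum-based OS and that the Ambari 2.7.x repo file is already in place; follow the guide above for the full procedure, including the agent upgrade on every host):

# ambari-server stop
# yum clean all
# yum upgrade ambari-server -y
# ambari-server upgrade
# ambari-server start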
02-17-2020 04:02 AM

@sharathkumar13 You can use a Cloudera Manager API call like the following to start/stop the Kafka service (or any desired service).

In order to stop the "kafka" service:

# curl -iLv -u admin:admin -X POST --header 'Accept: application/json' 'http://dc-1.example.com:7180/api/v40/clusters/TestCluster/services/kafka/commands/stop'

In order to start the "kafka" service:

# curl -iLv -u admin:admin -X POST --header 'Accept: application/json' 'http://dc-1.example.com:7180/api/v40/clusters/TestCluster/services/kafka/commands/start'

Please also take a look at the Cloudera Manager Swagger APIs. Cloudera Manager (CM) 6.0 introduces a new Python API client, cm_client, based on Swagger. This new API client supports all CM API versions. https://cloudera.github.io/cm_api/docs/python-client-swagger/

Please replace the CM credentials, cluster name, and CM host/port in the above API calls with your own.
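To confirm the command took effect, you can poll the service state afterwards (a sketch reusing the same hypothetical host, credentials, and cluster name as above):

# curl -s -u admin:admin 'http://dc-1.example.com:7180/api/v40/clusters/TestCluster/services/kafka' | grep '"serviceState"'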
02-17-2020 03:26 AM
1 Kudo

@hicha Which product and version (HDP/CDH, etc.) are you using, and from where have you downloaded "incubator-livy"?
02-16-2020 10:32 PM

@Kureikana Can you try this? Suppose you start your Infra Solr process as the "infra-solr" user; then try the following commands.

Non-Kerberos env:

# su - infra-solr
# source /etc/ambari-infra-solr/conf/infra-solr-env.sh
# /usr/lib/ambari-infra-solr/bin/solr start -cloud -noprompt -s /var/lib/ambari-infra-solr/data 2>&1

Kerberos env:

# su - infra-solr
# kinit -kt /etc/security/keytabs/ambari-infra-solr.service.keytab <AMBARI_INFRA_PRINCIPAL>
# source /etc/ambari-infra-solr/conf/infra-solr-env.sh
# /usr/lib/ambari-infra-solr/bin/solr start -cloud -noprompt -s /var/lib/ambari-infra-solr/data -Dsolr.kerberos.name.rules='DEFAULT' 2>&1
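Once it starts, you can verify the process and its HTTP endpoint (a sketch; 8886 is the usual Ambari Infra Solr port, so adjust if your setup differs):

# /usr/lib/ambari-infra-solr/bin/solr status
# curl -s 'http://localhost:8886/solr/admin/info/system?wt=json'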
02-15-2020 03:38 PM
1 Kudo

@hicha Not sure which product you are using. However, based on the command it looks like you are running the "livy-server" script from outside of the "bin" directory, which may be causing the issue, because the logic written inside the "livy-server" script is as follows:

export LIVY_HOME=$(cd $(dirname $0)/.. && pwd)
...
start_livy_server() {
  LIBDIR="$LIVY_HOME/jars"
  if [ ! -d "$LIBDIR" ]; then
    LIBDIR="$LIVY_HOME/server/target/jars"
  fi
  if [ ! -d "$LIBDIR" ]; then
    echo "Could not find Livy jars directory." 1>&2
    exit 1
  fi
  ...

So ideally you should first change directory to "bin", where the 'livy-server' script is present, and then run it as follows, OR use the full path to the 'livy-server' script in the terminal:

# cd /PATH/TO/LIVY_DIR/bin
# ./livy-server
(OR)
# /PATH/TO/LIVY_DIR/bin/livy-server

If you still find it difficult to run, then put an "echo" statement as follows inside the 'livy-server' script to see what path it resolves for 'LIVY_HOME', and check whether the "$LIVY_HOME/jars" directory exists with the correct permissions:

export LIVY_HOME=$(cd $(dirname $0)/.. && pwd)
echo "LIVY_HOME calculated as = $LIVY_HOME"