Member since: 03-14-2016 · Posts: 4721 · Kudos Received: 1111 · Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2825 | 04-27-2020 03:48 AM |
| | 5492 | 04-26-2020 06:18 PM |
| | 4674 | 04-26-2020 06:05 PM |
| | 3707 | 04-13-2020 08:53 PM |
| | 5609 | 03-31-2020 02:10 AM |
09-04-2019 05:30 PM
@tyadav Every service provides a "metainfo.xml" file, which Ambari uses to determine the `<dependencies>` between individual services/components. For example, for the HIVE service you can find a "metainfo.xml" file in the following locations:

```
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/metainfo.xml
/var/lib/ambari-server/resources/stacks/HDP/3.0/services/HIVE/metainfo.xml
```

The `<dependencies>` section inside this file describes the component dependencies. For example, the "HIVE_SERVER" dependencies look like this:

```xml
<component>
  <name>HIVE_SERVER</name>
  <displayName>HiveServer2</displayName>
  <category>MASTER</category>
  <cardinality>1</cardinality>
  <versionAdvertised>true</versionAdvertised>
  <reassignAllowed>true</reassignAllowed>
  <clientsToUpdateConfigs></clientsToUpdateConfigs>
  <dependencies>
    <dependency>
      <name>ZOOKEEPER/ZOOKEEPER_SERVER</name>
      <scope>cluster</scope>
      <auto-deploy>
        <enabled>true</enabled>
        <co-locate>HIVE/HIVE_SERVER</co-locate>
      </auto-deploy>
    </dependency>
    <dependency>
      <name>YARN/YARN_CLIENT</name>
      <scope>host</scope>
      <auto-deploy>
        <enabled>true</enabled>
      </auto-deploy>
    </dependency>
    <dependency>
      <name>MAPREDUCE2/MAPREDUCE2_CLIENT</name>
      <scope>host</scope>
      <auto-deploy>
        <enabled>true</enabled>
      </auto-deploy>
    </dependency>
  </dependencies>
</component>
```

Similarly, you can also find the dependencies using an Ambari API call like the following:

```shell
curl -sk -u admin:admin -H "X-Requested-By: ambari" -X GET \
  http://$AMBARI_HOST:8080/api/v1/stacks/HDP/versions/3.1/services/HIVE | grep -A5 'required_services'
```

which returns:

```
"required_services" : [
  "ZOOKEEPER",
  "HDFS",
  "YARN",
  "TEZ"
],
```

If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs up button.
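As a rough sketch, the same dependency information can also be pulled out of a metainfo.xml programmatically. The snippet below parses a minimal, hypothetical fragment mirroring the HIVE_SERVER block above (it does not read from a live Ambari install):

```python
import xml.etree.ElementTree as ET

# Minimal metainfo.xml fragment mirroring the HIVE_SERVER block above
# (hypothetical sample data, not taken from a live Ambari server).
METAINFO = """
<component>
  <name>HIVE_SERVER</name>
  <dependencies>
    <dependency>
      <name>ZOOKEEPER/ZOOKEEPER_SERVER</name>
      <scope>cluster</scope>
    </dependency>
    <dependency>
      <name>YARN/YARN_CLIENT</name>
      <scope>host</scope>
    </dependency>
    <dependency>
      <name>MAPREDUCE2/MAPREDUCE2_CLIENT</name>
      <scope>host</scope>
    </dependency>
  </dependencies>
</component>
"""

def list_dependencies(xml_text):
    """Return (name, scope) pairs for every <dependency> in a component."""
    root = ET.fromstring(xml_text)
    return [(d.findtext("name"), d.findtext("scope"))
            for d in root.iter("dependency")]

print(list_dependencies(METAINFO))
```

On a real cluster you would read the file from `/var/lib/ambari-server/resources/...` instead of the inline sample.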
09-04-2019 02:47 AM
@tyadav Some of the Hive tables created in HDFS might not be accessible if HDFS is down. The Hive service has a dependency on HDFS/YARN/MapReduce. So are you able to start the HDFS/YARN/MapReduce services first? Also, if the Hive Metastore service is not coming up, can you please share the Hive Metastore log, including the operational log from the Ambari UI, where we can see the failure cause?
09-04-2019 01:27 AM
@sonalidive786 Good to know that it resolved your issue. If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs up button.
09-03-2019 01:05 AM
@yukti As you already have the 3.1.0.0-78 version of the JARs, for the missing class "org.apache.hadoop.hive.kafka.KafkaSerDe" can you try using the following JAR (instead of "hive-serde-0.10.0.jar")? In my HDP 3 installation I can see it on the Hive host at "/usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar":

```shell
/usr/jdk64/jdk1.8.0_112/bin/javap -cp /usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar org.apache.hadoop.hive.kafka.KafkaSerDe
```

```
Compiled from "KafkaSerDe.java"
public class org.apache.hadoop.hive.kafka.KafkaSerDe extends org.apache.hadoop.hive.serde2.AbstractSerDe {
  public org.apache.hadoop.hive.kafka.KafkaSerDe();
  public void initialize(org.apache.hadoop.conf.Configuration, java.util.Properties) throws org.apache.hadoop.hive.serde2.SerDeException;
  public java.lang.Class<? extends org.apache.hadoop.io.Writable> getSerializedClass();
  public org.apache.hadoop.io.Writable serialize(java.lang.Object, org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector) throws org.apache.hadoop.hive.serde2.SerDeException;
  public org.apache.hadoop.hive.serde2.SerDeStats getSerDeStats();
  public java.lang.Object deserialize(org.apache.hadoop.io.Writable) throws org.apache.hadoop.hive.serde2.SerDeException;
  java.util.ArrayList<java.lang.Object> deserializeKWritable(org.apache.hadoop.hive.kafka.KafkaWritable) throws org.apache.hadoop.hive.serde2.SerDeException;
  public org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector getObjectInspector();
  static {};
}
```
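If you are not sure which JAR under a lib directory ships a given class, a small script can scan them: JARs are plain zip archives, so it is enough to look for the class's `.class` entry. This is only a sketch; the demo below builds a throwaway JAR with a fake class entry rather than reading a live HDP install.

```python
import os
import tempfile
import zipfile

def find_class_in_jars(lib_dir, class_name):
    """Return the jars under lib_dir that contain the given class.

    Dots in the fully-qualified class name become path separators
    inside the archive (e.g. org/apache/.../KafkaSerDe.class).
    """
    entry = class_name.replace(".", "/") + ".class"
    hits = []
    for fname in sorted(os.listdir(lib_dir)):
        if not fname.endswith(".jar"):
            continue
        with zipfile.ZipFile(os.path.join(lib_dir, fname)) as jar:
            if entry in jar.namelist():
                hits.append(fname)
    return hits

# Demo with a throwaway jar (fake class bytes) instead of a real cluster:
tmp = tempfile.mkdtemp()
with zipfile.ZipFile(os.path.join(tmp, "kafka-handler-demo.jar"), "w") as jar:
    jar.writestr("org/apache/hadoop/hive/kafka/KafkaSerDe.class", b"\xca\xfe\xba\xbe")

print(find_class_in_jars(tmp, "org.apache.hadoop.hive.kafka.KafkaSerDe"))
```

On a real cluster you would point `lib_dir` at `/usr/hdp/current/hive-server2/lib` instead.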
09-03-2019 12:48 AM
@sonalidive786 Can you please try restarting the Ambari Server, followed by the Ambari Agent and Metrics Monitor processes on those hosts, and then refresh the Ambari UI in Incognito mode again?
09-02-2019 10:48 PM · 1 Kudo
@Manoj690 As your error is:

```
Caused by: java.sql.SQLException: Access denied for user 'ambari1'@'gaian-lap386.com' (using password: YES)
```

you will need to add a GRANT for the 'ambari1'@'gaian-lap386.com' user in the MySQL DB so that it can connect to MySQL from the host where you are running the Ambari server. Example:

```
# mysql -u root -p
Enter Password: <YOUR_PASSWORD>
mysql> use mysql;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari1'@'gaian-lap386.com' IDENTIFIED BY 'XXXXXXXXXX' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari1'@'%' IDENTIFIED BY 'XXXXXXXXXX' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
```

In the above commands, please replace 'XXXXXXXXXX' with your ambari1 DB user's password. If you still face the issue, please share the output of the following command:

```
mysql> SELECT user, host FROM user;
```

More details about GRANT and permissions in the Ambari DB can be found in the following link: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari/content/amb_using_ambari_with_mysql_or_mariadb.html
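To see why the two GRANT lines above differ: MySQL matches the host part of an account the way LIKE patterns work, with `%` matching any sequence and `_` any single character, so `'ambari1'@'gaian-lap386.com'` only covers connections from that exact host while `'ambari1'@'%'` covers connections from anywhere. Here is a simplified sketch of that matching (real MySQL adds specificity/sorting rules on top):

```python
import re

def mysql_host_matches(pattern, host):
    """Simplified sketch of MySQL's account host matching.

    '%' matches any sequence and '_' any single character, as in LIKE.
    """
    regex = "".join(
        ".*" if c == "%" else "." if c == "_" else re.escape(c)
        for c in pattern
    )
    return re.fullmatch(regex, host) is not None

print(mysql_host_matches("gaian-lap386.com", "gaian-lap386.com"))  # True
print(mysql_host_matches("gaian-lap386.com", "other-host.com"))    # False
print(mysql_host_matches("%", "other-host.com"))                   # True
print(mysql_host_matches("192.168.%", "192.168.1.10"))             # True
```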
09-02-2019 04:58 AM
@CoPen If you are getting exactly the same error, then you might be making the same mistake again. Can you please share the complete log again with the new error? Please see what we noticed earlier:

```
IOError: Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
ERROR 2019-09-02 17:01:47,471 Controller.py:213 - Error:Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
```

Notice the URL "https://192.168.56.101 master master.hadoop.com:8441": it contains the IP address (192.168.56.101), the Ambari server hostname alias (master), and the hostname (master.hadoop.com) all together. The URL is definitely incorrect, which is why you see the mentioned error. Also, as we see the "CERTIFICATE_VERIFY_FAILED] certificate verify failed" error, you should also see this article: https://community.cloudera.com/t5/Community-Articles/Java-Python-Updates-and-Ambari-Agent-TLS-Settings/ta-p/248328 However, the more serious issue is that you are using a three-generations-old Ambari server (2.4.2). Is there a specific reason you are using such an old Ambari server version? The latest version of Ambari is 2.7.4.
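For reference, the agent-side settings involved usually live in `/etc/ambari-agent/conf/ambari-agent.ini`. The fragment below is a sketch with example values (the hostname here is taken from the log above; your paths and values may differ, and the `force_https_protocol` line is the setting discussed in the linked TLS article). Restart the agent with `ambari-agent restart` after editing.

```ini
; /etc/ambari-agent/conf/ambari-agent.ini (relevant sections only; example values)
[server]
; must be a single resolvable hostname, not "IP alias hostname" mashed together
hostname=master.hadoop.com
url_port=8440
secured_url_port=8441

[security]
; work around CERTIFICATE_VERIFY_FAILED on newer Python/Java builds
force_https_protocol=PROTOCOL_TLSv1_2
```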
09-02-2019 03:25 AM · 1 Kudo
@yukti Good to know that your originally reported issue is resolved. If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs up button.

For the new error, it would be best to open a separate topic/thread so that readers of this thread do not get confused by different errors. The new error also looks related to an incorrect JAR version:

```
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:java.lang.ClassNotFoundException Class org.apache.hadoop.hive.kafka.KafkaSerDe not found)
```
09-02-2019 02:51 AM
@tyadav Also, it would be good to start monitoring the free memory of the Sandbox host where all the components are running, so that we get a better idea of why the services are starting slowly and taking almost 25 minutes to start (or failing to start successfully). If you still see failures when starting the selected services, please share the logs of the individual service components that fail to start, along with the "free -m" command output during the startup operation.

If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs up button.
09-02-2019 02:41 AM
@tyadav The HDP Sandbox is a learning setup: a single-node cluster with lots of services deployed on it. So when you choose "Start All Services", it attempts to start every service that is not in maintenance mode, which puts a heavy load on the VM. Ideally, STOP the services you are not currently using and put them in Maintenance Mode, then start only the services you actually need for testing. That way the load on the Sandbox host will be much lower, operations will execute much faster, and you will also see some free memory on the Sandbox host.
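As a sketch, maintenance mode can also be toggled per service through Ambari's REST API with a PUT on the service resource. The host, cluster name, and credentials below are placeholders for a typical Sandbox; the snippet only builds the request URL and JSON body (the equivalent `curl` call is shown in a comment):

```python
import json

AMBARI = "http://sandbox-hdp.hortonworks.com:8080"  # placeholder Ambari host
CLUSTER = "Sandbox"                                  # placeholder cluster name

def maintenance_payload(state):
    """Build the JSON body Ambari expects when toggling maintenance mode."""
    return {
        "RequestInfo": {"context": "Turn %s maintenance mode" % state},
        "Body": {"ServiceInfo": {"maintenance_state": state}},
    }

def maintenance_url(service):
    """Service resource URL for the maintenance-mode PUT request."""
    return "%s/api/v1/clusters/%s/services/%s" % (AMBARI, CLUSTER, service)

# Equivalent call against a live Ambari server:
#   curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
#        -d '<payload JSON>' '<service URL>'
print(maintenance_url("SPARK2"))
print(json.dumps(maintenance_payload("ON")))
```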