Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2442 | 04-27-2020 03:48 AM |
| | 4874 | 04-26-2020 06:18 PM |
| | 3975 | 04-26-2020 06:05 PM |
| | 3212 | 04-13-2020 08:53 PM |
| | 4920 | 03-31-2020 02:10 AM |
09-04-2019 01:26 AM
@Manoj690 We see the failure is caused by the following:

ERROR tool.ImportTool: Import failed: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://gaian-lap386.com:8020/user/gaian/emp already exists

The output directory "hdfs://gaian-lap386.com:8020/user/gaian/emp" already exists in HDFS, which is why the import fails. You can easily check it as:

# su - hdfs -c "hdfs dfs -ls /user/gaian/emp"

So either specify a different table / target directory, or add the following option to your Sqoop command:

--delete-target-dir    Delete the import target directory if it exists

Please refer to the Sqoop User Guide to understand what this option actually does and whether it is feasible for you, or try defining a different "--target-dir" (another HDFS destination directory) on the command line: https://sqoop.apache.org/docs/1.4.4/SqoopUserGuide.html
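In case it helps, here is a hedged sketch of what the re-run could look like; the JDBC URL, database name, and username below are placeholders, not taken from your actual command:

# Hedged sketch only: re-run the import with --delete-target-dir so Sqoop
# removes the existing HDFS output directory before importing.
sqoop import \
  --connect "jdbc:mysql://<db-host>:3306/<db-name>" \
  --username <db-user> -P \
  --table emp \
  --target-dir /user/gaian/emp \
  --delete-target-dir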
09-03-2019 01:05 AM
@yukti As you already have the 3.1.0.0-78 version of the JARs, for the missing class "org.apache.hadoop.hive.kafka.KafkaSerDe" can you try using the following JAR (instead of "hive-serde-0.10.0.jar")? In my HDP 3 installation I can see it on the Hive host at "/usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar":

# /usr/jdk64/jdk1.8.0_112/bin/javap -cp /usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar org.apache.hadoop.hive.kafka.KafkaSerDe
Compiled from "KafkaSerDe.java"
public class org.apache.hadoop.hive.kafka.KafkaSerDe extends org.apache.hadoop.hive.serde2.AbstractSerDe {
public org.apache.hadoop.hive.kafka.KafkaSerDe();
public void initialize(org.apache.hadoop.conf.Configuration, java.util.Properties) throws org.apache.hadoop.hive.serde2.SerDeException;
public java.lang.Class<? extends org.apache.hadoop.io.Writable> getSerializedClass();
public org.apache.hadoop.io.Writable serialize(java.lang.Object, org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector) throws org.apache.hadoop.hive.serde2.SerDeException;
public org.apache.hadoop.hive.serde2.SerDeStats getSerDeStats();
public java.lang.Object deserialize(org.apache.hadoop.io.Writable) throws org.apache.hadoop.hive.serde2.SerDeException;
java.util.ArrayList<java.lang.Object> deserializeKWritable(org.apache.hadoop.hive.kafka.KafkaWritable) throws org.apache.hadoop.hive.serde2.SerDeException;
public org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector getObjectInspector();
static {};
}
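As a hedged sketch (the JAR path assumes the HDP 3.1.0.0-78 layout shown above, and the HiveServer2 JDBC URL is a placeholder), you could make that JAR visible to your Hive session before creating the table:

# Hedged sketch: register the Kafka handler JAR in the current Hive session.
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" \
  -e "ADD JAR /usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar;"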
09-03-2019 12:48 AM
@sonalidive786 Can you please try restarting the Ambari Server, followed by the Ambari Agent and Metrics Monitor processes on those hosts, and then refresh the Ambari UI in Incognito mode again?
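For reference, a hedged sketch of that restart sequence (run as root on the respective hosts; the Metrics Monitor script path below is an assumption, and it can also be restarted from the Ambari UI instead):

# Hedged sketch: restart sequence
ambari-server restart    # on the Ambari Server host
ambari-agent restart     # on each affected host
# Assumed script location; alternatively restart Metrics Monitor via the Ambari UI:
/usr/sbin/ambari-metrics-monitor --config /etc/ambari-metrics-monitor/conf restart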
... View more
09-02-2019 10:52 PM
@Manoj690 I think you have already raised a separate topic for the Ambari Server issue: https://community.cloudera.com/t5/Support-Questions/ambari-server-issue-its-not-running/m-p/269593#M206953 The current thread is for the Sqoop command failure.
09-02-2019 10:48 PM
1 Kudo
@Manoj690 As your error is the following:

Caused by: java.sql.SQLException: Access denied for user 'ambari1'@'gaian-lap386.com' (using password: YES)

you will need to add grants for the 'ambari1'@'gaian-lap386.com' user in the MySQL DB so that it can connect to MySQL from the host where the Ambari Server is running. Example:

# mysql -u root -p
Enter Password: <YOUR_PASSWORD>
mysql> use mysql;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari1'@'gaian-lap386.com' IDENTIFIED BY 'XXXXXXXXXX' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'ambari1'@'%' IDENTIFIED BY 'XXXXXXXXXX' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

In the above commands, please replace 'XXXXXXXXXX' with your ambari1 DB user's password. If you still face the issue, please share the output of the following command:

mysql> SELECT user, host FROM user;

More details about GRANT and permissions for the Ambari DB can be found in the following link: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari/content/amb_using_ambari_with_mysql_or_mariadb.html
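A quick, hedged way to verify the grant took effect (run from the Ambari Server host; <mysql-host> is a placeholder for your database host):

# Hedged sketch: confirm 'ambari1' can log in from the Ambari Server host
mysql -u ambari1 -p -h <mysql-host> -e "SELECT CURRENT_USER();"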
09-02-2019 10:45 PM
@Manoj690 Is this issue resolved? If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs-up button.
09-02-2019 10:00 PM
@yvettew One thing is for sure: Ambari is not producing the 502 Bad Gateway error here. The Ambari Server runs on Jetty (not on nginx), so the error you are seeing in your browser is definitely not coming from the Ambari Server. It must be coming from some nginx server, as we see 502 Bad Gateway nginx/1.15.7. Can you share whatever output was generated by the curl command below? Even if the output is blank, can you please share a screenshot of the error or the browser debugger console output?

# curl -iLv -u admin:admin -H "X-Requested-By: ambari" -X GET http://$AMBARI_HOSTNAME:8080/api/v1/check
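As an additional hedged check (the host and port below are placeholders for whatever address you use in the browser), the Server response header will tell you whether nginx or Ambari's Jetty is answering:

# Hedged sketch: see which server answers the browser-facing URL
curl -sI http://<browser-facing-host>:<port>/ | grep -i '^Server:'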
09-02-2019 04:58 AM
@CoPen If you are getting exactly the same error, then you might be making the same mistake again. Can you please share the complete log again with the new error? Please see what we noticed earlier:

IOError: Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
ERROR 2019-09-02 17:01:47,471 Controller.py:213 - Error:Request to https://192.168.56.101 master master.hadoop.com:8441/agent/v1/register/master failed due to <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

Notice the URL "https://192.168.56.101 master master.hadoop.com:8441": it contains the IP address (192.168.56.101), the Ambari Server hostname alias (master), and the hostname (master.hadoop.com) all together. The URL is definitely incorrect, which is why you see the mentioned error. Also, as we see the "CERTIFICATE_VERIFY_FAILED] certificate verify failed" error, you should also see this article: https://community.cloudera.com/t5/Community-Articles/Java-Python-Updates-and-Ambari-Agent-TLS-Settings/ta-p/248328 However, the more serious issue is that you are using a three-generations-old Ambari Server (2.4.2). Is there any specific reason you are using such an old Ambari version? The latest version of Ambari is 2.7.4.
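For reference, the TLS workaround that article describes is along these lines (hedged sketch; please verify the exact setting in the article before applying it):

# Hedged sketch, per the linked article: force the agent to use TLSv1.2 by
# adding this line under the [security] section of
# /etc/ambari-agent/conf/ambari-agent.ini, then run "ambari-agent restart"
[security]
force_https_protocol=PROTOCOL_TLSv1_2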
09-02-2019 03:25 AM
1 Kudo
@yukti Good to know that your originally reported issue is resolved. If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs-up button. For the new error, it would be best if you open a separate topic/thread so that users of this thread do not get confused by different errors. The new error also looks related to an incorrect JAR version:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:java.lang.ClassNotFoundException Class org.apache.hadoop.hive.kafka.KafkaSerDe not found)
09-02-2019 02:51 AM
@tyadav Also, it would be great to start monitoring the free memory of the Sandbox host where all the components are running, so that we get a better idea of why the services are starting slowly and taking almost 25 minutes to start, or failing to start successfully. If you still see failures when starting the selected services, please share the logs of the individual service components that fail to start, along with the "free -m" command output during the startup operation. If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs-up button.
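A minimal, hedged way to capture that memory data during the startup window (plain shell; the log path is arbitrary):

# Hedged sketch: log timestamped free-memory samples every 30 seconds
# while the services are starting (stop it with Ctrl+C)
while true; do date; free -m; echo; sleep 30; done | tee -a /tmp/free-mem.log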