Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2487 | 04-27-2020 03:48 AM |
| | 4959 | 04-26-2020 06:18 PM |
| | 4042 | 04-26-2020 06:05 PM |
| | 3277 | 04-13-2020 08:53 PM |
| | 4991 | 03-31-2020 02:10 AM |
09-05-2019
01:11 AM
@jsensharma thanks. That helps.
09-05-2019
12:28 AM
Oh, thanks! I upgraded my Ambari version and changed my hostname from master to master.hadoop.com together!
09-04-2019
01:27 AM
@sonalidive786 Good to know that it has resolved your issue. If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
09-04-2019
01:26 AM
@Manoj690 We see the failure is because of the following cause:

    ERROR tool.ImportTool: Import failed: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://gaian-lap386.com:8020/user/gaian/emp already exists

The output directory "hdfs://gaian-lap386.com:8020/user/gaian/emp" already exists in HDFS, hence the import is failing. You can easily check it as:

    # su - hdfs -c "hdfs dfs -ls /user/gaian/emp"

So either specify a different table / target dir, or add the following option to your Sqoop command:

    --delete-target-dir    Delete the import target directory if it exists

Please refer to the doc below to see what this option actually does and whether it is feasible for you. Alternatively, you can define a different "--target-dir" (another HDFS destination directory) on the command line. https://sqoop.apache.org/docs/1.4.4/SqoopUserGuide.html
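For illustration, here is a minimal sketch of a Sqoop command with that option added. The JDBC URL, username, password file, and database name are placeholders, not values from your environment:

```
# Hedged sketch: connection URL, user, and password file are placeholders.
sqoop import \
  --connect jdbc:mysql://your-db-host/yourdb \
  --username youruser \
  --password-file /user/youruser/db.password \
  --table emp \
  --target-dir /user/gaian/emp \
  --delete-target-dir
```

With --delete-target-dir, Sqoop removes /user/gaian/emp before the import runs, so re-runs overwrite the previous output instead of failing.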
09-03-2019
01:05 AM
@yukti Since you already have the 3.1.0.0-78 version of the JARs, for the missing class "org.apache.hadoop.hive.kafka.KafkaSerDe" can you try using the following JAR instead of "hive-serde-0.10.0.jar"? In my HDP 3 installation I can see it on the Hive host at "/usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar":

    # /usr/jdk64/jdk1.8.0_112/bin/javap -cp /usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar org.apache.hadoop.hive.kafka.KafkaSerDe
    Compiled from "KafkaSerDe.java"
    public class org.apache.hadoop.hive.kafka.KafkaSerDe extends org.apache.hadoop.hive.serde2.AbstractSerDe {
      public org.apache.hadoop.hive.kafka.KafkaSerDe();
      public void initialize(org.apache.hadoop.conf.Configuration, java.util.Properties) throws org.apache.hadoop.hive.serde2.SerDeException;
      public java.lang.Class<? extends org.apache.hadoop.io.Writable> getSerializedClass();
      public org.apache.hadoop.io.Writable serialize(java.lang.Object, org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector) throws org.apache.hadoop.hive.serde2.SerDeException;
      public org.apache.hadoop.hive.serde2.SerDeStats getSerDeStats();
      public java.lang.Object deserialize(org.apache.hadoop.io.Writable) throws org.apache.hadoop.hive.serde2.SerDeException;
      java.util.ArrayList<java.lang.Object> deserializeKWritable(org.apache.hadoop.hive.kafka.KafkaWritable) throws org.apache.hadoop.hive.serde2.SerDeException;
      public org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector getObjectInspector();
      static {};
    }
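If the class resolves there, a minimal sketch of registering that JAR for your current Hive session follows. The Beeline JDBC URL is a placeholder, and ADD JAR only lasts for the session; for a permanent setup you would add the JAR to Hive's aux JARs path instead:

```
# Hedged sketch: the JDBC URL is a placeholder; ADD JAR applies to this session only.
beeline -u "jdbc:hive2://your-hiveserver2-host:10000/default" \
  -e "ADD JAR /usr/hdp/current/hive-server2/lib/kafka-handler-3.1.0.3.1.0.0-78.jar;"
```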
09-02-2019
06:42 AM
Hi, have you found a solution? I have the same problem on my production cluster. HDFS HA has been implemented for several years without problems, but recently we realized that an HDFS client has to wait 20 seconds when the server hosting nn1 is shut down. Example with debug mode enabled:

    19/08/29 11:03:05 DEBUG ipc.Client: Connecting to XXXXX/XXXXX:8020
    19/08/29 11:03:23 DEBUG ipc.Client: Failed to connect to server: XXXXX/XXXXX:8020: try once and fail.
    java.net.NoRouteToHostException: No route to host

A few details:

- Hadoop version: 2.7.3
- dfs.client.retry.policy.enabled: false
- dfs.client.failover.sleep.max.millis: 15000
- ipc.client.connect.timeout: 20000

Thanks for your help!
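The roughly 18-second gap between the two log lines is close to the ipc.client.connect.timeout of 20000 ms listed above, so it may help to confirm the effective client-side values. A hedged sketch, using the key names from the post:

```
# Hedged sketch: print the effective client-side values mentioned above.
hdfs getconf -confKey ipc.client.connect.timeout
hdfs getconf -confKey dfs.client.failover.sleep.max.millis
hdfs getconf -confKey dfs.client.retry.policy.enabled
```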
09-02-2019
02:51 AM
@tyadav It would also be great to start monitoring the free memory of the Sandbox host where all the components are running, so that we get a better idea of why the services start slowly, taking almost 25 minutes to start or failing to start successfully. If you still see failures when starting the selected services, please share the logs of the individual service components that fail to start, along with the "free -m" command output captured during the startup operation. If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
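For example, a minimal sketch for capturing that output at a fixed interval during startup; the interval and log path are arbitrary choices:

```
# Hedged sketch: sample free memory every 10 seconds; stop with Ctrl-C.
# The interval and output file are arbitrary choices.
while true; do
  date >> /tmp/free-mem.log
  free -m >> /tmp/free-mem.log
  sleep 10
done
```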
09-02-2019
01:49 AM
Oh, I see now. There seems to be a problem on the server side when rewriting the docs.hortonworks.com domain to docs.cloudera.com: the /HDPDocuments path segment appears twice in the URL. Once I'm on the docs.cloudera.com domain, the links work again...