Member since: 05-19-2016
Posts: 216
Kudos Received: 20
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4195 | 05-29-2018 11:56 PM
| 7032 | 07-06-2017 02:50 AM
| 3770 | 10-09-2016 12:51 AM
| 3543 | 05-13-2016 04:17 AM
10-09-2018
09:04 AM
1 Kudo
Setting up the cron job will make this particular error go away, but eventually you are bound to run into a lot of other issues. Feel free to try, though, and let me know how it goes 🙂
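For reference, the kind of cron-based reset being discussed is typically a one-line system crontab entry along these lines (the file name, hostname, and interval are placeholders, and as noted above this approach can cause other problems):

```
# /etc/cron.d/fqdn-reset (placeholder values): re-apply the FQDN every
# 5 minutes so a GCP network restart cannot leave the short hostname
# in place for long.
*/5 * * * * root /bin/hostname host-1.c.my-project.internal
```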
10-09-2018
08:59 AM
1 Kudo
Sad. Basically, the hostname gets reset on the network restart that happens every few hours in GCP, hence the issue. GCP support suggested we reset it using cron, but that completely broke our server communication. I suggest you get in touch with GCP support, since Cloudera or any other entity won't be able to help with this as far as I know.
10-09-2018
08:41 AM
It happened to us when we ported our servers from AWS to GCP using CloudEndure. Creating the GCP VM instance directly fixed it.
08-29-2018
09:47 PM
I tried it. Indeed, they match.
08-28-2018
11:23 PM
uname -a returns the non-FQDN hostname. Can I edit /proc/sys/kernel/hostname and change the value to the FQDN? At the same time, I have two servers that are fine: their hostname does not change even 24 hours after a restart. (I guess I did not mention that I start getting the different names 24 hours after a server restart; it turns out it happens on a network restart, something we don't do but GCP does.) Those servers also have the non-FQDN name in /proc/sys/kernel/hostname, yet they work fine.
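A small sketch of the relationship being described, assuming the usual Linux behaviour: /proc/sys/kernel/hostname holds whatever the kernel hostname was set to (often the short name), while the FQDN is what that name resolves to via DNS or /etc/hosts (`hostname -f`). The names below are made up for illustration:

```shell
#!/bin/sh
# Placeholder FQDN, as `hostname -f` might return it on such a server.
fqdn="host-1.c.my-project.internal"
# Strip the domain part, leaving the short name that the kernel file
# (/proc/sys/kernel/hostname) typically contains.
short="${fqdn%%.*}"
echo "$short"   # → host-1
```

Writing into /proc/sys/kernel/hostname does change the kernel hostname, but only until the next reset, which is exactly the event being fought here.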
08-25-2018
10:04 AM
Same issue. Were you able to fix this?
08-16-2018
11:13 PM
I am getting this error in Hive queries. CM shows the Hive service is up and running fine, without any exits. I recently connected to Hive through Tableau as well. It usually happens around the same time in the morning.
How do I fix this? What could be the issue? There is no heavy processing on the server when it happens; only a few small queries are running at the time.
Log Upload Time: Fri Aug 17 11:29:53 +0530 2018
Log Length: 959
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/yarn/nm/filecache/1237/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Connecting to jdbc:hive2://ip-172-31-4-192.ap-south-1.compute.internal:10000/default
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://ip-172-31-4-192.ap-south-1.compute.internal:10000/default: java.net.SocketException: Connection reset (state=08S01,code=0)
No current connection
Intercepting System.exit(2)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]
Labels:
- Apache Oozie
- Cloudera Manager
07-02-2018
07:15 AM
My Sqoop jobs and Hive queries are randomly getting killed. All I get in the job logs is: Diagnostics: Application killed by a user. I know for sure that no one is killing these jobs. My RM is running in HA mode. All my services are up and running without any warnings. I don't think it is a memory issue, since my servers have available memory and it happens even when very few jobs are running. Please help.
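One way to find out which client actually issued the kill is the ResourceManager audit log, which records a USER= field for every Kill Application Request. The sketch below greps a simplified sample line in that format (the user and application id are placeholders; the real log path depends on the distribution):

```shell
#!/bin/sh
# Simplified RMAuditLogger-style sample line (placeholder values); on a
# real cluster, grep the ResourceManager log file instead.
sample='USER=hue OPERATION=Kill Application Request APPID=application_1530000000000_0001'
# Extract who requested the kill.
printf '%s\n' "$sample" | grep -o 'USER=[^ ]*'   # → USER=hue
```

If the USER turns out to be a service account (e.g. the one Oozie or Hue runs as), that narrows down which component is issuing the kill.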
Labels:
- Apache Hive
- Apache Sqoop
- Apache YARN
05-25-2018
10:40 AM
I am using the --warehouse-dir argument to load data into HDFS before Sqoop puts it into Hive. I run all my Sqoop jobs through Oozie. If a task fails for some reason, it is reattempted, and the problem is that the warehouse dir created by the previous attempt is still there, so the reattempt fails with the error: output directory already exists. I understand I could use the --direct argument to skip the intermediate HDFS step, but I also need the --hive-drop-import-delims argument, and that is not supported in direct mode. Advice, please? It's important.
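A common workaround, sketched below against the local filesystem with placeholder paths: delete the leftover warehouse directory before each attempt. On a cluster this would be `hadoop fs -rm -r -f` on the table's warehouse path, or, more idiomatically, an Oozie `<prepare><delete path="..."/></prepare>` element on the sqoop action, which Oozie runs on every attempt, including retries:

```shell
#!/bin/sh
# Local simulation of the cleanup (paths are placeholders). On HDFS the
# equivalent command would be:
#   hadoop fs -rm -r -f /user/etl/warehouse/mytable
target=/tmp/warehouse-mytable
rm -rf "$target"      # remove whatever a failed previous attempt left behind
mkdir -p "$target"    # the re-run can now create/write its output directory
echo "cleaned $target"
```

The `<prepare>` approach is usually preferable, since it needs no extra shell action and fires automatically on every reattempt.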
Labels:
- Apache Hive
- Apache Oozie
- Apache Sqoop
- HDFS