Member since: 09-12-2014
Posts: 48
Kudos Received: 4
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 10637 | 03-16-2016 09:52 AM |
| | 6956 | 02-11-2016 09:54 AM |
| | 7323 | 06-15-2015 09:16 AM |
| | 7595 | 06-15-2015 02:14 AM |
| | 5131 | 06-10-2015 10:12 PM |
02-28-2024
09:51 PM
@ctrl_alt_delete, I have reached out to you with further details.
11-28-2021
09:22 PM
If you are facing a clock offset issue, the usual cause is that NTP on that host is not in sync.
To resolve it, find the host reporting the problem, log in to it through the CLI, and run the following (a consolidated sketch follows after the list):
1. systemctl status ntpd.service (check the status; if ntpd is not running it shows as inactive)
2. route -n (copy the NTP server IP address from the Destination column)
3. ntpdate <ntp server ip address>
4. systemctl start ntpd.service (start ntpd so the host syncs; wait a while after running it)
5. ntpstat (check whether the clock is synchronized; it should report synchronised)
After performing the above steps, your clock offset issue should be resolved.
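For reference, the same sequence as a single shell sketch; the NTP server address is a placeholder you would take from the route -n output (or from /etc/ntp.conf), and this assumes the host runs ntpd rather than chronyd.
```bash
# Run on the host reporting the clock offset (assumes ntpd, not chronyd)
systemctl status ntpd.service     # check whether ntpd is running
systemctl stop ntpd.service       # ntpdate cannot run while ntpd holds port 123
ntpdate <ntp-server-ip>           # one-time sync; replace with your NTP server IP
systemctl start ntpd.service      # start ntpd again so it keeps the clock in sync
sleep 60                          # give it a little time to settle
ntpstat                           # should report "synchronised to NTP server ..."
```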
Regards,
06-07-2020
12:37 AM
I have the same issue. I'm using Spark 2.4.4, Hive 3.1.2, and Hadoop 3.2.1 in a Scala sbt project. The error message is given below:
13:03:38.626 [main] DEBUG org.apache.hadoop.util.Shell - Failed to detect a valid hadoop home directory
java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset.
at org.apache.hadoop.util.Shell.checkHadoopHomeInner(Shell.java:469) ~[hadoop-common-3.1.0.jar:na]
at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:440) ~[hadoop-common-3.1.0.jar:na]
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:517) ~[hadoop-common-3.1.0.jar:na]
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:2327) [hive-exec-1.2.1.spark2.jar:1.2.1.spark2]
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:365) [hive-exec-1.2.1.spark2.jar:1.2.1.spark2]
at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105) [hive-exec-1.2.1.spark2.jar:1.2.1.spark2]
at java.lang.Class.forName0(Native Method) [na:1.8.0_252]
at java.lang.Class.forName(Class.java:348) [na:1.8.0_252]
at org.apache.spark.util.Utils$.classForName(Utils.scala:238) [spark-core_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SparkSession$.hiveClassesArePresent(SparkSession.scala:1117) [spark-sql_2.11-2.4.4.jar:2.4.4]
at org.apache.spark.sql.SparkSession$Builder.enableHiveSupport(SparkSession.scala:866) [spark-sql_2.11-2.4.4.jar:2.4.4]
at UpsertFeature$.<init>(UpsertFeature.scala:20) [classes/:na]
at UpsertFeature$.<clinit>(UpsertFeature.scala) [classes/:na]
at UpsertFeature.main(UpsertFeature.scala) [classes/:na]
13:03:38.788 [main] DEBUG org.apache.hadoop.util.Shell - setsid exited with exit code 0
Exception in thread "main" java.lang.ExceptionInInitializerError
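Not part of the original post, but a minimal sketch of the usual workaround when HADOOP_HOME and hadoop.home.dir are unset for a local sbt run; the installation path below is an assumed placeholder, not something stated above.
```bash
# Point the JVM at a local Hadoop installation before launching the sbt project.
# /opt/hadoop-3.2.1 is a hypothetical path; use wherever Hadoop is unpacked.
export HADOOP_HOME=/opt/hadoop-3.2.1
export PATH="$HADOOP_HOME/bin:$PATH"

# Alternatively, pass hadoop.home.dir as a JVM system property to sbt:
sbt -Dhadoop.home.dir=/opt/hadoop-3.2.1 run
```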
09-11-2019
07:04 PM
I had the same issue on one of the nodes, and it was related to the /etc/resolv.conf entry. I changed the nameserver details to match the other nodes and that fixed it.
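As a rough illustration (the nameserver address below is only a placeholder, not from the original reply), the fix amounts to comparing /etc/resolv.conf across nodes and aligning the odd one out:
```bash
# On the problem node, inspect the current DNS configuration
cat /etc/resolv.conf

# Compare with a healthy node and copy its nameserver entries, e.g.:
# nameserver 10.0.0.2   # placeholder; use the value from the working nodes
```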
07-31-2019
02:07 PM
@sparkd, While we can't be sure, it is likely that permissions were changed on the /tmp directory in HDFS so that the Service Monitor (which executes the HDFS canary health check) could not access it. The Service Monitor uses the "hue" user and principal to access other resources, so it is reasonable to assume /tmp in HDFS no longer allowed the hue user or group to write to it. Are you having similar trouble? If so, check your Service Monitor log file for stack traces and errors related to the HDFS canary.
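A minimal sketch of how one might check (and, if appropriate, restore) the permissions on /tmp in HDFS; the 1777 mode is the common default for /tmp, which is my assumption rather than something stated in the original reply.
```bash
# Inspect the current owner and mode of /tmp in HDFS
hdfs dfs -ls -d /tmp

# Typical default is world-writable with the sticky bit (run as the HDFS superuser)
sudo -u hdfs hdfs dfs -chmod 1777 /tmp
```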
06-19-2018
02:02 PM
I received a similar error, but for the LzopCodec (not LzoCodec) not being found. In this case, I actually wanted to use the LzoCodec as my default compression codec (for legacy reasons...). To get Hive working with LZO on CDH 5.14.x, I did the following:
1)
# Add GPLEXTRAS parcel to CM and distribute to all nodes:
https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cm_ig_install_gpl_extras.html
2)
# Configure Hive to use it:
http://www.roggr.com/2014/06/enabling-lzo-compression-for-hive-to.html
ClouderaManager -> Hive -> Configuration -> Service-wide -> Advanced -> Hive Auxiliary JARs Directory : /opt/cloudera/parcels/GPLEXTRAS/lib/hadoop/lib
# Verify it works by invoking 'show tables' in Hive with debug logging enabled. There should not be any errors. If it is not working, the error complains about LzopCodec (not LzoCodec) not being found.
hive --hiveconf hive.root.logger=DEBUG,console
3)
# Configure HDFS
ClouderaManager -> HDFS -> Configuration -> Service-wide -> Compression Codecs (io.compression.codecs):
+ com.hadoop.compression.lzo.LzoCodec
+ com.hadoop.compression.lzo.LzopCodec
4)
# Configure YARN/MR2
# http://blog.cloudera.com/blog/2013/07/one-engineers-experience-with-parcel/
ClouderaManager -> YARN/MR2 -> Configuration -> SearchFor: compress ->
mapreduce.output.fileoutputformat.compress: checked
mapreduce.output.fileoutputformat.compress.codec: com.hadoop.compression.lzo.LzoCodec
mapreduce.map.output.compress: checked
mapreduce.map.output.compress.codec: com.hadoop.compression.lzo.LzoCodec
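Not from the original post, but a small sketch of how one might confirm the codec settings took effect after redeploying client configuration and restarting; the grep pattern is just illustrative.
```bash
# Check that the LZO codecs appear in the effective client configuration
hdfs getconf -confKey io.compression.codecs | tr ',' '\n' | grep -i lzo

# Run Hive with debug logging; 'show tables' should complete without codec errors
hive --hiveconf hive.root.logger=DEBUG,console -e 'show tables;'
```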
10-24-2017
12:12 AM
There is a simple method to remove those (see the consolidated sketch below):
1. List those directories into a text file, e.g. hadoop fs -ls /path > test
2. cat -t test will show you the positions of the duplicates containing junk (non-printing) characters
3. Open another shell and comment lines out with # to identify the exact culprits
4. cat -t the file again to confirm you commented the culprits
5. Remove the original (valid) folder from the list
6. for i in `cat list`; do hadoop fs -rmr $i; done
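The same idea as a consolidated sketch; /path and the file name are placeholders, and the grep step for skipping commented-out lines is my assumption about how to keep the valid directory out of the deletion loop.
```bash
# 1. Capture the directory paths under /path into a working file
hadoop fs -ls /path | grep '^d' | awk '{print $NF}' > list

# 2. Reveal non-printing junk characters so you can spot the bad entries
cat -t list

# 3. Edit the file: comment out (#) or delete the entries you want to KEEP,
#    so that only the junk-named duplicates remain uncommented.

# 4. Remove whatever is left uncommented (hadoop fs -rmr is deprecated; -rm -r also works)
grep -v '^#' list | while read -r dir; do
  hadoop fs -rmr "$dir"
done
```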
09-27-2017
06:58 AM
Hi @Shafiullah, so your job completes but it is still showing as failed, right? Do you see any suspicious messages in the full container logs?
07-13-2017
03:28 AM
Starting a JVM as below will start it with 256 MB of heap and allow the process to use up to 2048 MB of heap:
java -Xmx2048m -Xms256m
More about memory management
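A quick way to confirm what the flags resolve to (my addition, not from the original reply) is to print the final JVM flags:
```bash
# -Xms sets the initial heap, -Xmx the maximum heap; PrintFlagsFinal shows the resolved values
java -Xms256m -Xmx2048m -XX:+PrintFlagsFinal -version | grep -Ei 'initialheapsize|maxheapsize'
```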
05-12-2016
07:16 AM
Thanks, this worked for me.