Member since: 03-26-2017
Posts: 61
Kudos Received: 1
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3059 | 08-27-2018 03:19 PM
| 25259 | 08-27-2018 03:18 PM
| 9714 | 04-02-2018 01:54 PM
07-05-2018
01:11 PM
How do I configure it? I already have the code part.
07-05-2018
11:41 AM
Hi All, I have a scenario as follows: I need to split my DataFrame and save it into multiple partitions based on date, and while writing I need to take a record count. Is there any option to get the record count when writing a DataFrame? Please comment.
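One way to sketch this (assuming a DataFrame with a date column; the column name `event_date` and the paths below are placeholders, not from the original thread): take the per-date counts with an aggregation before the write, then partition the output by the same column so the counts correspond to the partition directories.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count

val spark = SparkSession.builder().appName("PartitionedWrite").getOrCreate()

// Hypothetical input; assume it has a date column named "event_date".
val df = spark.read.parquet("/data/input")

// Record count per date, taken before the write so the numbers match
// what lands in each event_date=... partition directory.
val countsPerDate = df.groupBy("event_date").agg(count("*").as("records"))
countsPerDate.show()

// One output directory per date value (event_date=YYYY-MM-DD/...).
df.write
  .partitionBy("event_date")
  .mode("overwrite")
  .parquet("/data/output")
```

For a single total count taken during the write itself, a `LongAccumulator` incremented in a `map` over the rows is another option, but the aggregation above is the simplest way to get per-partition counts.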
07-05-2018
11:28 AM
Hi, I want to read files from a remote Hadoop cluster (A) with HA enabled and load them into cluster (B). Please let me know if there are any options.
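A sketch of one approach, assuming HDFS HA on both sides (all nameservice IDs, namenode IDs, and hostnames below are placeholders): register cluster A's HA nameservice in the client's Hadoop configuration so that `hdfs://clusterA/...` paths resolve, then read and write by nameservice URI.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("CrossClusterCopy").getOrCreate()
val conf = spark.sparkContext.hadoopConfiguration

// Placeholder nameservice/host names: make the remote HA nameservice
// "clusterA" resolvable from this client in addition to the local one.
conf.set("dfs.nameservices", "clusterA,clusterB")
conf.set("dfs.ha.namenodes.clusterA", "nn1,nn2")
conf.set("dfs.namenode.rpc-address.clusterA.nn1", "a-nn1.example.com:8020")
conf.set("dfs.namenode.rpc-address.clusterA.nn2", "a-nn2.example.com:8020")
conf.set("dfs.client.failover.proxy.provider.clusterA",
  "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

// Read from cluster A, write to cluster B, addressing each by nameservice.
val df = spark.read.text("hdfs://clusterA/data/source")
df.write.text("hdfs://clusterB/data/target")
```

For a plain bulk copy without Spark, `hadoop distcp hdfs://clusterA/src hdfs://clusterB/dst` is the usual alternative, given the same nameservice configuration.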
Labels:
- Apache Hadoop
- Apache Spark
06-13-2018
05:21 PM
It's already installed and the issue is resolved now, thanks for your response.
06-13-2018
05:19 PM
Thanks for your help @Felix Albani, it let me run without any platform modification.
06-11-2018
01:01 PM
Hi All, I'm getting the following error when I try to submit a Spark job to read a sequence file:

18/06/07 19:35:25 ERROR Executor: Exception in task 8.0 in stage 16.0 (TID 611)
java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
    at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:193)
    at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:178)
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1985)
    at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1880)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1829)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1843)
    at org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(SequenceFileRecordReader.java:49)
    at org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(SequenceFileInputFormat.java:64)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:251)
    at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:250)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
    at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

My details:
1) Spark 2.2.1
2) Scala 2.11.8
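The error usually means Spark's JVMs cannot see the Hadoop native libraries (verify with `hadoop checknative -a`, which should report snappy as true). A hedged sketch of one common fix, passing the native library path to driver and executors; the path below is an assumption typical of HDP installs, so substitute your cluster's actual native-lib directory:

```
# spark-defaults.conf (or pass each line via --conf to spark-submit)
# Hypothetical native-lib path; adjust to your installation.
spark.driver.extraLibraryPath    /usr/hdp/current/hadoop-client/lib/native
spark.executor.extraLibraryPath  /usr/hdp/current/hadoop-client/lib/native
```

As the later replies in this thread note, once the native snappy library is installed and visible on this path, the job runs without any other platform modification.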
Labels:
- Apache Hadoop
- Apache Spark
04-24-2018
02:02 PM
Hi All, could someone help me get started fetching data from the YARN REST API using Java? Please share some sample links. Thanks and regards, MJ
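As a starting point, the YARN ResourceManager exposes cluster and application information over plain HTTP under `/ws/v1/cluster/...`. A minimal sketch using only the standard library (the ResourceManager host `rm-host.example.com:8088` is a placeholder; 8088 is the default web port):

```scala
import java.net.URL
import scala.io.Source

object YarnRestClient {
  def main(args: Array[String]): Unit = {
    // Placeholder ResourceManager address; /ws/v1/cluster/apps lists applications.
    val url  = new URL("http://rm-host.example.com:8088/ws/v1/cluster/apps")
    val conn = url.openConnection()
    conn.setRequestProperty("Accept", "application/json")
    val body = Source.fromInputStream(conn.getInputStream).mkString
    println(body) // JSON payload; parse with any JSON library
  }
}
```

Other useful endpoints include `/ws/v1/cluster/info` and `/ws/v1/cluster/metrics`; the same pattern works from plain Java with `HttpURLConnection`.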
Labels:
- Labels:
-
Apache YARN
04-23-2018
08:57 AM
Thanks @Pierre Villard
04-02-2018
01:54 PM
The issue was resolved by adding an SBT dependency to my project matching the Hive metastore version available in the hive/lib directory.
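For reference, such a fix can be sketched in `build.sbt` as below; the artifacts and version numbers are placeholders (the Spark version is the 2.2.1 mentioned in this thread, the Hive version must match the jars actually present in hive/lib):

```scala
// build.sbt -- versions are placeholders; match your cluster's hive/lib jars
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-hive" % "2.2.1" % "provided",
  "org.apache.hive"  %  "hive-exec"  % "1.2.1" % "provided"
)
```

Marking the dependencies `provided` keeps the cluster's own jars authoritative at runtime while letting the project compile against matching APIs.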
03-28-2018
06:27 AM
@Rahul Soni I'm using Spark <2.2.1> and Hive <2.4.2.129-1> and I'm still getting this issue.
... View more