Member since 12-21-2017
67 Posts
3 Kudos Received
2 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1081 | 10-15-2018 10:01 AM |
| | 4055 | 03-26-2018 08:23 AM |
04-01-2018
08:38 PM
I have submitted a Spark Java program via "Spark Submit Jar" and it appears to be running well.
However, when I click the logs link for the application in the Job tab in Hue, it shows "cannot access: /jobbrowser/jobs/application_****/single_logs".
So how can I find the logs of a running Spark application?
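If YARN log aggregation is enabled, the container logs can also be fetched outside Hue with the YARN CLI; a sketch, where the application ID below is a placeholder you would copy from the ResourceManager UI or from `yarn application -list`:

```shell
# List applications to find the ID (the ID below is a placeholder):
yarn application -list

# Fetch the aggregated container logs for that application:
yarn logs -applicationId application_1522555555555_0001
```

For an application that is still running, the same logs are also reachable from the ResourceManager web UI via the application's container links.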
Labels:
- Apache Spark
- Cloudera Hue
03-26-2018
08:23 AM
Fixed it by restoring the Spark home setting.
03-23-2018
02:58 AM
I am testing Spark within Zeppelin, but when running the tutorial

%spark2.spark
spark.version

it throws the following error: java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:391)
at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:380)
at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:828)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:483)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Then I disabled the Hive context according to https://stackoverflow.com/questions/43289067/getting-nullpointerexception-when-running-spark-code-in-zeppelin-0-7-1 , but the same exception is still thrown. How can I solve it?

Update 1: I have checked the Spark interpreter log and found the following error: requirement failed: /python/lib/pyspark.zip not found; cannot run pyspark application in YARN mode. How do I locate this file or configure the path?
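The "pyspark.zip not found" check usually comes down to the interpreter's SPARK_HOME: the path is resolved relative to it, so it must point at a complete Spark installation. A sketch of the relevant `conf/zeppelin-env.sh` lines, where the paths are assumptions for a typical HDP layout rather than your actual locations:

```shell
# conf/zeppelin-env.sh -- paths are assumptions, adjust to your install
export SPARK_HOME=/usr/hdp/current/spark2-client   # must contain python/lib/pyspark.zip
export HADOOP_CONF_DIR=/etc/hadoop/conf            # YARN/HDFS client configs
```

After changing it, restart the Zeppelin interpreter and check that `$SPARK_HOME/python/lib/pyspark.zip` actually exists.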
Labels:
- Apache Spark
- Apache Zeppelin
03-19-2018
03:10 AM
I am trying to install and run Apache Ranger; however, it throws a "Java patch PatchPasswordEncryption_J10001 is being applied by some other process" warning and gets stuck at this stage. I have followed this instruction https://community.hortonworks.com/content/supportkb/148592/errorjava-patch-patchpasswordencryption-j10001-is.html , but it still does not work. Can anyone help me?
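The linked KB's approach is to clear the stale patch bookmark in Ranger's database: if a previous setup run died mid-patch, its row blocks every later attempt. A sketch of the queries, where the table and column names are my recollection of that KB article and should be verified against your Ranger version before running anything:

```sql
-- Run against Ranger's database. Inspect the bookmark for the stuck patch:
SELECT * FROM x_db_version_h WHERE version = 'J10001';

-- If a row left over from a dead process is marked inactive, remove it,
-- then restart Ranger Admin so the patch can be re-applied:
DELETE FROM x_db_version_h WHERE version = 'J10001' AND active = 'N';
```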
Labels:
- Apache Ranger
03-13-2018
08:45 AM
Thanks, I have resolved it. I was mistaking "Service Check" for the "Pre-upgrade check".
03-13-2018
08:09 AM
When I try to upgrade the HDP version from 2.6.1 to 2.6.4, I get the following pre-upgrade check error: The following service configurations have been updated and their Service Checks should be run again: HDFS, OOZIE, ZOOKEEPER, HIVE, FLUME, KAFKA, SPARK2.
Failed on: HDFS, OOZIE, ZOOKEEPER, HIVE, FLUME, KAFKA, SPARK2
Labels:
- Hortonworks Data Platform (HDP)
03-08-2018
02:25 AM
I am trying to read data from Kafka and save it as parquet files on HDFS. My code is similar to the following (the difference is that I am writing it in Java):

```scala
val df = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .load()

df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("parquet")
  .option("path", outputPath)
  .option("checkpointLocation", "/tmp/sparkcheckpoint1/")
  .outputMode("append")
  .start()
  .awaitTermination()
```

However, it threw an "Uri without authority: hdfs:/data/_spark_metadata" exception, where "hdfs:///data" is the output path. When I change the code to spark.read and df.write to write the parquet file out once, there is no exception, so I guess it is not related to my HDFS config. Can anyone help me?
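The exception message points at the URI itself rather than at HDFS: `hdfs:/data` (which is what `hdfs:///data` effectively resolves to) has no authority (host) component, and that is what the streaming sink complains about when it derives the `_spark_metadata` path. One commonly suggested workaround is to spell out the NameNode host and port in the output path so the URI carries an authority. A small illustration of the difference, where `namenode:8020` is a placeholder for your actual NameNode address:

```java
import java.net.URI;

public class UriAuthority {
    public static void main(String[] args) {
        // "hdfs:/data" carries no authority (host) component; this is what
        // the streaming sink rejects when building the _spark_metadata path.
        URI noAuthority = URI.create("hdfs:/data");
        System.out.println(noAuthority.getAuthority()); // null

        // Spelling out the NameNode host and port supplies an authority.
        // ("namenode:8020" is a placeholder for your actual NameNode.)
        URI withAuthority = URI.create("hdfs://namenode:8020/data");
        System.out.println(withAuthority.getAuthority()); // namenode:8020
    }
}
```

So passing something like `hdfs://namenode:8020/data` to `.option("path", ...)` may avoid the error, assuming the authority-less URI is indeed the cause in your setup.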
Labels:
- Apache Hadoop
- Apache Spark
01-10-2018
10:05 PM
Hi, buddies,
I met a problem when I tried to install CDH using MySQL as my external database.
After installing MySQL and configuring it, the Cloudera SCM server service log shows:
Tables in hive have unsupported engine type [MyISAM, CSV]. InnoDB is required. Table mapping: ****
I have set InnoDB as the default engine of MySQL and tried to convert the existing tables to the InnoDB engine. However, the InnoDB engine cannot be applied to some tables like user, tables_priv, and slow_log.
The error message is:
ERROR 1579: This storage engine cannot be used for table ***
How can I resolve it?
Thanks a lot!
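Note that user, tables_priv, and slow_log live in the `mysql` system database, whose tables must stay on their built-in engines; only the tables in the Hive metastore database need converting. A sketch that generates the ALTER statements for that database alone, assuming (as a placeholder) that the metastore database is named `hive`:

```sql
-- Generate ALTER statements for non-InnoDB tables in the metastore DB only.
-- "hive" is an assumed database name; the mysql system database is left alone.
SELECT CONCAT('ALTER TABLE hive.', table_name, ' ENGINE=InnoDB;')
FROM information_schema.tables
WHERE table_schema = 'hive' AND engine <> 'InnoDB';
```

Running the emitted statements converts only the metastore tables, which should satisfy the Cloudera SCM server check without touching MySQL's own tables.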
Labels:
- Apache Hive
- Cloudera Manager
01-10-2018
07:51 PM
Thanks a lot! That may be a good solution for my issue!
01-10-2018
07:45 PM
Hi, I have tried it; however, I have no authority to change the hosts file on my computer, since it belongs to the company and I am not an administrator. Even if I could access the web UI directly by changing my local hosts file, it would still not be available to other users. After all, I cannot force every user to change his or her local hosts file.