Member since
12-06-2016
40
Posts
5
Kudos Received
3
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1774 | 01-03-2017 02:53 PM |
 | 2726 | 12-29-2016 05:02 PM |
 | 8691 | 12-22-2016 06:34 PM |
02-08-2017
09:25 PM
@mhegedus thank you very much 🙂
02-08-2017
03:30 PM
Hello, while working through Lab 4 of the Hello World series, I'm facing the following error when loading data into the HBase table:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY,events:driverId,events:driverName,events:eventTime,events:eventType,events:latitudeColumn,events:longitudeColumn,events:routeId,events:routeName,events:truckId" driver_dangerous_event hdfs://sandbox.hortonworks.com:/tmp/data.csv
SyntaxError: (hbase):8: syntax error, unexpected ','
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=, -Dimporttsv.columns="HBASE_ROW_KEY,events:driverId,events:driverName,events:eventTime,events:eventType,events:latitudeColumn,events:longitudeColumn,events:routeId,events:routeName,events:truckId" driver_dangerous_event hdfs://sandbox.hortonworks.com:/tmp/data.csv
^
It looks like it doesn't accept the separator syntax! Any idea, please?
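The "(hbase):8" in the error message suggests the command was typed inside the interactive hbase shell, which parses input as Ruby and trips over the commas. A possible fix (a sketch, assuming the sandbox setup from the tutorial) is to run ImportTsv from the operating-system shell instead:

```shell
# Run from the Linux shell on the sandbox, NOT from inside `hbase shell`.
# This command requires a running HBase/HDFS cluster; it is shown as a sketch.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=, \
  -Dimporttsv.columns="HBASE_ROW_KEY,events:driverId,events:driverName,events:eventTime,events:eventType,events:latitudeColumn,events:longitudeColumn,events:routeId,events:routeName,events:truckId" \
  driver_dangerous_event hdfs://sandbox.hortonworks.com:/tmp/data.csv
```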
Labels:
- Apache HBase
01-20-2017
09:23 PM
Hi Aldo, it is in the HDFS service configuration. The parameter is called "NameNode Java heap size".
01-05-2017
10:39 AM
@Michael Young Hi, the LONG type is not permitted in Hive. I tried using FLOAT for the riskfactor column, but the error persists. Any idea, please? log-error.txt
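For reference, Hive has no LONG type: 64-bit integers are BIGINT, and floating-point columns are FLOAT or DOUBLE. One way to sidestep the type mismatch is to cast the column explicitly during the CTAS. A minimal sketch, assuming the Zeppelin %spark interpreter and the lab's column names (driverid and riskfactor are assumptions, not confirmed in the thread):

```scala
// Sketch: force the riskfactor column to DOUBLE while creating the table.
// Assumes hiveContext is already defined in the %spark paragraph.
hiveContext.sql(
  "CREATE TABLE riskfactor AS " +
  "SELECT driverid, CAST(riskfactor AS DOUBLE) AS riskfactor FROM finalresults")
```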
01-04-2017
06:57 PM
@Michael Young Hi, why does it try to convert the data type? The statement is a "create table as select", so the result table's column types should follow the source table. Why is it converting this column's type? Also, the DOUBLE data type is supported by Hive, and the same query works fine when run directly in Hive (create table riskfactor as select * from finalresults;).
01-04-2017
06:36 PM
1 Kudo
Hi, trying to finalize Lab 4: Riskfactor Analysis with Spark. When executing the final instruction, I got the following error:

%spark
hiveContext.sql("create table riskfactor as select * from finalresults")

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 7, vds001.databridge.tn): java.lang.ClassCastException: org.apache.hadoop.hive.serde2.io.DoubleWritable cannot be cast to org.apache.hadoop.io.LongWritable

All log records are enclosed. Any idea, please? spark-error.txt
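One common cause of this ClassCastException (a guess, not confirmed in the thread) is a riskfactor table left over from an earlier lab step whose column types, such as BIGINT, no longer match the DOUBLE values being selected from finalresults. A minimal sketch of clearing the stale table before recreating it, assuming the Zeppelin %spark interpreter:

```scala
// Sketch: drop any stale riskfactor table so the CTAS can define fresh
// column types from finalresults instead of writing into the old schema.
hiveContext.sql("DROP TABLE IF EXISTS riskfactor")
hiveContext.sql("CREATE TABLE riskfactor AS SELECT * FROM finalresults")
```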
Labels:
- Apache Hive
- Apache Spark
01-04-2017
06:29 PM
Now it works fine, thank you.
01-04-2017
05:34 PM
Hi, trying to complete Lab 4 (Riskfactor Analysis with Spark), I got the following error on the last instruction:

%spark
hiveContext.sql("create table riskfactor as select * from finalresults")

<console>:28: error: not found: value hiveContext
hiveContext.sql("create table riskfactor as select * from finalresults")
^

Any idea, please? Regards,
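The "not found: value hiveContext" error means the notebook paragraph never defined hiveContext; Zeppelin's %spark interpreter provides sc (the SparkContext), but the HiveContext may need to be created explicitly. A minimal sketch for the Spark 1.x API used in the sandbox:

```scala
// Sketch, assuming a Zeppelin %spark paragraph where sc is already bound.
// Spark 1.x: build a HiveContext from the existing SparkContext.
import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)
hiveContext.sql("create table riskfactor as select * from finalresults")
```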
Labels:
- Apache Hive
- Apache Spark
- Apache Zeppelin
01-04-2017
10:59 AM
1 Kudo
Hi, trying to finalize Lab 4: Riskfactor Analysis with Spark. Everything works until the final step, saving data into the riskfactor table with a "create table as select" statement, where I get a connection problem. Hive itself works fine. Any idea, please?

java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:51)
at org.apache.zeppelin.interpreter.remote.ClientFactory.create(ClientFactory.java:37)
at org.apache.commons.pool2.BasePooledObjectFactory.makeObject(BasePooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.getClient(RemoteInterpreterProcess.java:189)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:258)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:281)
at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:328)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

The last executed command was:

%spark
hiveContext.sql("create table riskfactor as select * from finalresults")
Labels:
- Apache Hive
- Apache Spark
- Apache Zeppelin
01-03-2017
02:53 PM
@Jay SenSharma Hi, I added the VDS's IP address to my local hosts file and it works now 🙂 Thank you for your help.