Member since: 04-17-2018
Posts: 9
Kudos Received: 0
Solutions: 0
05-28-2018
12:23 PM
Thank you @Nishant Bangarwa! The "No module named psycopg2" error is solved! But now I am getting another error: Fatal: no pg_hba.conf entry for host "x.x.x.x", user "x", database "x", SSL off. Also, I am not sure which user and password to use for my Postgres database.
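A minimal sketch of a direct connection test with psycopg2, outside Superset, assuming the host, user, password, and database placeholders below are replaced with real values; it reproduces the pg_hba.conf / SSL error on its own and lets the credentials be verified. The pg_hba.conf change itself has to be made on the Postgres server side and is only hinted at in the comments.

# Hedged sketch: test the Postgres credentials and the pg_hba.conf/SSL
# behaviour directly with psycopg2, bypassing Superset.
# All connection values below are placeholders, not known settings.
import psycopg2

try:
    conn = psycopg2.connect(
        host="x.x.x.x",        # placeholder host
        dbname="mydatabase",   # placeholder database
        user="username",       # placeholder user
        password="password",   # placeholder password
        sslmode="require",     # try "require" if pg_hba.conf only has hostssl entries for this host
    )
    print("Connected:", conn.get_dsn_parameters())
    conn.close()
except psycopg2.OperationalError as exc:
    # The same "no pg_hba.conf entry for host ..." message should show up here
    # if the server's pg_hba.conf has no line allowing this host/user/database.
    print("Connection failed:", exc)

If this fails with the same message, the usual fix is a matching host (or hostssl) line in pg_hba.conf on the Postgres server followed by a reload; the user and password are whatever role was created for that database, not anything Superset-specific.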
05-25-2018
01:54 PM
I am trying to add a Postgres database to Superset. This is my connection URI: postgresql://username:password@localhost/mydatabase. I keep getting the following error when I test the connection: No module named psycopg2!
I made sure I installed this module with pip install psycopg2 in the shell, but it still doesn't work!
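A minimal sketch of a check that often helps with this error, assuming Superset is launched from the same shell: confirm which Python interpreter is in use and whether psycopg2 is importable from it, since pip can install into a different environment than the one actually running Superset. The printout is only illustrative.

# Hedged sketch: check that psycopg2 is importable from the interpreter
# that actually runs Superset (pip may have installed it elsewhere).
import sys

print("Interpreter:", sys.executable)  # compare with the Python used to start Superset

try:
    import psycopg2
    print("psycopg2 found, version:", psycopg2.__version__)
except ImportError as exc:
    print("psycopg2 is NOT importable from this interpreter:", exc)

If the import fails here, installing with the same interpreter (for example python -m pip install psycopg2, or psycopg2-binary, inside the environment Superset runs in) is usually the missing step.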
Labels:
- Hortonworks Data Platform (HDP)
05-21-2018
10:31 AM
I was trying to run this line: val fraud = sc.textFile("hdfs://sandbox-hdp.hortonworks.com:8020/tmp/fraud.csv") but I keep getting the error below, although the same line works in spark-shell! (A possible workaround is sketched after the stack trace.)
java.io.IOException: Failed to create local dir in /tmp/blockmgr-c40d2915-3861-4bbe-8e1c-5eca677c552e/0e.
at org.apache.spark.storage.DiskBlockManager.getFile(DiskBlockManager.scala:70)
at org.apache.spark.storage.DiskStore.remove(DiskStore.scala:135)
at org.apache.spark.storage.BlockManager.removeBlockInternal(BlockManager.scala:1457)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:991)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029)
at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:792)
at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1350)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:122)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:56)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1488)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1037)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1029)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:701)
at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1029)
at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:832)
at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:830)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:701)
at org.apache.spark.SparkContext.textFile(SparkContext.scala:830)
... 48 elided
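A minimal workaround sketch, assuming the "Failed to create local dir" comes from Spark's scratch directory (/tmp by default) being full or not writable on the node running the Zeppelin interpreter: point spark.local.dir at a writable directory before the context is created. The snippet uses pyspark and an example path; in Zeppelin itself the SparkContext is created by the interpreter, so the same property would normally go into the Spark interpreter settings or spark-defaults rather than notebook code.

# Hedged sketch (pyspark): set Spark's scratch directory to a writable,
# non-full location before the SparkContext exists. "/hadoop/spark-local"
# is only an example path.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("fraud-csv-test")
        .set("spark.local.dir", "/hadoop/spark-local"))
sc = SparkContext(conf=conf)

# Same read as the Scala line above, through the Python API.
fraud = sc.textFile("hdfs://sandbox-hdp.hortonworks.com:8020/tmp/fraud.csv")
print(fraud.take(5))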
Labels:
- Apache Spark
- Apache Zeppelin
05-08-2018
04:51 PM
3. The output is:
05-08-2018
04:44 PM
Thank you, @Felix Albani! Actually, besides that error message, anything I try on Zeppelin with pyspark stays pending and never runs. So I've tried everything you mentioned:
1. Some of the log output I got:
INFO [2018-05-05 05:59:31,823] ({pool-1-thread-1} PySparkInterpreter.java[interrupt]:421) - Sending SIGINT signal to PID : 7627
WARN [2018-05-05 06:39:08,446] ({ResponseProcessor for block BP-32082187-172.17.0.2-1517480669419:blk_1073742956_2146} DFSOutputStream.java[run]:958) - DFSOutputStream ResponseProcessor exception for block BP-32082187-172.17.0.2-1517480669419:blk_1073742956_2146
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2468)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:849)
2. I've tried restarting it, but it still doesn't work (a minimal smoke test is sketched below).
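A minimal smoke-test sketch, assuming the goal is to see whether the pyspark interpreter starts at all: a Zeppelin paragraph that touches nothing in HDFS, so if even this stays pending the problem is the interpreter or its YARN container, not the data read. The paragraph is only illustrative.

%pyspark
# Hedged smoke test: no HDFS access, just the driver and one tiny job.
# If this also stays PENDING, the pyspark interpreter (or its YARN
# container) is not starting at all.
print(sc.version)                       # driver-side only
print(sc.parallelize(range(10)).sum())  # forces one small job on the executors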