09-11-2017
11:02 AM
Please provide the schema/database where the table is available. The error clearly says the table was not found in the default database. Go to beeline/Hive and run SHOW TABLES; by default, Spark looks in the default database.

org.apache.spark.sql.AnalysisException: Table not found: customer1; line 1 pos 14
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:305)
at

Thanks, Manu
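A minimal sketch of the check and fix, assuming a hypothetical database name `mydb` that actually holds the table:

```sql
-- Run in beeline/Hive to confirm where the table lives
SHOW DATABASES;
USE mydb;        -- hypothetical database name
SHOW TABLES;     -- customer1 should appear here

-- Then qualify the table name in the query you pass to Spark SQL
SELECT * FROM mydb.customer1;
```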
09-09-2017
09:27 PM
You can try this as well in Unix: beeline -u 'your jdbc connection string' --outputformat=csv2 -e "your query here" > output.csv Let me know if it helps. Thanks, Manu
09-09-2017
10:39 AM
Can you check two things: 1. There is no extra tab character in your DDL. 2. Does DECIMAL without precision work? As I recall, we specify it in the decimal(15,2) format. I am not able to test it right now; I can try tomorrow and update.
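As a sketch, a Hive DDL that spells out the precision and scale explicitly (table and column names here are hypothetical):

```sql
CREATE TABLE sales (
  id     INT,
  amount DECIMAL(15,2)  -- explicit precision (15) and scale (2)
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ',';
```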
09-09-2017
10:15 AM
Hi, could you please try creating an external table on top of the master directory, setting the Hive properties below, and then reading the table in HiveContext? SET mapred.input.dir.recursive=true; SET hive.mapred.supports.subdirectories=true; I hope this will work. Thanks, Manu
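A minimal sketch of the setup, assuming a hypothetical master directory `/data/master` whose data files sit in subdirectories (the table name and schema are also hypothetical):

```sql
-- Allow Hive/MapReduce input to recurse into subdirectories
SET mapred.input.dir.recursive=true;
SET hive.mapred.supports.subdirectories=true;

-- External table on top of the master directory
CREATE EXTERNAL TABLE master_data (
  id  INT,
  val STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/master';
```

The table can then be read from Spark with something like `hiveContext.sql("SELECT * FROM master_data")`.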