Member since: 07-25-2018 · Posts: 20 · Kudos Received: 1 · Solutions: 0
04-09-2020
08:26 AM
Presto now supports ACID tables, but only for Hive 3. However, the subdirectory exception comes from a configuration on the Presto side: in the hive.properties file in Presto's catalog directory, add hive.recursive-directories=true and restart Presto.
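For reference, a minimal sketch of such a catalog file; the connector name and metastore URI below are illustrative placeholders, not values from this thread:

```properties
# etc/catalog/hive.properties (Presto coordinator and workers)
connector.name=hive-hadoop2
hive.metastore.uri=thrift://example-metastore:9083
# Allow Presto to read data files located in subdirectories of the table location
hive.recursive-directories=true
```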
07-30-2018
08:48 PM
Hello @schhabra, thank you for your reply. I updated hbase-site.xml in two locations: /usr/hbase/conf/hbase-site.xml and /usr/phoenix/conf/hbase-site.xml. PQS and the HBase service were restarted after the change, but I still get the same error. The client and server are on the same EMR master EC2 node.
10-15-2018
12:24 PM
All columns are mapped as VARCHAR. Thanks.
06-06-2018
11:12 PM
1 Kudo
Hey @cskbhatt, did you try executing the following?
./bin/install-interpreter.sh --name jdbc --artifact org.apache.zeppelin:zeppelin-jdbc:0.7.3
Here are the links I found:
https://mvnrepository.com/artifact/org.apache.zeppelin/zeppelin-jdbc/0.7.3
https://zeppelin.apache.org/docs/0.7.3/manual/interpreterinstallation.html
Hope this helps!
05-30-2018
10:26 PM
As mentioned, create the Hive table with the Parquet SerDe. Syntax:
create table <dbname>.<tablename>(a string,b string,c string,d string,e string,f string,g string,h string,i string,j string)
ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe'
STORED AS
INPUTFORMAT "parquet.hive.DeprecatedParquetInputFormat"
OUTPUTFORMAT "parquet.hive.DeprecatedParquetOutputFormat";
05-28-2018
04:26 PM
1 Kudo
@cskbhatt, I assume the external table location is "hdfs://<emr node>:8020/poc/test_table/". This issue happens because hdfs://<emr node>:8020/poc/test_table/.metadata/descriptor.properties is not a Parquet file, but it exists inside the table folder. When Hive's ParquetRecordReader tries to read that file, it throws the exception above. Remove all non-Parquet files from the table location and retry your query.
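A sketch of how one might spot and remove the stray files, assuming the table path from the post above; the .metadata directory is likely created by a Kite-based import, and paths should be adjusted for your cluster:

```shell
# List everything under the table location and show entries that are not .parquet files
hdfs dfs -ls -R /poc/test_table/ | grep -v '\.parquet$'

# Remove the .metadata directory that the ParquetRecordReader trips over
hdfs dfs -rm -r /poc/test_table/.metadata
```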