Member since: 08-02-2016
Posts: 15
Kudos Received: 2
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9759 | 08-03-2016 09:35 PM |
08-03-2016 09:35 PM
I figured out what the problem was. It was the way I was creating the test data. I was under the impression that if I ran the following commands:

```sql
create table mydb.mytable1 (empno int, name VARCHAR(20), deptno int) stored as orc;
INSERT INTO mydb.mytable1(empno, name, deptno) VALUES (1, 'EMP1', 100);
INSERT INTO mydb.mytable1(empno, name, deptno) VALUES (2, 'EMP2', 50);
INSERT INTO mydb.mytable1(empno, name, deptno) VALUES (3, 'EMP3', 200);
```

the data would be written in ORC format at /apps/hive/warehouse/mydb.db/mytable1 with the column names intact. Turns out that's not the case. Even though I specified 'stored as orc', the ORC files produced by the INSERT statements don't carry the real column names. Not sure if that's expected behavior. In any case, it all works now. Apologies for the confusion, but hopefully this will help someone in the future :-)
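For anyone who hits ORC files like this, one workaround sketch (assuming you know the column order from the Hive DDL) is to rename the columns positionally after loading:

```scala
// The field names inside the ORC files may be _col0, _col1, _col2.
val raw = sqlContext.read.format("orc").load("/apps/hive/warehouse/mydb.db/mytable1")

// Rename positionally to the names from the DDL; the order must match.
val df = raw.toDF("empno", "name", "deptno")
df.printSchema() // should now show empno, name, deptno
```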
08-03-2016 02:21 AM
Tried it, but it doesn't work. The columns are still '_col0', '_col1', '_col2'.
08-02-2016 11:48 PM
Not sure I understand the answer. Do I need to run "select * from yourtable" to get the column names populated? Perhaps "select * from yourtable limit 1"? I can try this, but shouldn't the column names be populated from the metastore as soon as I do a 'load'?
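To spell out the distinction I'm asking about, here is a sketch (assuming the table mydb.mytable1 from my question exists in the metastore): going through the metastore should use the DDL column names, while loading the files directly only sees whatever field names sit in the ORC footers:

```scala
// Goes through the Hive metastore, so the DDL column names apply.
val viaMetastore = sqlContext.table("mydb.mytable1")
viaMetastore.columns.foreach(println) // expected: empno, name, deptno

// Reads the ORC files directly, so only the file-footer field names are available.
val viaFiles = sqlContext.read.format("orc").load("/apps/hive/warehouse/mydb.db/mytable1")
viaFiles.columns.foreach(println) // may print _col0, _col1, _col2
```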
08-02-2016 07:26 PM
When I run the following:

```scala
val df1 = sqlContext.read.format("orc").load(myPath)
df1.columns.foreach(println)
```

the columns are printed as '_col0', '_col1', '_col2', etc., as opposed to their real names such as 'empno', 'name', 'deptno'. When I run 'describe mytable1' in Hive it prints the column names correctly, but when I run 'orcfiledump' it shows _col0, _col1, _col2 as well:

```
hive --orcfiledump /apps/hive/warehouse/mydb.db/mytable1
.....
fieldNames:"_col0"
fieldNames:"_col1"
fieldNames:"_col2"
```

Do I have to specify 'schema on read' or something? If yes, how do I do that in Spark/Scala?
As suggested elsewhere, I've added '--files' BEFORE '--jars', as follows:

```bash
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class xxx.xxxx.MyDriver \
  --files hive-site.xml \
  --jars datanucleus-api-jdo-3.2.6.jar,datanucleus-core-3.2.10.jar,datanucleus-rdbms-3.2.9.jar \
  --name MyDriver \
  --num-executors 1 \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 1 \
  ./my-utils-1.0-SNAPSHOT.jar
```
Note: I created the table as follows:

```sql
create table mydb.mytable1 (empno int, name VARCHAR(20), deptno int) stored as orc;
```
Note: This is not a duplicate of this issue (Hadoop ORC file - How it works - How to fetch metadata), because the answer there tells me to use 'Hive', and I am already using HiveContext, as follows:

```scala
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
```
By the way, I am using my own hive-site.xml, which contains the following:

```xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://sandbox.hortonworks.com:9083</value>
  </property>
</configuration>
```
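To confirm that the metastore from hive-site.xml is actually being picked up, I run a quick sanity check like this (just a sketch of what I use, nothing authoritative):

```scala
// If the thrift metastore URI is in effect, the table created in Hive should be visible here.
sqlContext.sql("show tables in mydb").show()
sqlContext.sql("describe mydb.mytable1").show()
```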
Labels:
- Apache Hive
- Apache Spark