Member since: 04-25-2016
579 Posts · 609 Kudos Received · 111 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2331 | 02-12-2020 03:17 PM
| 1632 | 08-10-2017 09:42 AM
| 11108 | 07-28-2017 03:57 AM
| 2642 | 07-19-2017 02:43 AM
| 1954 | 07-13-2017 11:42 AM
05-24-2016
05:36 AM
3 Kudos
@Sai Satish You can achieve it this way:

create table aTable(a int, b array<String>);
insert into table aTable select 1, array('a','b') from dummyTable;

hive> select * from aTable;
OK
1	["a","b"]
1	["a","b"]

select a, expl_tbl from aTable LATERAL VIEW explode(b) exploded_table as expl_tbl;
Query ID = hive_20160524053317_27cb538e-43e1-436f-8744-ec53a0c9d3b2
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1463989024283_0004)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 1 1 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 01/01 [==========================>>] 100% ELAPSED TIME: 3.25 s
--------------------------------------------------------------------------------
OK
1	a
1	b
1	a
1	b
Time taken: 4.765 seconds, Fetched: 4 row(s)
05-19-2016
11:59 AM
1 Kudo
You can, but you need to change the port of one of the services.
05-19-2016
11:27 AM
2 Kudos
@R Wys There is no problem with Hive here; Hive has simply generated an execution plan with no reduce phase in your case. You can inspect the plan by running:

explain select * from myTable where daily_date='2015-12-29' limit 10;
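For context, a minimal sketch of why no reduce phase appears (myTable and daily_date are taken from the question above):

-- A simple filter plus LIMIT needs no aggregation, join, or sort,
-- so Hive can plan it without any reduce stage; in some cases it is
-- even served by a fetch-only task with no MapReduce job at all.
EXPLAIN
SELECT * FROM myTable
WHERE daily_date = '2015-12-29'
LIMIT 10;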
05-17-2016
05:24 AM
3 Kudos
This is a good reference for creating and submitting a Spark SQL job: https://databricks.gitbooks.io/databricks-spark-reference-applications/content/logs_analyzer/chapter1/sql.html. Hope it helps.
05-11-2016
10:21 AM
2 Kudos
I think it's unsupported: http://www.cloudera.com/documentation/enterprise/latest/topics/impala_file_formats.html
05-11-2016
09:48 AM
1 Kudo
These are some decks to learn about ACID transactions: http://www.slideshare.net/Hadoop_Summit/hive-does-acid. Hope they help.
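For a concrete starting point, a minimal sketch of an ACID-enabled table (the table name acid_demo is made up; Hive ACID in this era requires ORC storage, bucketing, and the transactional table property):

-- ACID tables must be stored as ORC, bucketed, and marked transactional.
create table acid_demo (id int, val string)
clustered by (id) into 4 buckets
stored as orc
tblproperties ('transactional'='true');

Transaction support must also be enabled on the server side (transaction manager and compactor settings) before inserts, updates, and deletes work against such a table.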
05-09-2016
06:04 AM
4 Kudos
Could you please try adding <job-xml>sqoop-site.xml</job-xml> into your action and see whether it works?
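A rough sketch of where the element sits inside a Sqoop action (the action name, properties, and command are placeholders, not from your workflow):

<action name="sqoop-node">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <job-xml>sqoop-site.xml</job-xml>
        <command>import --connect ${jdbcUrl} --table mytable --target-dir ${targetDir}</command>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
</action>

Note that <job-xml> must appear after <name-node> and before <configuration>/<command> for the workflow to validate.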
05-06-2016
04:21 PM
1 Kudo
It's better to export it as CSV or another delimited format and load it into a Hive table.
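For example, a minimal sketch of loading a comma-delimited export (table, columns, and file path are made up for illustration):

-- Delimited text table matching the exported CSV layout.
create table mytable (id int, name string)
row format delimited
fields terminated by ','
stored as textfile;

-- Load the exported file from the local filesystem into the table.
load data local inpath '/tmp/export.csv' into table mytable;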
05-06-2016
11:36 AM
@Amit Dass Can you try this: ALTER TABLE jsont1 SET LOCATION "hdfs://mycluster:8020/jsam/j1";
05-06-2016
10:06 AM
Can you post the full DFS location from the output of the command below?

describe formatted jsont1;