Member since: 12-10-2015
Posts: 58
Kudos Received: 24
Solutions: 6
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1762 | 02-17-2016 04:12 AM |
 | 2947 | 02-03-2016 05:15 AM |
 | 1633 | 01-27-2016 09:13 AM |
 | 4136 | 01-27-2016 07:00 AM |
 | 2102 | 01-02-2016 03:29 PM |
11-08-2016
06:35 AM
1 Kudo
I am just wondering whether anybody has come across a scenario where you need to import or read data from Excel into Hadoop. Is there such a thing as a Flume Excel source? BTW, I know I can convert the Excel file to CSV and then deal with it; I'm really just trying to explore Flume sources a bit further here.
Labels: Apache Flume
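In case it helps anyone reading this later: as far as I know there is no built-in Flume Excel source, so the usual route is the CSV conversion mentioned above. A minimal agent sketch for that route, assuming a spooling directory of /data/incoming/csv and an HDFS target path (both hypothetical):

```
# Hypothetical agent name and paths; adjust to your environment
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Watch a local directory for the CSV files exported from Excel
agent1.sources.src1.type = spooldir
agent1.sources.src1.spoolDir = /data/incoming/csv
agent1.sources.src1.channels = ch1

agent1.channels.ch1.type = memory

# Write the events into HDFS as plain text
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/user/flume/excel_data
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.channel = ch1
```

The agent would then be started with something like `flume-ng agent --conf conf --conf-file excel-csv.conf --name agent1`.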
02-17-2016
04:12 AM
I completed this task by downloading the hwi.*.war file from Hive 0.12, as I didn't find it in 0.13 and 0.14.
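For anyone hitting the same gap, a rough sketch of the remaining steps with hypothetical paths (the HDP 2.2 Hive lib directory, war file name, and Ant location will vary per install); hive.hwi.war.file in hive-site.xml has to point at the war you copied, since the default expects a version-matched file name:

```
# Hypothetical locations; copy the downloaded war where hive.hwi.war.file can find it
cp hive-hwi-0.12.0.war /usr/hdp/current/hive-client/lib/

# Older HWI builds compile JSPs at runtime and expect Ant's libraries to be available
export ANT_LIB=/usr/share/ant/lib

# Start the Hive Web Interface (defaults: hive.hwi.listen.host=0.0.0.0, hive.hwi.listen.port=9999)
hive --service hwi
```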
02-15-2016
08:41 AM
1 Kudo
What are the prerequisites for starting HWI (Hive Web Interface) on HDP 2.2?
Labels: Apache Hive
02-09-2016
04:02 PM
I installed the Hive ODBC Driver for HDP 2.2 on my Windows 7 machine and am trying to connect to Hive through ODBC (Hadoop is installed on CentOS). I encountered the following error. The configs are all defaults; for example, authentication for HiveServer2 is "none" (the default). Is there anything I missed? I followed the Hortonworks documentation. I gave the server IP, and the port is 10000. I assume HiveServer2 is running because the following Beeline command works:
beeline -u jdbc:hive2://ip:10000
Labels: Apache Hive
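A couple of quick checks that usually narrow this down (the host and port below are placeholders): confirm HiveServer2 is actually listening on the Thrift port from the CentOS box, and confirm the port is reachable from the Windows machine, since Beeline running locally on the server does not prove the firewall allows remote connections.

```
# On the CentOS server: is HiveServer2 listening on 10000?
netstat -tlnp | grep 10000

# From the Windows 7 client: can the port be reached at all?
telnet <server-ip> 10000

# Mirror what the ODBC DSN will do, but from Beeline with an explicit database and user
beeline -u "jdbc:hive2://<server-ip>:10000/default" -n hive
```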
02-03-2016
07:01 AM
As @Gangadhar Kadam said, it has a problem in 0.13 but works fine in 0.14.
02-03-2016
05:15 AM
1 Kudo
The configuration variable "sqoop.export.records.per.statement" can be set to 1 as a workaround for this problem. https://issues.apache.org/jira/browse/SQOOP-314
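For illustration, the property can be passed as a generic Hadoop option on the export command; the connection string, credentials, table, and directory below are placeholders:

```
# -D generic options must come immediately after the tool name
sqoop export \
  -Dsqoop.export.records.per.statement=1 \
  --connect jdbc:mysql://dbhost/mydb \
  --username dbuser -P \
  --table mytable \
  --export-dir /user/hive/warehouse/mytable
```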
01-29-2016
03:35 PM
Yeah, @Artem Ervits, I got your point. Simple but logical.
01-29-2016
10:31 AM
Hi all, according to my requirement I need a script like the following:
A = load '/bsuresh/sample' USING PigStorage(',') as (id,name,sal,deptid);
B = GROUP A by deptid;
C = foreach B {
D = A.name,A.sal; -- two fields
E = DISTINCT D;
generate group,COUNT(E);
};
In relation 'D' I am extracting two fields, and that is exactly where I am facing the error. If I change the script as follows, it works fine:
C = foreach B {
D = A.name; -- one field
E = DISTINCT D;
generate group,COUNT(E);
};
But I need the count based on the distinct of two columns. Can anyone help me?
Labels: Apache Pig
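One way to get a distinct count over two columns, sketched against the same relation names as the question above, is to project both fields from the bag with the bag.(field, field) syntax before applying DISTINCT:

```
-- Sketch only; reuses the A/B relations from the question
A = load '/bsuresh/sample' USING PigStorage(',') as (id,name,sal,deptid);
B = GROUP A by deptid;
C = foreach B {
    D = A.(name, sal);        -- project both columns from the bag
    E = DISTINCT D;           -- distinct over (name, sal) pairs
    generate group, COUNT(E); -- count of distinct pairs per deptid
};
dump C;
```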
01-27-2016
09:16 AM
Use an ISO time pattern instead of the dd-MMM-yyyy pattern from my code.
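For example (a sketch reusing the emp relation from the post below, assuming the raw data really does carry ISO-style dates such as 2016-01-27):

```
-- ISO pattern instead of dd-MMM-yyyy
each_date = foreach emp generate ToDate(hiredate, 'yyyy-MM-dd') as mydate;
```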
01-27-2016
09:13 AM
2 Kudos
All the comments mentioned here are correct; here is a small example:
emp = load 'data' using PigStorage(',') as (empno, ename, job, mgr, hiredate, sal, comm, deptno);
each_date = foreach emp generate ToDate(hiredate, 'dd-MMM-yyyy') as mydate; -- parse the hire date string into a datetime
subt = foreach each_date generate mydate, SubtractDuration(mydate, 'PT1M'); -- subtract one minute (ISO 8601 duration)
dump subt;