Member since: 05-29-2017
Posts: 408
Kudos Received: 123
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2786 | 09-01-2017 06:26 AM |
| | 1698 | 05-04-2017 07:09 AM |
| | 1460 | 09-12-2016 05:58 PM |
| | 2061 | 07-22-2016 05:22 AM |
| | 1626 | 07-21-2016 07:50 AM |
03-07-2016
07:47 AM
1 Kudo
@Neeraj Sabharwal: I have checked, and the table does have data; the same query runs fine with the MR execution engine.
03-07-2016
06:53 AM
1 Kudo
Yes @Neeraj Sabharwal: I am using Hive 1.2.1, and per the JIRA this seems to have been fixed in Hive 1.2.1 and later, but I am still getting this error.
03-07-2016
06:27 AM
2 Kudos
Hello friends, can you please help me understand the error below? When I do a UNION ALL of two tables and create a table from the result of the join, I get the following error on the Tez execution engine:

```
Failed with exception MetaException(message:Invalid partition key & values; keys [feed_date, ], values [])
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
```

But when I run the same statement on MR, it works fine. I am using HDP 2.3.

SQL:

```
hive> INSERT OVERWRITE TABLE clickstream_kpis.cs_android_event_lookup_tbl PARTITION (feed_date)
    > SELECT a.*
    > FROM
    >   (SELECT hitid_high,
    >           hitid_low,
    >           event_number,
    >           feed_date
    >    FROM clickstream_db.clickstream_android LATERAL VIEW explode(SPLIT(post_event_list, ',')) expld AS event_number
    >    WHERE feed_date BETWEEN '2016-01-01' AND '2016-01-31'
    >      AND post_event_list IS NOT NULL
    >    UNION ALL
    >    SELECT hitid_high,
    >           hitid_low,
    >           post_event_list AS event_number,
    >           feed_date
    >    FROM clickstream_db.clickstream_android
    >    WHERE feed_date BETWEEN '2016-01-01' AND '2016-01-31'
    >      AND post_event_list IS NULL) a;
Query ID = hdpbatch_20160304145048_5262fb30-6ed6-4a7c-ad5d-ca30a1bc57d6
```
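A hedged sketch of settings worth checking for this symptom, not from the original thread: a dynamic-partition INSERT like the one above depends on the settings below, and the MR engine can be forced for just this statement, which matches the path the poster reports as working.

```bash
# A minimal sketch (not from the thread): rerun the same INSERT with dynamic
# partitioning explicitly enabled, falling back to MR for this one statement.
hive -e "
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.execution.engine=mr;
-- ... the INSERT OVERWRITE ... PARTITION (feed_date) statement from above ...
"
```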
Labels:
- Apache Hadoop
- Apache Hive
- Apache Tez
03-04-2016
07:10 PM
1 Kudo
@Shishir Saxena: Thanks for the reply. When I tried the location above, it failed as expected:

```
[root@m1 ~]# hadoop fs -ls jceks://hdfs/user/
ls: No FileSystem for scheme: jceks
```

But when I listed my user directory inside HDFS, the file showed up:

```
[root@m1 ~]# hadoop fs -ls /user/root/
Found 6 items
drwxr-xr-x   - root hdfs     0 2016-01-25 23:30 /user/root/.hiveJars
drwx------   - root hdfs     0 2016-02-29 04:31 /user/root/.staging
drwxr-xr-x   - root hdfs     0 2016-02-24 18:16 /user/root/OozieTest
-rwxr-xr-x   3 root hdfs  1484 2016-02-03 21:19 /user/root/Output.json
-rwx------   3 root hdfs   504 2016-03-02 04:14 /user/root/mysql.password.jceks
```

And `hadoop fs -cat /user/root/mysql.password.jceks` prints only the encrypted keystore bytes (a serialized javax.crypto.SealedObject), so nothing readable leaks. That answers my question. Thanks once again.
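A small follow-up sketch: the credential CLI can read the provider through the jceks:// scheme even though `hadoop fs` cannot, so you can verify the stored aliases (but never the passwords themselves) without touching the raw file.

```bash
# List the aliases stored in the provider; this resolves the jceks:// scheme,
# unlike `hadoop fs -ls`. It prints alias names only, never the secrets.
hadoop credential list -provider jceks://hdfs/user/root/mysql.password.jceks
```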
03-04-2016
07:33 AM
2 Kudos
The long-awaited feature is finally here, and I am very happy to see that we now have a way to secure and encrypt passwords in Sqoop: "As of Sqoop 1.4.5, Sqoop supports the use of JAVA Key Store to store passwords, so that you do not need to store passwords in clear text in a file."

```
[root@m1 ~]# hadoop credential create mydb.password.alias -provider jceks://hdfs/user/root/mysql.password.jceks
Enter password:
Enter password again:
mydb.password.alias has been successfully created.
org.apache.hadoop.security.alias.JavaKeyStoreProvider has been updated.
```

But I have a few questions:
1. Where is mydb.password.alias saved: on the local machine or in HDFS?
2. When we schedule Sqoop jobs in Oozie, Falcon, or cron, do we need to create a key for whichever user's home directory runs the jobs?
3. Can we see the content of mydb.password.alias?
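A hedged usage sketch relevant to the questions above (the JDBC URL, username, and table name are placeholders, not from this thread): the alias lives inside the jceks file on HDFS, and a Sqoop job references it by pointing at the provider path and using `--password-alias` instead of a clear-text password.

```bash
# A minimal sketch (JDBC URL, username, and table are placeholders): point
# Sqoop at the HDFS-backed credential provider and reference the alias
# instead of passing a clear-text password.
sqoop import \
  -Dhadoop.security.credential.provider.path=jceks://hdfs/user/root/mysql.password.jceks \
  --connect jdbc:mysql://db.example.com/mydb \
  --username myuser \
  --password-alias mydb.password.alias \
  --table mytable
```

Because the provider path is on HDFS, any user or scheduler (Oozie, Falcon, cron) that can read that HDFS file can resolve the alias, which bears on question 2.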
03-03-2016
12:01 PM
2 Kudos
@John D.: HCC provides very good and handy documentation for getting started with Hadoop. You can also visit http://www.hadoopadmin.co.in/ to get started.
03-02-2016
02:22 PM
1 Kudo
@Neeraj Sabharwal: Thanks for your support. I found the issue: there was a misconfiguration in the hdfs-site.xml file. I had not added the target cluster's HA properties to the client hdfs-site.xml, and that is why it was failing. It is working fine now.
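For reference, a minimal sketch of the kind of target-cluster HA properties that must be merged into the client-side hdfs-site.xml (hostnames and ports below are placeholders, not from this thread; the nameservice names come from the distcp command shown later):

```bash
# A minimal sketch (hostnames/ports are placeholders): properties to merge into
# the <configuration> block of the client-side conf/hdfs-site.xml so the client
# can resolve the remote HA nameservice HDPTSTHA.
cat <<'EOF'
<property><name>dfs.nameservices</name><value>HDPINFHA,HDPTSTHA</value></property>
<property><name>dfs.ha.namenodes.HDPTSTHA</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.HDPTSTHA.nn1</name><value>tst-nn1.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.HDPTSTHA.nn2</name><value>tst-nn2.example.com:8020</value></property>
<property><name>dfs.client.failover.proxy.provider.HDPTSTHA</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
EOF
```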
03-01-2016
12:39 PM
@Artem Ervits: I tried with an external dir as well, but I am getting the error below:

```
[s0998dnz@lxhdpmastinf001 ~]$ hadoop --config conf/ distcp hdfs://HDPINFHA/user/s0998dnz/sampleTest.txt hdfs://HDPTSTHA/user/root/
16/03/01 07:40:35 ERROR tools.DistCp: Invalid arguments:
java.lang.IllegalArgumentException: java.net.UnknownHostException: HDPTSTHA
	at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:406)
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
```
02-29-2016
12:58 PM
@Artem Ervits: When I changed dfs.nameservices to include both clusters, I was no longer able to restart the HDFS services:

```
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://m1.hdp22:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'' returned status_code=403.
{
  "RemoteException": {
    "exception": "StandbyException",
    "javaClassName": "org.apache.hadoop.ipc.StandbyException",
    "message": "Operation category READ is not supported in state standby"
  }
}
```
02-29-2016
12:34 PM
1 Kudo
@Neeraj Sabharwal: I followed the same steps but am still getting the same error.