Member since 10-28-2015
61 Posts
10 Kudos Received
7 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1542 | 09-25-2017 11:22 PM
 | 5727 | 09-22-2017 08:04 PM
 | 5205 | 02-03-2017 09:28 PM
 | 3657 | 05-10-2016 05:04 AM
 | 1027 | 05-04-2016 08:22 PM
05-10-2016
05:04 AM
@Pradeep kumar When an application is currently running, it will not be available in the JobHistory UI until it finishes; as you correctly identified, its tracking URL will point to the "Application Master". Once it finishes, the link will point to the history server. After a MapReduce job completes, its logs are written to HDFS under the directory specified by mapreduce.jobhistory.intermediate-done-dir. The history server continuously scans that intermediate directory, picks up any new logs, and copies them to the directory specified by mapreduce.jobhistory.done-dir.
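For reference, both properties live in mapred-site.xml. A minimal sketch, with example paths that are common defaults but may differ on your cluster:
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/mr-history/tmp</value>   <!-- example path: where finished jobs land first -->
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value>  <!-- example path: where the history server keeps them long term -->
</property>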
05-04-2016
08:22 PM
@Bindu Nayidi You can edit the corresponding log4j configuration from Ambari: Ambari -> <Service> -> Configs -> Advanced <service>-log4j.
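As an illustration only, a typical change in that block is raising the level for a single package rather than the whole daemon; these lines follow standard log4j property syntax, adjust the package name to your case:
# keep the daemon's root logger as-is
hadoop.root.logger=INFO,RFA
# example: DEBUG output for one package only
log4j.logger.org.apache.hadoop.hdfs=DEBUG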
05-02-2016
11:07 PM
Hi @simran kaur Change oozie.libpath=hdfs://serverFQDN:8020/user/oozie/share/lib to oozie.libpath=hdfs://serverFQDN:8020/user/oozie/shared/lib, since, as you mentioned, your Oozie lib is under the /user/oozie/shared/lib directory.
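In job.properties that would look roughly like this (serverFQDN and the path come from your environment; oozie.use.system.libpath is shown only as the usual companion setting):
nameNode=hdfs://serverFQDN:8020
oozie.libpath=${nameNode}/user/oozie/shared/lib
oozie.use.system.libpath=true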
05-02-2016
11:02 PM
As others explained, it depends on the number of containers available on your cluster.
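As a rough, made-up illustration of how container capacity falls out of the YARN memory settings:
yarn.nodemanager.resource.memory-mb = 65536   (64 GB per node, hypothetical)
mapreduce.map.memory.mb             = 4096    (4 GB per map container, hypothetical)
containers per node  ~ 65536 / 4096 = 16
10-node cluster      ~ 16 * 10      = 160 concurrent map containers (vcores and other jobs permitting)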
05-02-2016
06:25 PM
Adding HADOOP_TOKEN_FILE_LOCATION resolved the issue:
-D mapreduce.job.credentials.binary=$HADOOP_TOKEN_FILE_LOCATION
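The same property can also be set from the driver code; a minimal sketch, assuming conf is the Configuration the job already uses:
// HADOOP_TOKEN_FILE_LOCATION is exported by the Oozie launcher; hand its delegation
// tokens to the submitted MapReduce job instead of attempting a fresh Kerberos login
conf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION"));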
04-29-2016
07:47 PM
Does your Oozie sharelib in HDFS contain the Sqoop-related dependencies?
If not, set up the sharelib following this link.
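If you need to (re)create it, the commands are along these lines on an HDP Oozie server host (paths and hostnames are placeholders; run as the oozie user):
# build/refresh the sharelib in HDFS
/usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs hdfs://<namenode>:8020
# tell the running Oozie server to pick up the new sharelib
oozie admin -oozie http://<oozie-host>:11000/oozie -sharelibupdate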
04-29-2016
07:39 PM
1 Kudo
You are not supposed to edit config files manually; use Ambari to make the required change. If you can't see that property in the corresponding service, you can add a new property under the "Custom <file-name>" section.
04-29-2016
07:32 PM
You might be right. In my previous experience with HBase (with high write-throughput requirements), every time the client timed out and was not able to re-establish a connection until the major compaction was over. (To be precise, the connection was not blocked or lost as soon as the major compaction started, but it gradually died and clients were not able to reconnect until the compaction finished.) It might be a side effect.
04-28-2016
06:54 PM
Oozie java action is failing with the error below. We have tried passing credentials from Oozie as well as principal/keytab-based authentication from the driver using the config below:
conf.set("hadoop.security.authentication", "kerberos");
conf.set("java.security.krb5.conf", "/etc/krb5.conf");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("principal", "keytab");
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, java.io.IOException: Can't get Master Kerberos principal for use as renewer
org.apache.oozie.action.hadoop.JavaMainException: java.io.IOException: Can't get Master Kerberos principal for use as renewer
at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:58)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
Labels:
- Apache HBase
- Apache Oozie
04-22-2016
09:14 PM
What is the exact error you are getting? Can you access that table from the Hive shell/Beeline as the same user? If not, check HBase ACLs or Ranger permissions (if Ranger is enabled for HBase). Also check the Hive and HBase logs for errors/exceptions when you connect.
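Two quick checks, sketched with placeholder names (<jdbc-url> and <table> are yours):
# from Beeline, connect as the same user the application uses
beeline -u "<jdbc-url>" -e "select * from <table> limit 1;"
# from the HBase shell, inspect the ACLs on the backing HBase table
hbase shell
user_permission '<table>'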