Member since: 09-25-2015
Posts: 109
Kudos Received: 36
Solutions: 8

My Accepted Solutions

| Title | Views | Posted |
| --- | --- | --- |
|  | 2842 | 04-03-2018 09:08 PM |
|  | 3978 | 03-14-2018 04:01 PM |
|  | 11109 | 03-14-2018 03:22 PM |
|  | 3171 | 10-30-2017 04:29 PM |
|  | 1596 | 10-17-2017 04:49 PM |
02-19-2016
05:11 PM
@Artem Ervits yes, defaultFS2=hdfs://... is set in job.properties.
02-19-2016
03:45 PM
@Artem Ervits how would oozie.launcher.mapreduce.job.hdfs-servers resolve ${defaultFS2}? From job.properties I am passing e.g. hdfs://xyz, where dfs.nameservices = xyz. This xyz is not a host name, hence the error "Caused by: java.net.UnknownHostException: xyz". Apologies for the confusion. I do have all the variables in job.properties replaced to match the cluster.
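For reference, a minimal job.properties sketch of the shape these variables typically take; the nameservice IDs, NameNode IDs, FQDNs and paths below are placeholders rather than the actual cluster values:
# job.properties (illustrative values only)
nameService1=ns1
nameService2=xyz
defaultFS1=hdfs://ns1
defaultFS2=hdfs://xyz
nn21=nn1
nn22=nn2
nn21_fqdn=namenode1.cluster2.example.com
nn22_fqdn=namenode2.cluster2.example.com
jobTracker=resourcemanager.cluster1.example.com:8050
workDir=tmp/hfile-export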
02-16-2016
09:14 PM
2 Kudos
In two Kerberized clusters in the same REALM, both with NameNode HA enabled, how do I resolve ${nameservice2} in oozie.launcher.mapreduce.job.hdfs-servers without changing the Oozie server configuration? The workflow runs against defaultFS1. Workflow:
<workflow-app xmlns='uri:oozie:workflow:0.3' name='shell-wf'>
    <start to="copyHFilesToRemoteClusterAction"/>
    <action name="copyHFilesToRemoteClusterAction">
        <distcp xmlns="uri:oozie:distcp-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${defaultFS1}</name-node>
            <configuration>
                <property>
                    <name>dfs.nameservices</name>
                    <value>${nameService1},${nameService2}</value>
                </property>
                <property>
                    <name>dfs.ha.namenodes.${nameService2}</name>
                    <value>${nn21},${nn22}</value>
                </property>
                <property>
                    <name>dfs.namenode.rpc-address.${nameService2}.${nn21}</name>
                    <value>${nn21_fqdn}:8020</value>
                </property>
                <property>
                    <name>dfs.namenode.rpc-address.${nameService2}.${nn22}</name>
                    <value>${nn22_fqdn}:8020</value>
                </property>
                <property>
                    <name>dfs.client.failover.proxy.provider.${nameService2}</name>
                    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
                </property>
                <property>
                    <name>oozie.launcher.mapreduce.job.hdfs-servers</name>
                    <value>${defaultFS1},${defaultFS2}</value>
                </property>
            </configuration>
            <arg>${defaultFS1}/${workDir}</arg>
            <arg>${defaultFS2}/${workDir}</arg>
        </distcp>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
Error: 2016-02-16 15:37:30,276 WARN ActionStartXCommand:546 - USER[bborah1] GROUP[-] TOKEN[] APP[shell-wf] JOB[0000xyz-000000000000000-oozie-oozi-W] ACTION[0000xyz-000000000000000-oozie-oozi-W@copyHFilesToRemoteClusterAction] Error starting action [copyHFilesToRemoteClusterAction].
ErrorType [TRANSIENT], ErrorCode [JA001], Message [JA001: ${nameservice2}]
org.apache.oozie.action.ActionExecutorException: JA001: ${nameservice2}
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:412)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:392)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:980)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1135)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:281)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:323)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:252)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.UnknownHostException: ${nameservice2}
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:725)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:462)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:965)
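A quick sanity check outside Oozie: a plain HDFS client call against the remote logical nameservice fails with the same UnknownHostException whenever that nameservice is not defined in the client's hdfs-site.xml (xyz below stands for the remote nameservice ID used above):
# Fails with java.net.UnknownHostException: xyz if the client configuration does not define the nameservice
hadoop fs -ls hdfs://xyz/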
Labels:
- Apache Hadoop
- Apache Oozie
02-12-2016
03:42 PM
@Matthew Sharp Yes I have. Ambari would not know about the load balancer details and hence would not update/append the HTTP/<loadbalancer_hostname>@<realm> principal to the spnego.service.keytab.
12-21-2015
08:23 PM
1 Kudo
Follow the instructions in the documentation: http://docs.hortonworks.com/HDPDocuments/Ambari-2.... In addition, in a Kerberized environment:
- Create a new AD account for HTTP/<loadbalancer_hostname>@<realm>.
- Append the keytab for that AD account to spnego.service.keytab on all hosts running Oozie servers referenced by the load balancer. The keytab entries can be appended as follows:
ktutil
addent -password -p HTTP/<loadbalancer_hostname>@<realm> -k 1 -e rc4-hmac
wkt /etc/security/keytabs/spnego.service.keytab
Verify the result:
klist -ekt spnego.service.keytab
Keytab name: FILE:spnego.service.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
...
   1 12/17/15 14:45:02 HTTP/<loadbalancer_hostname>@<realm> (arcfour-hmac)
After the keytabs are updated, restart the Oozie service from the Ambari UI.
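As an additional check on the merged keytab (the path and principal are the same ones used in the steps above), the new entry can be tested directly with kinit:
# Obtain a ticket using the newly appended load-balancer principal
kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/<loadbalancer_hostname>@<realm>
klist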
11-21-2015
08:26 PM
1 Kudo
@Neeraj Sabharwal This is great for fetching all tables from MySQL. However, to avoid the error "FAILED: SemanticException [Error 10001]: Table not found <Table Name>", the following works, since it qualifies each table name with its database:
mysql -u hive -p -e "select concat('show create table ', T.NAME, '.', T.TBL_NAME, ';') from (select DBS.NAME, TBLS.TBL_NAME from TBLS left join DBS on TBLS.DB_ID = DBS.DB_ID) T" hive > /tmp/file.ddl
## remove the header line from /tmp/file.ddl
hive -f /tmp/file.ddl > /tmp/create_table.ddl
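One way to drop that header line before running the file through Hive (tail -n +2 would work just as well):
# Strip the first line (the MySQL column header) in place
sed -i '1d' /tmp/file.ddl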
11-21-2015
02:51 AM
Hi @Deepesh, yes, I was finally able to migrate using similar steps of database backup, restore, and repointing. I will update the steps soon. This is only applicable in a clean environment; it does not seem to be a clean approach when there are already pre-existing databases and tables in Hive. We should have a way to extract DDL scripts for all tables and databases, either from MySQL or Hive, and run them on the new Hive environment.
11-21-2015
02:31 AM
Yes @Pradeep, you are right. It is HDP 2.3.2 and Hive 1.2.1.
11-20-2015
10:04 PM
2 Kudos
We have an old cluster with HDP 2.0.6 (Hive 0.12.0) and a new cluster with HDP 2.3.2 (Hive 1.2.1). After forklifting the data, we need to query the same Hive tables in the new environment exactly as in the old one (meaning database names, partitioning, etc. must keep working). We migrated all the HDFS data over using distcp.
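An illustrative form of that copy, with placeholder NameNode addresses and warehouse path rather than the real cluster values:
# Copy the Hive warehouse directory from the old cluster to the new one
hadoop distcp hdfs://old-nn.example.com:8020/apps/hive/warehouse hdfs://new-nn.example.com:8020/apps/hive/warehouse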
Labels:
- Apache Hadoop
- Apache Hive
11-04-2015
04:35 PM
Hi Vladimir, I am trying to bind Oozie to a specific IP. After updating "OOZIE_HTTP_HOSTNAME" and restarting Oozie, I still do not see Oozie bound to that IP:
netstat -pltn | grep 11000
tcp 0 0 0.0.0.0:11000 0.0.0.0:* LISTEN 22072/java
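One quick check is whether the variable is actually exported in the Oozie environment script; the path below is the usual Ambari-managed location and may differ on other setups:
# Confirm OOZIE_HTTP_HOSTNAME is present in the Oozie environment script, then re-check the listener
grep OOZIE_HTTP_HOSTNAME /etc/oozie/conf/oozie-env.sh
netstat -pltn | grep 11000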