Member since: 12-11-2015
Posts: 67
Kudos Received: 10
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1260 | 05-11-2016 04:36 PM |
| | 1666 | 01-28-2016 10:42 PM |
03-14-2019
03:31 PM
Hi, can anyone help resolve this issue? It's very urgent, as an OS upgrade is waiting on it.
03-14-2019
03:31 PM
Hi Jay Kumar SenSharma, can you please address my issue with host recovery in Ambari after the OS was upgraded from RHEL 6 to RHEL 7? https://community.hortonworks.com/questions/242900/recover-host-on-ambari-26-creating-only-kerberos-r.html
03-13-2019
09:25 PM
Hi, I am trying to upgrade the OS from RHEL 6 to RHEL 7. As part of it, I have migrated Ambari and Kerberos to the RHEL 7 server and am trying to use the Recover Host option in Ambari. It only performs Kerberos-related work in the preparing operations and does not recover any existing service. Below is the hostcomponentstate row in Ambari:
| 2 | 2 | NAMENODE | 2.6.0.3-8 | INSTALLED | 2 | HDFS | NONE | SECURED_KERBEROS
This is very important for migrating the OS from RHEL 6 to 7. Please help.
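For context, a hedged sketch of how that row can be read from the Ambari database; the column names are an assumption based on the Ambari 2.6 schema, and the host_id value is taken from the row above:
-- Hypothetical query against the Ambari DB; columns assumed to be
-- (id, cluster_id, component_name, version, current_state, host_id, service_name, upgrade_state, security_state)
SELECT component_name, version, current_state, security_state
FROM hostcomponentstate
WHERE host_id = 2;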
Labels:
- Apache Ambari
10-22-2018
04:50 PM
Hi, the problem is resolved. The NiFi service is visible under the Service Manager list after adding the config below under custom-ranger-admin-site.xml:
ranger.supportedcomponents=hive,hbase,hdfs,knox,yarn,solr,kafka,atlas,nifi,storm
After restarting Ranger, NiFi is created in the Ranger UI, and then after enabling the NiFi plugin, the NiFi service definition is created in Ranger as cluster_nifi.
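For anyone applying the same fix, the property as it goes into Ambari (the menu path is from memory and may differ slightly between Ambari versions):
# Ambari UI > Ranger > Configs > Advanced > Custom ranger-admin-site > Add Property
ranger.supportedcomponents=hive,hbase,hdfs,knox,yarn,solr,kafka,atlas,nifi,storm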
10-16-2018
07:52 PM
Hi, I am using HDP-2.6.0.3, Ambari-2.6.1.5, and HDP-3.0.2, and I am managing both HDP versions on the same Ambari. I am trying to enable the Ranger NiFi plugin. I can see NiFi under the plugin list in the Ambari Ranger service, but I do not see the NiFi service under the Service Manager list in the Ranger UI. Due to this, when I restart NiFi after enabling the NiFi plugin, it throws the error below:
Nifi Repository creation failed in Ranger admin
The Ranger version is 0.7 and the NiFi version is 1.2. Please help.
Labels:
- Apache NiFi
- Apache Ranger
07-11-2018
07:07 PM
Hi Chen, can you please share the full deck of Ansible scripts? My mail id: gvfriend2003@gmail.com. I greatly appreciate your effort in creating the Ansible scripts for cluster automation. Thanks, Venkat
04-26-2017
06:07 PM
Hey, we are using HDP-2.3.6. We are getting the error below after configuring Ranger to store audits in Solr.
2017-04-25 09:16:23,366 WARN [org.apache.ranger.audit.queue.AuditBatchQueue0]: provider.BaseAuditHandler (BaseAuditHandler.java:logFailedEvent(374)) - failed to log audit event: {"repoType":3,"repo":"hdpt01_hive","reqUser":"hadooptest","evtTime":"2017-04-25 09:16:21.124","access":"USE","resType":"@null","action":"SHOWDATABASES","result":1,"policy":6,"enforcer":"ranger-acl","sess":"06802e00-eda7-4bd2-a812-7e2ed2621e24","cliType":"HIVESERVER2","cliIP":"","reqData":"show schemas","agentHost":"hivehost","logType":"RangerAudit","id":"d8b3d307-0035-4613-a7ff-872fa1c46a9e","seq_num":0,"event_count":1,"event_dur_ms":0}
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hostname:8886/solr/ranger_audit: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/ranger_audit/update. Reason:
<pre> Authentication required</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
The cluster is kerberized. The error below is seen when accessing the ambari-infra-solr UI at http://hostname:8886/solr/
GSSException: Failure unspecified at GSS-API level (Mechanism level: Specified version of key is not available (44))
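The "Specified version of key is not available (44)" message usually points to a kvno mismatch between the keytab and what the KDC currently issues. A hedged check, assuming MIT Kerberos client tools and the standard HDP SPNEGO keytab path (both assumptions):
# Hypothetical keytab path; substitute the keytab your Solr/Ambari Infra host actually uses.
klist -kte /etc/security/keytabs/spnego.service.keytab          # kvno stored in the keytab
kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/$(hostname -f)
kvno HTTP/$(hostname -f)                                        # kvno the KDC currently issues
If the two numbers differ, regenerating the keytabs (for example via Ambari's Regenerate Keytabs action) typically clears this GSSException.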
04-26-2017
06:07 PM
Hi, I am getting the error below after configuring Ranger to use Solr for audits. We are using HDP-2.3.6.
2017-04-26 09:29:22,353 [http-bio-6080-exec-2] ERROR org.apache.ranger.solr.SolrUtil (SolrUtil.java:79) - Error from Solr server.
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hostname:8886/solr/ranger_audit: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/ranger_audit/select. Reason:
<pre> Authentication required</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
I see the error below when I access the Solr UI at http://hostname:8886/solr/
GSSException: Failure unspecified at GSS-API level (Mechanism level: Specified version of key is not available (44))
The cluster is kerberized.
06-02-2016
06:40 PM
Any update on the above error?
05-27-2016
02:33 PM
Hi Sri, thanks for the response. It did work. Thank you so much for your help. I accept this answer.
05-26-2016
09:12 PM
HBase master log:
lp.bcbsa.com:2181 sessionTimeout=30000 watcher=master:160000x0, quorum=mdcthdpdas10lp.bcbsa.com:2181, baseZNode=/hbase-secure1
2016-05-26 16:11:43,494 WARN [main-SendThread(mdcthdpdas10lp.bcbsa.com:2181)] client.ZooKeeperSaslClient: Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock.
2016-05-26 16:11:43,499 WARN [main-SendThread(mdcthdpdas10lp.bcbsa.com:2181)] zookeeper.ClientCnxn: SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2016-05-26 16:11:43,500 INFO [main-SendThread(mdcthdpdas10lp.bcbsa.com:2181)] zookeeper.ClientCnxn: Opening socket connection to server mdcthdpdas10lp.bcbsa.com/10.145.4.30:2181
2016-05-26 16:11:43,503 INFO [main-SendThread(mdcthdpdas10lp.bcbsa.com:2181)] zookeeper.ClientCnxn: Socket connection established to mdcthdpdas10lp.bcbsa.com/10.145.4.30:2181, initiating session
2016-05-26 16:11:43,509 INFO [main-SendThread(mdcthdpdas10lp.bcbsa.com:2181)] zookeeper.ClientCnxn: Session establishment complete on server mdcthdpdas10lp.bcbsa.com/10.145.4.30:2181, sessionid = 0x154a09cb3c7005a, negotiated timeout = 30000
2016-05-26 16:11:43,516 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2290)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2304)
Caused by: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /hbase-secure1
at org.apache.zookeeper.KeeperException.create(KeeperException.java:121)
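A hedged way to inspect the znode ACL named in the InvalidACLException, assuming the standard HDP ZooKeeper client path (hostname taken from the log above):
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server mdcthdpdas10lp.bcbsa.com:2181
getAcl /hbase-secure1
# On a kerberized cluster, the SASL login failure in the WARN lines above means the client
# connected unauthenticated and then could not create/use a sasl-protected znode.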
05-26-2016
07:37 PM
Hi, I created a Hive view instance, but when I try to run a query it gives the error below:
Caused by: org.apache.thrift.protocol.TProtocolException: Required field 'serverProtocolVersion' is unset! Struct:TOpenSessionResp(status:TStatus(statusCode:ERROR_STATUS, infoMessages:[*org.apache.hive.service.cli.HiveSQLException:Failed to validate proxy privilege of ambari-qa for gv07680:33:32, org.apache.hive.service.auth.HiveAuthFactory:verifyProxyAccess:HiveAuthFactory.java:359, org.apache.hive.service.cli.thrift.ThriftCLIService:getProxyUser:ThriftCLIService.java:731, org.apache.hive.service.cli.thrift.ThriftCLIService:getUserName:ThriftCLIService.java:367, org.apache.hive.service.cli.thrift.ThriftCLIService:getSessionHandle:ThriftCLIService.java:394, org.apache.hive.service.cli.thrift.ThriftCLIService:OpenSession:ThriftCLIService.java:297, org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession:getResult:TCLIService.java:1253, org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession:getResult:TCLIService.java:1238, org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39, org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39, org.apache.thrift.server.TServlet:doPost:TServlet.java:83, org.apache.hive.service.cli.thrift.ThriftHttpServlet:doPost:ThriftHttpServlet.java:171, javax.servlet.http.HttpServlet:service:HttpServlet.java:727, javax.servlet.http.HttpServlet:service:HttpServlet.java:820, org.eclipse.jetty.servlet.ServletHolder:handle:ServletHolder.java:565, org.eclipse.jetty.servlet.ServletHandler:doHandle:ServletHandler.java:479, org.eclipse.jetty.server.session.SessionHandler:doHandle:SessionHandler.java:225, org.eclipse.jetty.server.handler.ContextHandler:doHandle:ContextHandler.java:1031, org.eclipse.jetty.servlet.ServletHandler:doScope:ServletHandler.java:406, org.eclipse.jetty.server.session.SessionHandler:doScope:SessionHandler.java:186, org.eclipse.jetty.server.handler.ContextHandler:doScope:ContextHandler.java:965, org.eclipse.jetty.server.handler.ScopedHandler:handle:ScopedHandler.java:117, org.eclipse.jetty.server.handler.HandlerWrapper:handle:HandlerWrapper.java:111, org.eclipse.jetty.server.Server:handle:Server.java:349, org.eclipse.jetty.server.AbstractHttpConnection:handleRequest:AbstractHttpConnection.java:449, org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler:content:AbstractHttpConnection.java:925, org.eclipse.jetty.http.HttpParser:parseNext:HttpParser.java:857, org.eclipse.jetty.http.HttpParser:parseAvailable:HttpParser.java:235, org.eclipse.jetty.server.AsyncHttpConnection:handle:AsyncHttpConnection.java:76, org.eclipse.jetty.io.nio.SelectChannelEndPoint:handle:SelectChannelEndPoint.java:609, org.eclipse.jetty.io.nio.SelectChannelEndPoint$1:run:SelectChannelEndPoint.java:45, java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145, java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615, java.lang.Thread:run:Thread.java:745, *org.apache.hadoop.security.authorize.AuthorizationException:User: ambari-qa is not allowed to impersonate gv07680:0:-1], sqlState:08S01, errorCode:0, errorMessage:Failed to validate proxy privilege of ambari-qa for gv07680), serverProtocolVersion:null)
I did the Kerberos setup for the Ambari user using ambari-server setup-security, with ambari-qa as the Ambari user. I set up the proxyuser settings in core-site.xml using the configs below:
hadoop.proxyuser.ambari-server.groups: *
hadoop.proxyuser.ambari-server.hosts: *
We are using Ambari-2.2.2 and HDP-2.3.0. Below are the configs for the Hive view instance:
Hive Authentication: auth=KERBEROS;principal=hive/_HOST@HADOOP.COM;hive.server2.proxy.user=gv07680
WebHDFS Username: gv07680
WebHDFS Authentication: auth=KERBEROS;proxyuser=ambari-qa@HADOOP.COM
Scripts HDFS Directory*: /user/gv07680/hive/scripts
HiveServer2 Thrift port*: 10001
HiveServer2 Http port*: 10001
HiveServer2 Http path*: cliservice
HiveServer2 Transport Mode*: http
WebHDFS FileSystem URI*: webhdfs://hostname:50070
There is no HA, so no HA-related configs. But I still see the "Failed to validate proxy privilege of ambari-qa for gv07680" error. Below is the config for /etc/ambari-server/conf/krb5JAASLogin.conf:
com.sun.security.jgss.krb5.initiate {
com.sun.security.auth.module.Krb5LoginModule required
renewTGT=false
doNotPrompt=true
useKeyTab=true
keyTab="/etc/security/keytabs/smokeuser.headless.keytab"
principal="ambari-qa@BCBSA.COM"
storeKey=true
useTicketCache=false;
};
Please advise.
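One hedged observation rather than a confirmed fix: the exception names ambari-qa as the proxying user, while the core-site entries above are for ambari-server. If HiveServer2 resolves the caller to ambari-qa, matching proxyuser entries would be needed, along the lines of:
<!-- Hypothetical core-site.xml additions; values mirror the existing ambari-server entries. -->
<property>
  <name>hadoop.proxyuser.ambari-qa.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.ambari-qa.groups</name>
  <value>*</value>
</property>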
Labels:
- Apache Ambari
05-11-2016
04:36 PM
I restarted all services from Ambari and HBase came up successfully without any problem. So it clearly shows the problem was caused by not following the order while restarting the services. But it would be great if someone could explain the root cause of the above problem. Thanks in advance.
05-11-2016
03:50 PM
Hi, I recently upgraded Ambari to 2.2.2. After the upgrade, Ambari asked for a restart of most of the components. I restarted them, but HBase was restarted ahead of ZooKeeper. Now the HBase master fails to start with the error below:
ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
Caused by: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /hbase-secure
Please help me on this. Thanks, Venkat
Labels:
- Apache HBase
02-09-2016
08:50 PM
1 Kudo
Hi, could you please let me know where this amb_ranger_admin user name is stored (Ambari/Ranger databases)?
01-28-2016
10:42 PM
1 Kudo
I tried manually starting/stopping from the problematic node and it was very quick. Below is the command used:
/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-regionserver/conf start/stop regionserver
When the regionserver was stopped manually, Ambari was notified of the change and reduced the regionserver count, and it increased the count again when the regionserver was started manually.
So it is confirmed that the problem occurs only when starting/stopping from Ambari, yet there are no logs related to the delay in either ambari-server.log or ambari-agent.log.
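Spelled out, the two invocations behind the "start/stop" shorthand above (paths exactly as in the post):
/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-regionserver/conf start regionserver
/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-regionserver/conf stop regionserver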
01-28-2016
08:47 PM
I checked the server and agent logs but did not see anything when I invoked the start/stop from Ambari on the problematic server.
01-28-2016
07:17 PM
1 Kudo
Hi, we have a cluster of 10 servers. One worker server among them has a problem when starting/stopping services from Ambari. When I invoke any operation on any service (HDFS/HBase/Metrics) from Ambari, the command takes a very long time to execute. I searched the Ambari logs and service logs but could not find any error. I tried restarting the Ambari server and Ambari agent, but still no luck. I had the same problem earlier and reinstalling the ambari-agent fixed it, but not this time. I deleted the host from the cluster, cleaned the entire server, and added it back, but the issue persists. Please advise. Thanks, Venkat
Labels:
- Apache Ambari
01-27-2016
10:08 PM
Hi, the Ranger usersync process is not syncing the AD users and throws the error below when a new user is added:
27 Jan 2016 15:48:15 ERROR LdapUserGroupBuilder [UnixUserSyncThread] - sink.addOrUpdateUser failed with exception: POST https://server1:6182/service/users/default returned a response status of 403 Forbidden, for user: mthal, groups:
It has picked up the newly added user but could not add it to the Ranger users list. I have enabled SSL for Ranger. Please advise. Thanks, Venkat
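A hedged probe of the same endpoint usersync calls, to separate an SSL/trust problem from a credentials problem (the admin:admin credentials below are placeholders):
# Inspect the certificate the Ranger admin presents on the SSL port:
openssl s_client -connect server1:6182 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
# Then hit the REST path from the error with placeholder credentials (-k skips cert checks for the test only):
curl -k -u admin:admin https://server1:6182/service/users/default
If this also returns 403, the usersync account's credentials or role, rather than SSL, is the likely culprit.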
Labels:
- Apache Ranger
01-22-2016
08:53 PM
Yes! It solved the problem.
01-22-2016
08:50 PM
Done, and it addressed the problem. I somehow missed it, but thanks once again, Artem Ervits!
01-22-2016
08:16 PM
1 Kudo
Hey, I found the root cause of the problem. The hive-site.xml file is not getting updated by Ambari for some reason, but it is updated on another server where the Hive client is installed. I copied the file to the server where the Hive master services are running and it works fine now. Now, another problem: why is Ambari not updating the hive-site.xml file on the master server?
01-22-2016
08:14 PM
Hi, I cannot access the Hive CLI; it fails with the error messages below:
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding datanucleus.identifierFactory value null from jpox.properties with datanucleus1
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding javax.jdo.option.ConnectionURL value null from jpox.properties with jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding javax.jdo.option.DetachAllOnCommit value null from jpox.properties with true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding hive.metastore.integral.jdo.pushdown value null from jpox.properties with false
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding datanucleus.storeManagerType value null from jpox.properties with rdbms
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding datanucleus.transactionIsolation value null from jpox.properties with read-committed
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding javax.jdo.PersistenceManagerFactoryClass value null from jpox.properties with org.datanucleus.api.jdo.JDOPersistenceManagerFactory
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: Overriding javax.jdo.option.Multithreaded value null from jpox.properties with true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.rdbms.useLegacyNativeValueStrategy = true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: hive.metastore.integral.jdo.pushdown = false
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.autoStartMechanismMode = checked
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: javax.jdo.option.Multithreaded = true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.identifierFactory = datanucleus1
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.transactionIsolation = read-committed
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.validateTables = false
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: javax.jdo.option.ConnectionURL = jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: javax.jdo.option.DetachAllOnCommit = true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: javax.jdo.option.NonTransactionalRead = true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.fixedDatastore = false
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.validateConstraints = false
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: javax.jdo.option.ConnectionDriverName = org.apache.derby.jdbc.EmbeddedDriver
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: javax.jdo.option.ConnectionUserName = APP
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.validateColumns = false
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.cache.level2 = false
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.plugin.pluginRegistryBundleCheck = LOG
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.cache.level2.type = none
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: javax.jdo.PersistenceManagerFactoryClass = org.datanucleus.api.jdo.JDOPersistenceManagerFactory
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.autoCreateSchema = true
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.storeManagerType = rdbms
16/01/22 12:59:03 [main]: DEBUG metastore.ObjectStore: datanucleus.connectionPoolingType = BONECP
16/01/22 12:59:03 [main]: INFO metastore.ObjectStore: ObjectStore, initialize called
16/01/22 12:59:04 [main]: DEBUG bonecp.BoneCPDataSource: JDBC URL = jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true, Username = APP, partitions = 1, max (per partition) = 10, min (per partition) = 0, idle max age = 60 min, idle test period = 240 min, strategy = DEFAULT
16/01/22 12:59:04 [BoneCP-pool-watch-thread]: ERROR bonecp.PoolWatchThread: Error in trying to obtain a connection. Retrying in 7000ms
java.sql.SQLException: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.setReadOnly(Unknown Source)
at com.jolbox.bonecp.ConnectionHandle.setReadOnly(ConnectionHandle.java:1324)
at com.jolbox.bonecp.ConnectionHandle.<init>(ConnectionHandle.java:262)
at com.jolbox.bonecp.PoolWatchThread.fillConnections(PoolWatchThread.java:115)
at com.jolbox.bonecp.PoolWatchThread.run(PoolWatchThread.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 13 more
Caused by: ERROR 25505: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
We are using MySQL for the metastore, but I see a JDBC URL with the Derby database. I am not sure whether it can be ignored, but I am highlighting it in case it gives a hint about the issue. Please help.
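Given that the JDBC URL above points at embedded Derby while the metastore is supposed to be MySQL, a quick hedged check, assuming the standard HDP client config path:
# If this still shows Derby (or the property is absent), the CLI is not picking up
# the Ambari-managed hive-site.xml with the MySQL metastore settings.
grep -A 1 'javax.jdo.option.ConnectionURL' /etc/hive/conf/hive-site.xml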
Labels:
- Apache Hive
01-15-2016
04:31 PM
1 Kudo
Hey Emaxwell, you are correct. The args should come after the configuration block is closed. The workflow is valid now. Artem Ervits: thank you so much for your help on this. The issue is resolved!
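For future readers, the element order that made the workflow validate, reconstructed from the workflow below and the fix just described (the distcp-action schema expects configuration before any arg elements):
<distcp xmlns="uri:oozie:distcp-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode1}</name-node>
<configuration>
<property>
<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
<value>${nameNode1},${nameNode2}</value>
</property>
</configuration>
<arg>${SourceDir}</arg>
<arg>${TargetDir}</arg>
</distcp>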
01-05-2016
03:36 PM
I did as you said. Below is the changed workflow:
<workflow-app xmlns="uri:oozie:workflow:0.3" name="pdr-distcp-wf">
<start to="distcp-node"/>
<action name="distcp-node">
<distcp xmlns="uri:oozie:distcp-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode1}</name-node>
<arg>${SourceDir}</arg>
<arg>${TargetDir}</arg>
<configuration>
<property>
<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
<value>${nameNode1},${nameNode2}</value>
</property>
</configuration>
<arg>${SourceDir}</arg>
<arg>${TargetDir}</arg>
</distcp>
<ok to="end"/>
<error to="kill"/>
</action>
<kill name="kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
I still see the error below:
Error: Invalid app definition, org.xml.sax.SAXParseException; lineNumber: 9; columnNumber: 19; cvc-complex-type.2.4.a: Invalid content was found starting with element 'configuration'. One of '{"uri:oozie:distcp-action:0.1":arg}' is expected.
Please advise. Thanks, Venkat
12-29-2015
07:10 PM
I changed the workflow as you suggested:
<workflow-app xmlns="uri:oozie:workflow:0.3" name="pdr-distcp-wf">
<start to="distcp-node"/>
<action name="distcp-node">
<distcp xmlns="uri:oozie:distcp-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode1}</name-node>
<arg>${SourceDir}</arg>
<arg>${TargetDir}</arg>
<configuration>
<property>
<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
<value>${nameNode1},${nameNode2}</value>
</property>
</configuration>
<arg>${SourceDir}</arg>
<arg>${TargetDir}</arg>
</distcp>
<ok to="end"/>
<error to="kill"/>
</action>
<kill name="kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
But I still get the same error:
Error: Invalid app definition, org.xml.sax.SAXParseException; lineNumber: 9; columnNumber: 19; cvc-complex-type.2.4.a: Invalid content was found starting with element 'configuration'. One of '{"uri:oozie:distcp-action:0.1":arg}' is expected.
12-29-2015
03:54 PM
Okay... Below is my workflow, where I have the distcp block closed as well:
<distcp xmlns="uri:oozie:distcp-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode1}</name-node>
<arg>${SourceDir}</arg>
<arg>${TargetDir}</arg>
<configuration>
<property>
<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
<value>${nameNode1},${nameNode2}</value>
</property>
</configuration>
</distcp>
I still get the same error:
Error: Invalid app definition, org.xml.sax.SAXParseException; lineNumber: 9; columnNumber: 19; cvc-complex-type.2.4.a: Invalid content was found starting with element 'configuration'. One of '{"uri:oozie:distcp-action:0.2":arg}' is expected.
12-24-2015
06:14 PM
I closed it. You can see below that it is closed with </configuration>.
12-18-2015
04:47 PM
2 Kudos
Hi, I am trying to submit an Oozie workflow with a distcp-action, but I get the error below when I validate the workflow:
oozie validate pdr-distcp-wf.xml
Error: Invalid app definition, org.xml.sax.SAXParseException; lineNumber: 9; columnNumber: 20; cvc-complex-type.2.4.a: Invalid content was found starting with element 'configuration'. One of '{"uri:oozie:distcp-action:0.2":arg}' is expected.
Please find the workflow I am using below:
<workflow-app xmlns="uri:oozie:workflow:0.2" name="pdr-distcp-wf">
<start to="distcp-node"/>
<action name="distcp-node">
<distcp xmlns="uri:oozie:distcp-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode1}</name-node>
<arg>${SourceDir}</arg>
<arg>${TargetDir}</arg>
<configuration>
<property>
<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
<value>${nameNode1},${nameNode2}</value>
</property>
</configuration>
</distcp>
<ok to="end"/>
<error to="kill"/>
</action>
<kill name="kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
Please help me out.
Labels:
- Apache Oozie
12-18-2015
03:58 PM
I think the problem is this: though I gave 777 over /apps/falcon/backupCluster/staging, when I tried creating a directory under /apps/falcon/backupCluster/staging/falcon/workflows/feed as the falcon user, it was created with 755 permissions, as below:
[falcon@hostname ~]$ hdfs dfs -mkdir /apps/falcon/backupCluster/staging/falcon/workflows/feed/test1
[falcon@hostname ~]$ hdfs dfs -ls /apps/falcon/backupCluster/staging/falcon/workflows/feed
drwxr-xr-x - falcon hdfs 0 2015-12-18 09:55 /apps/falcon/backupCluster/staging/falcon/workflows/feed/test1
The directory is not created with 777 permissions (umask 022). Do you think that could be the reason?
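A hedged way to confirm the umask theory: repeat the mkdir with an explicit client-side umask override and compare (test2 is a hypothetical name):
# fs.permissions.umask-mode is the HDFS client umask; 000 makes new dirs 777.
hdfs dfs -D fs.permissions.umask-mode=000 -mkdir /apps/falcon/backupCluster/staging/falcon/workflows/feed/test2
hdfs dfs -ls /apps/falcon/backupCluster/staging/falcon/workflows/feed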