Member since: 10-08-2019
Posts: 12
Kudos Received: 0
Solutions: 0
12-16-2021
07:33 AM
Hi @smruti, this is the response:

[root@datap01 ~]# yarn logs -applicationId application_1639152705224_0018 > app_log.out
WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
INFO client.RMProxy: Connecting to ResourceManager at srvbigdatap01.agbar.ga.local/10.200.14.72:8032
File /tmp/logs/hdfs/logs/application_1639152705224_0018 does not exist.
Can not find any log file matching the pattern: [ALL] for the application: application_1639152705224_0018
Can not find the logs for the application: application_1639152705224_0018 with the appOwner: hdfs

😕
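For reference, that last message usually means the aggregated logs under the remote application log directory were written by a different user than the one you queried as, or log aggregation is disabled or not yet finished. A minimal sketch of how one might check, assuming a standard CDH client config path and the default /tmp/logs remote app-log directory, and assuming the job may have been submitted by a user other than hdfs (the user "hive" below is only an assumption):

# Confirm log aggregation is enabled (config path is an assumption)
grep -A1 'yarn.log-aggregation-enable' /etc/hadoop/conf/yarn-site.xml

# Retry with the user that actually submitted the job (owner "hive" is an assumption)
yarn logs -applicationId application_1639152705224_0018 -appOwner hive > app_log.out

# List which owners actually have aggregated logs for this application id
hdfs dfs -ls '/tmp/logs/*/logs/' | grep application_1639152705224_0018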
12-10-2021
08:52 AM
Hi, I have an issue with my Hive queries; they are very simple queries. One day they started to fail, and this is the error:

Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Could not find status of job:job_1639152705224_0018
INFO : Kill Command = /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hadoop/bin/hadoop job -kill job_1639152705224_0018
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
INFO : 2021-12-10 17:35:35,913 Stage-1 map = 0%, reduce = 0%
INFO : 2021-12-10 17:35:43,176 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 7.34 sec
ERROR : Ended Job = job_1639152705224_0018 with exception 'java.io.IOException(Could not find status of job:job_1639152705224_0018)'
java.io.IOException: Could not find status of job:job_1639152705224_0018
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:300)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:574)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:454)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2200)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1843)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1563)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1339)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1334)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256)
at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Could not find status of job:job_1639152705224_0018
INFO : Completed executing command(queryId=hive_20211210173528_ff76c3df-a33b-41d0-b328-460c9b65deda); Time taken: 21.669 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Could not find status of job:job_1639152705224_0018 (state=08S01,code=1)
Closing: 0: jdbc:hive2://datap01.agbar.ga.local:10000/default

Thanks
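For what it's worth, "Could not find status of job" from HadoopJobExecHelper typically means Hive could not read the job status back after the map phase completed, which often points at the MapReduce JobHistory Server rather than the query itself. A quick sanity check, assuming a standard CDH layout (paths and the history-server host are assumptions, not taken from your logs):

# Where does the cluster expect the JobHistory Server to be?
grep -A1 'mapreduce.jobhistory.address' /etc/hadoop/conf/mapred-site.xml

# Ask for the job status directly; if this also fails, the history/RM lookup is the problem
mapred job -status job_1639152705224_0018

# Optionally probe the JobHistory Server REST API (host is an assumption, 19888 is the default web port)
curl -s http://srvbigdatap01.agbar.ga.local:19888/ws/v1/history/info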
Labels:
- Apache Hive
- Apache YARN
07-28-2021
05:20 AM
I ran into a problem after installing Oozie from the Cloudera distribution: it did not create the folder it needs.

SERVER[srvbigdatap01.agbar.ga.local] USER[hdfs] GROUP[-] TOKEN[] APP[test] JOB[0000000-210728110211255-oozie-oozi-W] ACTION[0000000-210728110211255-oozie-oozi-W@shell-e23d] Error starting action [shell-e23d]. ErrorType [FAILED], ErrorCode [Failed to add action specific sharelib], Message [File /user/oozie/share/lib does not exist.]
org.apache.oozie.action.ActionExecutorException: File /user/oozie/share/lib does not exist.

Any idea why? Thanks
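For context, this error usually just means the Oozie ShareLib was never installed in HDFS. In Cloudera Manager there is an "Install Oozie ShareLib" action on the Oozie service; a rough sketch of the manual equivalent is below, where the NameNode URI, port and parcel path are assumptions for a CDH parcel install:

# Run as the oozie user; fs URI and local sharelib path are assumptions
sudo -u oozie oozie-setup.sh sharelib create \
  -fs hdfs://srvbigdatap01.agbar.ga.local:8020 \
  -locallib /opt/cloudera/parcels/CDH/lib/oozie/oozie-sharelib-yarn

# Verify the sharelib now exists in HDFS
sudo -u hdfs hdfs dfs -ls /user/oozie/share/lib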
Labels:
- Apache Oozie
- Cloudera Hue
07-28-2021
04:54 AM
Hi @ask_bill_brooks, I changed the clusterid in the XML configuration files and it works fine now. Thanks.
07-07-2021
09:26 AM
Hi, I get this error when I try to create a new cluster or add a new host to an existing cluster. I am doing it the basic way, through Cloudera Manager, but I can't figure out how to fix it. All nodes can access the internet and were working before in another cluster that has since been removed. This is a screenshot.

The agent log on the node shows this:

Traceback (most recent call last):
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/agent.py", line 1390, in _send_heartbeat
    self.cfg.master_port)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 469, in __init__
    self.conn.connect()
  File "/usr/lib64/python2.7/httplib.py", line 833, in connect
    self.timeout, self.source_address)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
timeout: timed out
[07/Jul/2021 18:21:00 +0000] 1352 MainThread agent WARNING Long HB processing time: 45.0577571392
[07/Jul/2021 18:21:00 +0000] 1352 MainThread agent WARNING Delayed HB: 40s since last
[07/Jul/2021 18:21:45 +0000] 1352 MainThread agent ERROR Heartbeating to 10.200.0.21:7182 failed.
Traceback (most recent call last):
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/agent.py", line 1390, in _send_heartbeat
    self.cfg.master_port)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 469, in __init__
    self.conn.connect()
  File "/usr/lib64/python2.7/httplib.py", line 833, in connect
    self.timeout, self.source_address)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
timeout: timed out
[07/Jul/2021 18:21:45 +0000] 1352 MainThread agent WARNING Long HB processing time: 45.0572810173
[07/Jul/2021 18:21:45 +0000] 1352 MainThread agent WARNING Delayed HB: 40s since last

And on the Cloudera Manager server:

2021-07-07 18:13:59,215 INFO scm-web-8827:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogress
2021-07-07 18:13:59,217 INFO scm-web-8827:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogress, Status:200
2021-07-07 18:19:04,139 INFO scm-web-8616:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogress
2021-07-07 18:19:04,141 INFO scm-web-8616:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogress, Status:200
2021-07-07 18:19:36,858 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Reaped total of 0 deleted commands
2021-07-07 18:19:36,860 INFO StaleEntityEviction:com.cloudera.server.cmf.StaleEntityEvictionThread: Found no commands older than 2019-07-08T16:19:36.858Z to reap.
2021-07-07 18:19:36,861 INFO StaleEntityEviction:com.cloudera.server.cmf.node.NodeScannerService: Reaped 0 requests.
2021-07-07 18:19:36,861 INFO StaleEntityEviction:com.cloudera.server.cmf.node.NodeConfiguratorService: Reaped 0 requests.
2021-07-07 18:20:00,873 INFO ScmActive-0:com.cloudera.server.cmf.components.ScmActive: (119 skipped) ScmActive completed successfully.
2021-07-07 18:20:43,985 WARN CMMetricsForwarder-0:com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder: (29 skipped) Not forwarding metrics to SMON since it's status is STOPPED
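Reading the traceback, the agent is timing out while opening a TCP connection to the Cloudera Manager server's agent port (10.200.0.21:7182), so it behaves like a routing or firewall problem between the new host and the CM server rather than a CM bug. A minimal sketch of what one could run from the failing host to confirm (host and port are taken from the log above; everything else is an assumption):

# Can this host reach the CM server's agent port at all?
curl -v telnet://10.200.0.21:7182 --connect-timeout 5

# Alternative check if nc is installed
nc -zv 10.200.0.21 7182

# Make sure the agent is actually pointed at the right server host/port
grep -E 'server_(host|port)' /etc/cloudera-scm-agent/config.ini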
Labels:
- Cloudera Manager
- Manual Installation
01-19-2021
05:41 AM
Hi @MattWho, I just killed the process that was using NiFi on this particular node and then restarted the service. That's all. Thanks
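In case someone else lands here, a hedged sketch of how one might find and kill whatever is holding NiFi's web port before restarting the service (the port 8080 below is only an assumption; use the value of nifi.web.http.port or nifi.web.https.port from nifi.properties):

# Find the PID listening on the NiFi web port (port number is an assumption)
sudo lsof -i :8080 -sTCP:LISTEN

# or, if lsof is not available
sudo netstat -tlnp | grep 8080

# Kill the offending process (replace <PID>), then restart NiFi
sudo kill <PID>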
01-12-2021
04:51 AM
I have a problem with a NiFi node, with the error below. I already removed it from the cluster and reinstalled it, but it still shows the same error. 😞

HTTP ERROR 500
Problem accessing /nifi/. Reason: Server Error
Caused by: javax.servlet.ServletException: org.apache.jasper.JasperException: java.lang.ClassNotFoundException: org.apache.jsp.WEB_002dINF.pages.canvas_jsp
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:724)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:61)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
at java.lang.Thread.run(Thread.java:748)

Thanks.
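For reference, a ClassNotFoundException for org.apache.jsp.WEB_002dINF.pages.canvas_jsp usually points at a stale or corrupted Jetty work directory where NiFi unpacks and compiles its JSPs, which would also explain why reinstalling the node alone did not help. One thing that is often worth trying, as a sketch only (the NiFi home path is an assumption and will differ on a managed install):

# Stop NiFi, clear the unpacked web apps and NARs, then start so Jetty re-extracts them
/opt/nifi/bin/nifi.sh stop
rm -rf /opt/nifi/work/jetty /opt/nifi/work/nar
/opt/nifi/bin/nifi.sh start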
Labels:
- Apache NiFi
07-31-2020
07:03 AM
It worked fine ... thank you very much
10-15-2019
04:57 AM
Hi @MattWho, I'll tell you what I'm doing:

1. Tear down node XX, which is disconnected from the cluster because of a different flow.xml.
2. Copy flow.xml.gz from a node that is fine to node XX.
3. In the nifi.properties file, replace the nifi.sensitive.props.key of the disconnected node with the nifi.sensitive.props.key of the node that is connected.
4. Restart node XX.

But every time I restart a node it generates a new folder. This is an example:

drwxr-x--x 6 nifi      nifi      520 oct 15 10:45 1216-nifi-NIFI_NODE
drwxr-x--x 6 nifi      nifi      520 oct 15 10:45 1211-nifi-NIFI_NODE
drwxr-x--x 6 nifi      nifi      520 oct 15 10:33 1201-nifi-NIFI_NODE
drwxr-x--x 3 zookeeper zookeeper 360 oct 15 10:30 1207-zookeeper-server
drwxr-x--x 3 zookeeper zookeeper 380 oct 15 10:30 1022-zookeeper-server
drwxr-x--x 6 nifi      nifi      520 oct 15 10:29 1196-nifi-NIFI_NODE
drwxr-x--x 6 nifi      nifi      520 oct 15 10:08 1175-nifi-NIFI_NODE

The nifi.properties file is inside every XXXX-nifi-NIFI_NODE folder. I don't know how to stop this folder creation, because it does not make sense to change the key in the latest folder when the node is down.

Thanks
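If those numbered XXXX-nifi-NIFI_NODE directories live under the management agent's process directory (an assumption based on the naming pattern), they are per-start configuration snapshots that get regenerated on every restart, so edits made inside them will not survive; the sensitive props key normally has to be changed in the service's configuration instead. Purely as a sketch of what steps 2 and 3 look like on the command line for an unmanaged install (the host name "goodnode" and all paths are assumptions):

# Step 2: copy the working flow from a healthy node to node XX
scp goodnode:/var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz

# Step 3: the key on node XX must be identical to the one on the node that encrypted that flow
grep 'nifi.sensitive.props.key=' /var/lib/nifi/conf/nifi.properties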
10-08-2019
09:46 AM
I have that problem and I have followed the steps of copying the flow.xml.gz to the nodes that do not connect, but after copying it I have problems decrypting it, since that flow.xml.gz was encrypted with a key generated at that moment, which no longer matches.
This is the log:

2019-10-08 16:59:57,735 ERROR org.apache.nifi.web.server.JettyServer: Unable to load flow due to: org.apache.nifi.controller.serialization.FlowSynchronizationException: org.apache.nifi.encrypt.EncryptionException: There was a problem decrypting a sensitive flow configuration value. Check that the nifi.sensitive.props.key value in nifi.properties matches the value used to encrypt the flow.xml.gz file
org.apache.nifi.controller.serialization.FlowSynchronizationException: org.apache.nifi.encrypt.EncryptionException: There was a problem decrypting a sensitive flow configuration value. Check that the nifi.sensitive.props.key value in nifi.properties matches the value used to encrypt the flow.xml.gz file
at org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:478)
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1296)
at org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:88)
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:812)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:476)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1009)
at org.apache.nifi.NiFi.<init>(NiFi.java:158)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:297)
Caused by: org.apache.nifi.encrypt.EncryptionException: There was a problem decrypting a sensitive flow configuration value. Check that the nifi.sensitive.props.key value in nifi.properties matches the value used to encrypt the flow.xml.gz file
at org.apache.nifi.controller.serialization.FlowFromDOMFactory.decrypt(FlowFromDOMFactory.java:552)

I don't know what to do with this. Thanks.
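A minimal way to confirm the mismatch the error describes, assuming default conf locations (paths are assumptions): compare nifi.sensitive.props.key on the node whose flow.xml.gz you copied with the node that fails to load it; the two values must be identical before the copied flow can be decrypted. If the failing node's local flow is disposable, moving it aside and letting the node inherit the cluster flow on startup is another common option.

# On the source node and on the failing node, compare these values; they must match
grep 'nifi.sensitive.props.key=' /var/lib/nifi/conf/nifi.properties

# Optional: move the local flow aside so the node re-inherits the flow from the cluster on join
mv /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.bak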
Labels:
- Apache NiFi