Member since 05-10-2016 · 298 Posts · 35 Kudos Received · 0 Solutions
01-22-2020 07:58 AM
Thanks for your answer, but I don't think it is a configuration issue; it probably needs a new rewrite rule, and I don't know how to write one.
01-20-2020 06:50 AM
Hi,
I need your help. I am using Knox 1.3.0 and NiFi 1.9.2.
I am able to connect and create some dataflows through the Knox connection:
https://knox-host:8443/gateway/default/nifi-app/nifi/
But it fails when I try to view the content of a flowfile; the URL used by Knox seems to be wrong:
https://knox-host:8443/nifi-content-viewer/?ref=https%3A%2F%2FXXXX%3A8443%2Fgateway%2Fs3632tos%2Fnifi-app%2Fnifi-api%2Fflowfile-queues%2F5cfc80eb-e61a-3523-8790-b7609e13b483%2Fflowfiles%2F84979bbc-5952-4455-8427-db8ed74f214b%2Fcontent
Has somebody already had this issue?
Regards
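For what it's worth, decoding the ref query parameter of the failing URL makes the mismatch easier to see: the viewer path itself (/nifi-content-viewer/) has lost its /gateway/<topology> prefix, while the ref still points through the gateway. A small sketch using only the Python standard library (the URL is the one above):

from urllib.parse import urlsplit, parse_qs

failing = ("https://knox-host:8443/nifi-content-viewer/"
           "?ref=https%3A%2F%2FXXXX%3A8443%2Fgateway%2Fs3632tos%2Fnifi-app%2Fnifi-api"
           "%2Fflowfile-queues%2F5cfc80eb-e61a-3523-8790-b7609e13b483"
           "%2Fflowfiles%2F84979bbc-5952-4455-8427-db8ed74f214b%2Fcontent")

# parse_qs percent-decodes the value, so this prints the gateway URL the
# viewer was asked to fetch; compare it with the path the browser was sent to.
ref = parse_qs(urlsplit(failing).query)["ref"][0]
print(ref)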
08-01-2019 07:02 AM
The error message was not very clear, but it works for HDF 1.9.2. Thanks a lot!
07-31-2019 11:44 AM
Hi, I have also tried with NiFi 1.5 and it works too.
07-31-2019 11:25 AM
Hi all, I have an issue connecting to SQL Server with DBCPConnectionPool 1.9.2, but it works with DBCPConnectionPool 1.2. Do you know if there were any changes to DBCPConnectionPool between 1.2 and 1.9.2?
Connection parameters:
jdbc:jtds:sqlserver://${host}:${port};databaseName=xxxxxx;user=xxxxxx;password=xxxxxx
net.sourceforge.jtds.jdbc.Driver
/var/dlk/squad/squad-jars/jtds-1.3.1.jar
Thanks for the help. Here is the error on NiFi 1.9.2:
2019-07-31 12:53:42,483 ERROR [Timer-Driven Process Thread-12] o.a.n.p.standard.ListDatabaseTables ListDatabaseTables[id=db1439dc-aa6f-385b-be8f-66ee7faf84f4] ListDatabaseTables[id=db1439dc-aa6f-385b-be8f-66ee7faf84f4] failed to process session due to java.lang.AbstractMethodError; Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError
java.lang.AbstractMethodError: null
at net.sourceforge.jtds.jdbc.JtdsConnection.isValid(JtdsConnection.java:2833)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:470)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor537.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:87)
at com.sun.proxy.$Proxy304.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.ListDatabaseTables.onTrigger(ListDatabaseTables.java:230)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:209)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-07-31 12:53:42,483 WARN [Timer-Driven Process Thread-12] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ListDatabaseTables[id=db1439dc-aa6f-385b-be8f-66ee7faf84f4] due to uncaught Exception: java.lang.AbstractMethodError
java.lang.AbstractMethodError: null
	(stack trace identical to the one above)
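A guess from the stack trace rather than a confirmed diagnosis: jTDS 1.3.1 predates JDBC 4, and commons-dbcp2 (used by the newer DBCPConnectionPool) calls Connection.isValid() to validate pooled connections when no Validation Query is configured, which jTDS answers with AbstractMethodError; setting a Validation Query such as SELECT 1 on the controller service may sidestep the call. A minimal sketch to exercise the driver and that query outside NiFi, assuming the jaydebeapi package (host and credentials are placeholders; the jar path is the one from the post):

import jaydebeapi  # assumption: jaydebeapi is installed and a JVM is available

conn = jaydebeapi.connect(
    "net.sourceforge.jtds.jdbc.Driver",
    "jdbc:jtds:sqlserver://sqlhost:1433;databaseName=mydb",  # placeholders
    ["user", "password"],
    "/var/dlk/squad/squad-jars/jtds-1.3.1.jar",
)
curs = conn.cursor()
curs.execute("SELECT 1")  # the same query that would serve as the Validation Query
print(curs.fetchall())
curs.close()
conn.close()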
04-12-2018 12:37 PM
Hi all, we have opened a ticket with Hortonworks support but don't have any clues about this message yet. We get a lot of these:
2018-04-12 14:18:10,277 WARN [Replicate Request Thread-42] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Response time from node1-nifi:9091 was slow for each of the last 3 requests made. To see more information about timing, enable DEBUG logging for org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator
We have set these parameters:
nifi.cluster.node.connection.timeout=60 sec
nifi.cluster.node.read.timeout=60 sec
nifi.cluster.node.protocol.max.threads=100
nifi.cluster.node.protocol.port=9088
nifi.cluster.node.protocol.threads=80
nifi.web.jetty.threads=600
After restarting NiFi everything is OK for a few minutes before the message comes back. We don't use custom processors, CPU usage is around 5%, the host has 32 GB of RAM, and the JVM heap is set to a minimum of 8 GB and a maximum of 16 GB. NiFi 1.2. Any help is welcome.
12-06-2017 03:13 PM
Hi, thanks for your tutorial. Do you know whether PutParquet can deal with the "date" and "timestamp" types, or does that depend on the Avro schema? So the answer is NO.
12-04-2017 01:27 PM
@Bjorn Olsen: thanks for the help; in my controller I had forgotten the "date format". It works as expected. Thanks.
12-04-2017 10:42 AM
@bjorn: which version of NiFi did you use for the demo? I'm using NiFi 1.2.0 and the timestamp as "long" does not work.
10-20-2017 01:59 PM
@Shu: thanks. I tried with the escape character, but when I sent it to SplitAvro I got the issue with Avro.
10-20-2017 12:47 PM
Hi all, I used SelectHiveQL to run a select but got an issue: it seems the column name "delta-1" is not set correctly in the select command.
2017-10-20 14:28:05,805 ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.hive.SelectHiveQL SelectHiveQL[id=d3043004] Unable to execute HiveQL select query select country,year,delta-1 from table1 for StandardFlowFileRecord[uuid=d1e4662,claim=,offset=0,name=2846304697532889,size=0] due to org.apache.nifi.processor.exception.ProcessException: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException [Error 10004]: Line 1:33 Invalid table alias or column reference 'delta': (possible column names are: country, year, delta-1, ); routing to failure: org.apache.nifi.processor.exception.ProcessException: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement:
Thanks for the help.
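A hunch from the error rather than a confirmed fix: Hive seems to read delta-1 as the column delta minus the literal 1, so the hyphenated column likely needs backtick quoting in the query. A minimal sketch outside NiFi, assuming the PyHive package and a reachable HiveServer2 (host and database are placeholders; the table and columns are the ones from the post):

from pyhive import hive  # assumption: PyHive is installed

conn = hive.connect(host="hive-server", port=10000, database="default")
cur = conn.cursor()
# Backticks keep Hive from parsing `delta-1` as an arithmetic expression.
cur.execute("SELECT country, year, `delta-1` FROM table1 LIMIT 5")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()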
09-30-2017 02:55 PM
Yes, it is resolved. I deleted the processor and added a new one.
09-27-2017 04:43 PM
@wynner, there will be between 1 and 3 files per day.
09-22-2017 02:40 PM
Hi all, I use ListSFTP + FetchSFTP to get the files (there could be one, three, or five files per day) and put them with PutHDFS. A _SUCCESS file needs to be created once all the files have been put on HDFS. The question: how can I check that all the files have been put on HDFS before creating the empty _SUCCESS file? Regards
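Not an authoritative answer, but one way to picture the check outside the flow: compare the expected file count for the day with what actually landed, and only then write the empty _SUCCESS marker. A minimal sketch assuming the hdfs Python package (WebHDFS must be enabled; the endpoint, directory, and count are placeholders):

from hdfs import InsecureClient  # assumption: `hdfs` (HdfsCLI) is installed

client = InsecureClient("http://namenode:9870")  # placeholder WebHDFS endpoint
target_dir = "/data/incoming/2017-09-22"         # placeholder landing directory
expected = 3                                     # e.g. today's count from ListSFTP

landed = [name for name in client.list(target_dir) if name != "_SUCCESS"]
if len(landed) == expected:
    # Write the empty _SUCCESS marker only once everything is there.
    client.write(target_dir + "/_SUCCESS", data=b"", overwrite=True)

Inside NiFi itself the same idea is often expressed with the Wait/Notify processor pair or by merging the day's files before PutHDFS, but which fits depends on the flow.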
09-12-2017 09:35 AM
Hi all, could somebody explain what the value "percentCompleted" means when I make a POST /nifi-api/provenance? Must I wait for percentCompleted to equal 100% before making a GET /nifi-api/provenance/${id}? Here percentCompleted = 33:
{"provenance":{"id":"a0e66f97-b318-1fa4-0000-0000241111aa","uri":"https://nifi001:9443/nifi-api/provenance/a0e66f97-b318-1fa4-0000-0000241111aa","submissionTime":"09/12/2017 11:29:16.606 CEST","expiration":"09/12/2017 11:59:16.606 CEST","percentCompleted":33,"finished":false,"request":{"searchTerms":{"Component ID":"7176ddeb-015e-1000-ffff-ffff8dee922b","Event Type":"DROP"},"startDate":"09/12/2017 10:00:00 CEST","endDate":"09/12/2017 11:59:59 CEST","maxResults":1000},"results":{"provenanceEvents":[],"total":"0","totalCount":0,"generated":"11:29:16 CEST","oldestEvent":"09/11/2017 12:00:06 CEST",
bash-4.1# curl -v --compress -H Content-type:application/json --cacert nifi-cert.pem --cert ./nifi001.p12 -XGET https://nifi001:9443/nifi-api/provenance/a0e66f97-b318-1fa4-0000-0000241111aa
Thanks
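The response above already carries a "finished" flag next to "percentCompleted", so one pattern (a sketch, not official guidance) is to poll the GET endpoint until finished is true instead of watching the percentage. A minimal sketch with the requests package; the query ID comes from the response above, and the certificate paths assume PEM files like the ones used elsewhere in this thread:

import time
import requests  # assumption: requests is installed

base = "https://nifi001:9443/nifi-api/provenance"
query_id = "a0e66f97-b318-1fa4-0000-0000241111aa"
tls = {"cert": "./nifi001.pem", "verify": "nifi-cert.pem"}  # assumed PEM paths

while True:
    prov = requests.get(base + "/" + query_id, **tls).json()["provenance"]
    if prov["finished"]:  # results are only complete once finished is true
        break
    print("percentCompleted:", prov["percentCompleted"])
    time.sleep(1)

events = prov["results"]["provenanceEvents"]
requests.delete(base + "/" + query_id, **tls)  # queries should be cleaned up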
09-04-2017 12:21 PM
Hi all, I got this error, but I don't know what the issue is in the ListSFTP processor. This command line works:
sftp -o StrictHostKeyChecking=no -oIdentityFile=/home/xxx/id_dsa XXXX@x.x.33.5
In the ListSFTP processor:
Username: XXXX
Strict Host Key Checking: true or false (same error either way)
Host Key File: /home/xxx/id_dsa
And here is the error:
2017-09-04 14:04:16,655 ERROR [Timer-Driven Process Thread-2] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=3c8b4625-015e-1000-ffff-ffffda2b4cb0] Failed to perform listing on remote host due to java.io.IOException: Failed to obtain connection to remote host due to com.jcraft.jsch.JSchException: Auth fail: {}
java.io.IOException: Failed to obtain connection to remote host due to com.jcraft.jsch.JSchException: Auth fail
at org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:447)
at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:184)
at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:148)
at org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:104)
at org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:340)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.jcraft.jsch.JSchException: Auth fail
at com.jcraft.jsch.Session.connect(Session.java:519)
at com.jcraft.jsch.Session.connect(Session.java:183)
at org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:433)
... 16 common frames omitted
Thanks for your help.
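To separate key problems from processor configuration, one hedged sanity check is to replicate the working sftp command in code. A minimal sketch with the paramiko package; the host, user, and key path are the ones from the post:

import paramiko  # assumption: paramiko is installed

client = paramiko.SSHClient()
# Mirrors -o StrictHostKeyChecking=no from the working command line.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("x.x.33.5", username="XXXX",
               key_filename="/home/xxx/id_dsa")  # mirrors -oIdentityFile
sftp = client.open_sftp()
print(sftp.listdir("."))
sftp.close()
client.close()

If this connects, the key itself is fine and the processor settings deserve a second look; note that in ListSFTP the private key normally goes in the Private Key Path property, while Host Key File is meant for known-host entries, so the configuration quoted above may be pointing the key at the wrong property (a guess, not a confirmed diagnosis).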
09-01-2017 01:31 PM
@matt, by default the API queries the whole cluster unless we use clusterNodeId in the query parameters, right?
08-24-2017 03:51 PM
Hi all, I'm trying to use InvokeHTTP to get a digest and token from an API. Here is my curl command:
curl --silent -ik --url https://${OKAPI_HOST}/oauth/access_token -H "Authorization: Bearer $(base64 -w 0 < ${KEYTAB_FILE})" -d grant_type=client_credentials --data-urlencode scope=${SCOPE}
How do I send these parameters to InvokeHTTP? I'm thinking of using UpdateAttribute to set scope and Authorization, but it doesn't work. Thanks
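For reference, the same request expressed in code makes the moving parts explicit: one Authorization header plus two form fields. A minimal sketch with the requests package; okapi_host, the keytab path, and scope are placeholders standing in for the curl command's variables. (In InvokeHTTP, custom request headers are usually supplied as dynamic properties on the processor and the form body travels as the flowfile content with a matching Content-Type, but treat that as a hedged pointer rather than a recipe.)

import base64
import requests  # assumption: requests is installed

okapi_host = "okapi.example.com"            # placeholder for ${OKAPI_HOST}
scope = "my-scope"                          # placeholder for ${SCOPE}
with open("/path/to/keytab", "rb") as f:    # placeholder for ${KEYTAB_FILE}
    token = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "https://" + okapi_host + "/oauth/access_token",
    headers={"Authorization": "Bearer " + token},
    data={"grant_type": "client_credentials", "scope": scope},  # form-encoded
    verify=False,  # mirrors curl -k; avoid outside testing
)
print(resp.status_code, resp.text)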
08-24-2017 12:33 PM
@kdoran: with the web UI I get 18 results from my data provenance search, but when I do it with the Python script I get only 5 (only the results from node003).
08-23-2017 10:21 AM
Hi all, I have a NiFi cluster with 3 nodes. When I query data provenance with a Python script, only nifi-node003 sends me results, so I can't get provenance data from nifi-node001 and nifi-node002. What's wrong? Thanks for the help.

import httplib
import json

clusterNode = "nifi-node003"  # placeholder; set elsewhere in the original script

params = {"provenance": {"request": {"maxResults": 1000,
    "startDate": "08/23/2017 03:00:00 CEST", "endDate": "08/23/2017 03:59:59 CEST",
    "searchTerms": {"EventType": "SEND",
                    "ProcessorID": "0dd63d71-8620-1ce1-be9d-587c5f6ec679"}}}}

conn = httplib.HTTPSConnection(clusterNode, 9443, key_file=None,
                               cert_file="/var/opt/hosting/nifi/conf/nifi-001.pem")
headers = {"Content-Type": "application/json", "Accept": "application/json, text/javascript"}
conn.request("POST", "/nifi-api/provenance", json.dumps(params), headers)  # submit query
result = json.loads(conn.getresponse().read())
# Keep the connection open for the GET (the original closed it too early).
conn.request("GET", "/nifi-api/provenance/" + result['provenance']['id'])
resultGet = json.loads(conn.getresponse().read())
conn.request("DELETE", "/nifi-api/provenance/" + resultGet['provenance']['id'])  # clean up
res = conn.getresponse()
conn.close()
08-23-2017 07:25 AM
I get the same error if I launch multiple POSTs to query the provenance data:
08/23/2017 09:22:02 CEST: Node Status changed from CONNECTED to DISCONNECTED due to Failed to process request POST /nifi-api/provenance
08-23-2017 07:06 AM
Hi all, do you know why my node disconnects when I make some POST/GET/DELETE calls against the API?
params='{"provenance":{"request":{"maxResults":1000,"summarize":"true","incrementalResults":"false","startDate":"08/23/2017 00:00:00 CEST","endDate":"08/23/2017 23:59:59 CEST","clusterNodeId":"dae75344-81ca-4251-81b3-3476e9750f3f","searchTerms":{"EventType":"SEND","ProcessorID":"0dd63d71-8620-1ce1-be9d-587c5f6ec679"}}}}'
# This curl receives the provenance ID (the JSON must stay quoted when expanded)
curl -vv -H "Content-type: application/json" -XPOST 'https://nifi001:9443/nifi-api/provenance' --data-binary "$params"
# This curl gets the provenance results
curl -vv -H "Content-type: application/json" -XGET 'https://nifi001:9443/nifi-api/provenance/0dddacc8-015e-1000-0000-000025863d27'
# This curl deletes the query
curl -vv -H "Content-type: application/json" -XDELETE 'https://nifi001:9443/nifi-api/provenance/0dddacc8-015e-1000-0000-000025863d27'
08/23/2017 08:53:43 CEST: Node Status changed from CONNECTED to DISCONNECTED due to Failed to process request DELETE /nifi-api/provenance/0dddacc8-015e-1000-0000-000025863d27
Thanks
08-21-2017 01:28 PM
Hi all, I'm trying to run a Sqoop job and it ends up killed, but I don't know why. From the log, it looks as if an explicit kill request arrives from user001 at 10.99.224.106 rather than the job failing on its own. Here is the application log:
2017-08-21 14:21:15,179 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #4 tokens and #1 secret keys for NM use for launching container
2017-08-21 14:21:15,179 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 5
2017-08-21 14:21:15,179 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2017-08-21 14:21:15,563 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1501688338199_102303_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2017-08-21 14:21:15,567 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_e71_1501688338199_102303_01_000002 taskAttempt attempt_1501688338199_102303_m_000000_0
2017-08-21 14:21:15,568 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1501688338199_102303_m_000000_0
2017-08-21 14:21:15,569 INFO [ContainerLauncher #0] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : host049:45454
2017-08-21 14:21:15,629 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1501688338199_102303_m_000000_0 : 13562
2017-08-21 14:21:15,630 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1501688338199_102303_m_000000_0] using containerId: [container_e71_1501688338199_102303_01_000002 on NM: [host049:45454]
2017-08-21 14:21:15,634 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1501688338199_102303_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2017-08-21 14:21:15,634 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1501688338199_102303_m_000000 Task Transitioned from SCHEDULED to RUNNING
2017-08-21 14:21:15,981 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1501688338199_102303: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:239616, vCores:77> knownNMs=57
2017-08-21 14:21:16,767 INFO [Socket Reader #1 for port 44108] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for user001 (auth:SIMPLE)
2017-08-21 14:21:16,774 INFO [Socket Reader #1 for port 44108] SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for user001 (auth:TOKEN) for protocol=interface org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB
2017-08-21 14:21:16,826 INFO [IPC Server handler 0 on 44108] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Kill job job_1501688338199_102303 received from user001 (auth:TOKEN) at 10.99.224.106
2017-08-21 14:21:16,828 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1501688338199_102303Job Transitioned from RUNNING to KILL_WAIT
2017-08-21 14:21:16,842 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1501688338199_102303_m_000000 Task Transitioned from RUNNING to KILL_WAIT
2017-08-21 14:21:16,850 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1501688338199_102303_m_000000_0 TaskAttempt Transitioned from RUNNING to KILL_CONTAINER_CLEANUP
2017-08-21 14:21:16,850 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container container_e71_1501688338199_102303_01_000002 taskAttempt attempt_1501688338199_102303_m_000000_0
2017-08-21 14:21:16,851 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1501688338199_102303_m_000000_0
2017-08-21 14:21:16,851 INFO [ContainerLauncher #1] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: Opening proxy : host049:45454
2017-08-21 14:21:16,866 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1501688338199_102303_m_000000_0 TaskAttempt Transitioned from KILL_CONTAINER_CLEANUP to KILL_TASK_CLEANUP
2017-08-21 14:21:16,867 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: TASK_ABORT
2017-08-21 14:21:16,873 WARN [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://clusterbigdata/user/user001/oozie-oozi/0005541-170818100252684-oozie-oozi-W/sqoop-d8b5--sqoop/output/_temporary/1/_temporary/attempt_1501688338199_102303_m_000000_0
2017-08-21 14:21:16,879 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1501688338199_102303_m_000000_0 TaskAttempt Transitioned from KILL_TASK_CLEANUP to KILLED
2017-08-21 14:21:16,889 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1501688338199_102303_m_000000 Task Transitioned from KILL_WAIT to KILLED
2017-08-21 14:21:16,892 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2017-08-21 14:21:16,892 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1501688338199_102303Job Transitioned from KILL_WAIT to KILL_ABORT
2017-08-21 14:21:16,893 INFO [CommitterEvent Processor #2] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_ABORT
2017-08-21 14:21:16,904 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1501688338199_102303Job Transitioned from KILL_ABORT to KILLED
2017-08-21 14:21:16,904 INFO [Thread-67] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2017-08-21 14:21:16,905 INFO [Thread-67] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
Thanks for the help.
Tags: Hadoop Core, Sqoop, YARN
07-27-2017 03:21 PM
Hi all,
Has anybody already used the Google controller service and GCSBucket?
I don't know how to use it to list a bucket.
Without NiFi, I've used gsutil with a configuration file that has these parameters:
gs_oauth2_refresh_token
proxy
proxy_port
Any help welcome
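For comparison, here is what the listing itself looks like in code, which can help validate credentials and proxy settings independently of NiFi. A minimal sketch assuming the google-cloud-storage package; the bucket name and proxy are placeholders, and the library typically picks proxies up from the standard environment variables:

import os
from google.cloud import storage  # assumption: google-cloud-storage is installed

os.environ["HTTPS_PROXY"] = "http://proxy:3128"  # placeholder, mirrors gsutil's proxy/proxy_port
# Credentials: GOOGLE_APPLICATION_CREDENTIALS should point at a service
# account JSON key; the refresh-token flow gsutil uses is a different mechanism.
client = storage.Client()
for blob in client.list_blobs("my-bucket"):  # placeholder bucket name
    print(blob.name)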
07-20-2017 09:11 AM
Hi all, I'm using NiFi 1.3.0. There seems to be a bug in the SegmentContent processor: I can't set the value. Do you know what's happening? Thanks
07-19-2017 09:40 AM
@Pierre Villard We have the same performance issue: one processor reaches the limit around 200 msgs/second with 2 or 10 concurrent tasks. If we add more processors we can multiply the consumption: with 2 processors ~400 msgs/second and with 5 processors ~900 msgs/second. Each message is 2 KB.
# JVM memory settings
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
We have 3 nodes in the cluster (8 CPUs and 16 GB of memory each). Is there something wrong with this processor?
07-03-2017 06:33 PM
1 Kudo
@timothy I have to make a curl call to retrieve the data from an API, and to connect to this API I first need to make a curl call to retrieve a token and digest; these parameters do not exist in the NiFi HTTP processor.
07-03-2017 03:10 PM
1 Kudo
It seems to be the same problem as https://community.hortonworks.com/questions/44082/executestreamcommand-hangs-when-executing-hive-scr.html, but in my script I'm doing a simple curl http://url/
07-03-2017 02:51 PM
1 Kudo
Hi all, it seems that ExecuteStreamCommand doesn't delete the flowfile. The bash script works, but the queue does not empty after successful execution of the script.