Member since: 01-11-2019
Posts: 10
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 458 | 07-30-2019 02:55 PM |
07-31-2019 07:22 PM
Hi, We're facing a known issue in Kafka 2.1 provided with HDF 3.4.1 and need to upgrade to Kafka 2.2: https://issues.apache.org/jira/browse/KAFKA-7697 How can we do the manual upgrade? Should we follow the standard Kafka upgrade documentation, or is there a way to register the new version via HDF?
07-31-2019 01:41 PM
Hi, On a clean HDF install, Log Search is not working.

logfeeder.log:

Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://services.hdfXXXXX:8886/solr/hadoop_logs_shard1_replica_n1: Expected mime type application/octet-stream but$
<head> <meta http-equiv="Content-Type" content="text/html;charset=utf-8"/> <title>Error 404 Not Found</title> </head> <body><h2>HTTP ERROR 404</h2> <p>Problem accessing /solr/hadoop_logs_shard1_replica_n1/update. Reason: <pre> Not Found</pre></p> </body> </html>

logsearch.log:

2019-07-31 13:40:28,566 [pool-5-thread-1] ERROR org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler (TaskUtils.java:95) - Unexpected error occurred in scheduled task.
java.lang.RuntimeException: Invalid version (expected 2, but 60) or the data in not in 'javabin' format
    at org.apache.solr.common.util.JavaBinCodec.initRead(JavaBinCodec.java:186)

Solr itself seems to be working just fine. Any ideas?
07-30-2019 02:55 PM
1 Kudo
Turns out, the following needs to be executed on MySQL:

INSERT INTO schema_lock
SELECT CONCAT('schema_metadata_info-', schema_metadata_info.name) AS name,
       schema_metadata_info.timestamp
FROM schema_metadata_info
WHERE CONCAT('schema_metadata_info-', schema_metadata_info.name)
      NOT IN (SELECT name FROM schema_lock);
07-30-2019 12:23 PM
Hi, We recently upgraded our DEV and PROD environments from HDF 3.3 to HDF 3.4.1.1, following the upgrade instructions for an HDF-only cluster. After the upgrade, Schema Registry is giving the following exception in both environments for clients using the Confluent API:

ERROR [2019-07-30 11:17:42.014] [dw-94 - POST /api/v1/confluent/subjects/EgressLiveAllCXXX1-value/versions] c.h.r.s.w.ConfluentSchemaRegistryCompatibleResource - Encountered error while adding subject [EgressLiveAllCowBehavior1-value]
java.lang.RuntimeException: Failed to obtain write lock : schema_metadata_info-EgressLiveAllCXXXX1-value in 120 sec
    at com.hortonworks.registries.schemaregistry.DefaultSchemaRegistry.lockSchemaMetadata(DefaultSchemaRegistry.java:520)
    at com.hortonworks.registries.schemaregistry.DefaultSchemaRegistry.addSchemaVersion(DefaultSchemaRegistry.java:489)
    at com.hortonworks.registries.schemaregistry.webservice.ConfluentSchemaRegistryCompatibleResource.lambda$registerSchemaVersion$1(ConfluentSchemaRegistryCompatibleResource.java:322)
    at com.hortonworks.registries.schemaregistry.webservice.BaseRegistryResource.handleLeaderAction(BaseRegistryResource.java:77)
    at com.hortonworks.registries.schemaregistry.webservice.ConfluentSchemaRegistryCompatibleResource.registerSchemaVersion(ConfluentSchemaRegistryCompatibleResource.java:307)
    at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
    ...

I've tried flushing the MySQL tables and restarting the database and the Schema Registry services, but no luck. As this is impacting production, any help would be highly appreciated.
01-22-2019 04:40 PM
Thanks @kkawamura, we're testing that approach now!
01-21-2019 10:47 AM
1 Kudo
Hi there, I'm trying to use the EnforceOrder processor with an epoch timestamp as the order attribute. However, each time I try this, I get the following error:

Failed to parse order attribute due to java.lang.NumberFormatException: For input string: "103317120000
e7fe73b,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1548066389226-175, container=default, section=175], offset=755611, length=200],offset=0,name=40b54e6a-32c6-4065-8c79-ccff66313b2f.avro,size=200]

Looking into the code of the EnforceOrder processor, it turns out that it uses the Integer type, and it's a known issue/fact that current epoch timestamps won't fit in Java's 32-bit integer:

orderAttribute = processContext.getProperty(ORDER_ATTRIBUTE).getValue();
waitTimeoutMillis = processContext.getProperty(WAIT_TIMEOUT).asTimePeriod(TimeUnit.MILLISECONDS);
getOrder = flowFile -> Integer.parseInt(flowFile.getAttribute(orderAttribute));

Questions: Why did the team decide on int? Can this be changed to long, as that would seem to be the way to go? Is there a different strategy for using an epoch with this processor?
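To illustrate the overflow, here is a minimal standalone sketch (the timestamp value is an arbitrary current-era epoch in milliseconds chosen for the example, not taken from my flow):

```java
public class EpochOrderDemo {
    public static void main(String[] args) {
        // A current-era epoch timestamp in milliseconds (13 digits)
        String epochMillis = "1548066389226";

        // Integer.parseInt rejects it with NumberFormatException,
        // because the value exceeds Integer.MAX_VALUE (2147483647)
        try {
            Integer.parseInt(epochMillis);
            System.out.println("parsed as int");
        } catch (NumberFormatException e) {
            System.out.println("int parse failed: " + e.getMessage());
        }

        // Parsing the same string as long succeeds, which is why a
        // long-based order attribute would handle epoch values fine
        long order = Long.parseLong(epochMillis);
        System.out.println(order > Integer.MAX_VALUE); // prints true
    }
}
```

Epoch values in seconds (10 digits) still fit in an int until 2038, but millisecond epochs overflowed a 32-bit integer long ago, so any int-based parse of them fails immediately.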
01-21-2019 10:41 AM
The solution worked! I had some strange error on my side that disappeared after a clean setup of the flow. Thanks @Shu!
01-13-2019 02:11 PM
This really helps, and it's the solution that should work. However, every time I put the SQL SELECT statement in the flowfile content (using the GenerateFlowFile processor), the ExecuteSQL processor throws an error. The database.name attribute is set and the statement is a simple select a from table. The error is below; any idea what I'm missing?

[Timer-Driven Process Thread-5] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=15613b27-66e0-101a-ab78-86906b96182b] ExecuteSQL[id=15613b27-66e0-101a-ab78-86906b96182b] failed to process session due to org.apache.nifi.processor.exception.FlowFileHandlingException: StandardFlowFileRecord[uuid=ed4af901-f621-4aab-b7db-aad8c72e41ca,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1547378310570-136, container=default, section=136], offset=890898, length=10357],offset=9267,name=ed4328e2-1284-4ff3-b252-f7de7c92a94e,size=1090] is not known in this session (StandardProcessSession[id=315712]); Processor Administratively Yielded for 1 sec: org.apache.nifi.processor.exception.FlowFileHandlingException: StandardFlowFileRecord[uuid=ed4af901-f621-4aab-b7db-aad8c72e41ca,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1547378310570-136, container=default, section=136], offset=890898, length=10357],offset=9267,name=ed4328e2-1284-4ff3-b252-f7de7c92a94e,size=1090] is not known in this session (StandardProcessSession[id=315712])
01-12-2019 11:18 AM
@Shu Many thanks for responding. However, with this solution I will need multiple ExecuteSQL processors, each specifying a database.name attribute value, correct? I'll end up with dozens of ExecuteSQL processors that have the same SQL but a different database.name. Is my understanding correct, or am I missing a trick? I guess the question is: how/where do I set the database.name attribute? I've configured the service, but how is the ExecuteSQL processor configured?
01-11-2019 06:53 PM
Hi, I'm trying to connect to several databases that all have the same structure and execute the same query on each of them. However, I can't figure out how to give this to the NiFi ExecuteSQL processor, so I'm having to create a processor for every database connection. Is there a way for me to create a list of connection strings and simply loop through them? i.e. I need to dynamically assign the connection pooling service to the processor. How?