Member since: 05-15-2018
Posts: 132
Kudos Received: 15
Solutions: 7
My Accepted Solutions
Views | Posted
---|---
1660 | 06-02-2020 06:22 PM
20216 | 06-01-2020 09:06 PM
2621 | 01-15-2019 08:17 PM
4916 | 12-21-2018 05:32 AM
5356 | 12-16-2018 09:39 PM
11-22-2024
05:25 AM
1 Kudo
@Armel316 Since you only have two user group providers (ldap and file), both must be returning user "xxx". If the ldap-user-group-provider is returning user "xxx", you don't want to define that same user through the file-user-group-provider as well. What this means is that the users.xml file the file-user-group-provider loads users from on startup contains user "xxx".

The file-user-group-provider will ONLY generate a users.xml file if one does not already exist. If one already exists, the file-user-group-provider will NOT make any modifications to it, even if you modify the provider configuration. Once a users.xml file exists, the expectation is that all future user/group modifications happen via the UI. NOTE: The users.xml does not contain any users or groups being loaded into NiFi memory by other providers.

So you have two options here:
1. Rename the current users.xml file so a new one is created on startup with only the 3 defined node-identities. (This is the preferred method; see the sketch below for what the regenerated file looks like.)
2. Manually modify the users.xml to remove all users that are being synced by the ldap-user-group-provider.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
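For reference, a freshly regenerated users.xml seeded only with the node identities looks roughly like this (a sketch; the identifier UUIDs and DNs below are placeholders, not values from your environment):

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <tenants>
        <groups/>
        <users>
            <!-- one entry per configured node identity; identifiers are generated UUIDs -->
            <user identifier="11111111-0000-0000-0000-000000000001" identity="CN=nifi-node1, OU=NIFI"/>
            <user identifier="11111111-0000-0000-0000-000000000002" identity="CN=nifi-node2, OU=NIFI"/>
            <user identifier="11111111-0000-0000-0000-000000000003" identity="CN=nifi-node3, OU=NIFI"/>
        </users>
    </tenants>

After renaming the old file and restarting, the ldap-synced users come only from the ldap-user-group-provider in memory, not from this file.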
11-21-2024
03:53 AM
1 Kudo
@satz I just noticed that when I run the same query in Hive instead of the Impala editor, the measurement_time column shows only NULL values. Does that mean the files were written by Hive? I would really appreciate any further suggestions!
11-19-2024
10:30 AM
1 Kudo
@phadkev We would need to see more of the nifi-app.log to better understand what is going on here. Are you seeing the same org.apache.nifi.controller.StandardProcessorNode Timed out while waiting for OnScheduled exception for other components, or just this ExecuteScript processor? The exception itself is generic and could be thrown for any processor class. Are you ever seeing the log line telling you the NiFi UI is available at the following URLs? If so, NiFi is up. Are you seeing NiFi shut back down with some exception and stack trace in the nifi-app.log?

What you shared implies NiFi is having issues scheduling this specific processor to execute. This could very well be caused by an issue with the custom script that was built and used in this processor.

If your NiFi is really not coming up, you could modify the nifi.properties file by changing "nifi.flowcontroller.autoResumeState=true" to "nifi.flowcontroller.autoResumeState=false". This will allow your NiFi to start without starting any processors. You could then search the UI for the ExecuteScript processor with id "acb441ba-c36b-1fdd-53f2-3a4821d43833", disable it, and start all your other processors. Restart your NiFi to see if you still have any issues. This isolates the issue to this processor and your script.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
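That change is just this one line in nifi.properties (the property quoted above; set it back to true once the problem processor is isolated):

    # nifi.properties - start NiFi without auto-starting any components
    nifi.flowcontroller.autoResumeState=false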
11-19-2024
04:41 AM
1 Kudo
Yes, the CSV Format is a Custom Format.
11-18-2024
09:37 PM
Hi @satz, this is my code. Here I log both the client and session variables. The client variable returns a circular JSON response and the session variable does not return anything, so please help me with this.

    const hive = require('hive-driver');
    const { TCLIService, TCLIService_types } = hive.thrift;
    const client = new hive.HiveClient(TCLIService, TCLIService_types);

    client.connect(
        {
            host: "xxxxx",
            port: "xxx",
            database: "xxxx",
            username: "xxxx",
            password: "xxxxx",
        },
        new hive.connections.TcpConnection(),
        new hive.auth.NoSaslAuthentication()
    ).then(async client => {
        console.log('client', client);
        const session = await client.openSession({
            client_protocol: TCLIService_types.TProtocolVersion.HIVE_CLI_SERVICE_PROTOCOL_V10
        });
        console.log('session', session);
        const response = await session.getInfo(
            TCLIService_types.TGetInfoType.CLI_DBMS_VER
        );
        console.log(response.getValue());
        await session.close();
    }).catch(error => {
        console.log(error);
    });
11-15-2024
11:50 PM
1 Kudo
@mike_bronson7 These messages indicate that the controller is being elected on another node and that the current broker acting as controller is initiating a clean shutdown to hand over the controller responsibilities to the newly elected controller. These messages are normal, but if you are facing frequent controller failures / elections / switches to other nodes, that could be a concern. As long as this is happening for a genuine reason, such as a cluster restart or a restart of the controller broker, it is valid. Also, if the controller broker disconnects from ZooKeeper and loses the controller znode, the other brokers participate in a controller election, which would also trigger these messages.
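If you want to confirm which broker currently holds the controller role, one way (a sketch assuming a ZooKeeper-based cluster and the stock Kafka tooling; the ZooKeeper host below is a placeholder) is to read the controller znode:

    # prints JSON like {"version":1,"brokerid":2,"timestamp":"..."} - brokerid is the active controller
    bin/zookeeper-shell.sh zk-host:2181 get /controller

If brokerid keeps changing over time, that matches the frequent-election concern above.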
11-15-2024
11:43 PM
1 Kudo
@NagendraKumar Are there any messages in CML's cron logs?
11-15-2024
11:26 PM
1 Kudo
@scoutjohn Thank you for posting your query with us. What kind of encoding / serialization format does your other application use to produce messages to Kafka? I can see message header encoding options with the "PublishKafka_2_6" processor (https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-2-6-nar/1.27.0/org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_6/index.html), but I am not sure if that is the option you are looking for.
11-13-2024
03:48 PM
1 Kudo
@satz The image is the result of the last check step, which is to connect to localhost:7180 and add the cluster after installing the OS using the commands on the trial install page. At that point, the cluster status shows that the NameNode is down. After that, no matter how many times I refresh the page or restart via commands in the OS, I get an access denied (page cannot be opened) message when I connect to CM. Also, when I install on one OS and proceed, sometimes during the process of adding a cluster only the host that passed host inspection ends up in the cluster, and the rest of the hosts are removed. (Sometimes I can completely reinstall them afterward and sometimes I can't, but I don't know what makes the difference.) The log is so long that I would like to upload it as a file, but there is no upload function.
07-09-2024
11:26 PM
1 Kudo
Based on the event log files, you need to adjust the Spark History Server (SHS) settings. Could you please check whether SHS cleanup is enabled? If you enable it, Spark automatically cleans up old event log files. To load larger event log files, you also need to increase the History Server daemon memory (SPARK_DAEMON_MEMORY). You can refer to the following article for the SHS parameters: https://spark.apache.org/docs/latest/monitoring.html#spark-history-server-configuration-options
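A minimal sketch of those settings (the property names come from the linked monitoring docs; the values are only illustrative, tune them to your retention needs and event log sizes):

    # spark-defaults.conf - clean up old event logs automatically
    spark.history.fs.cleaner.enabled   true
    spark.history.fs.cleaner.interval  1d
    spark.history.fs.cleaner.maxAge    7d

    # spark-env.sh - heap for the History Server daemon
    export SPARK_DAEMON_MEMORY=4g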