Member since: 02-07-2019
Posts: 2690
Kudos Received: 235
Solutions: 30
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1096 | 04-15-2025 10:34 PM |
 | 3279 | 10-28-2024 12:37 AM |
 | 1418 | 09-04-2024 07:38 AM |
 | 3244 | 06-10-2024 10:24 PM |
 | 1383 | 02-01-2024 10:51 PM |
12-11-2024
08:59 PM
2 Kudos
Hi Samsal, Firstly, I want to thank you for taking the time to solve my query. The solution you provided worked like magic. Secondly, yes, I am new to this platform and to JOLT; moving forward I will follow your tips and suggestions and go through the courses you've shared. Once again, thank you for your valuable assistance. It made a significant difference. I am grateful.
12-11-2024
06:33 AM
Hello, These are NOT errors:

INFO conf.Configuration: resource-types.xml not found.
INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.

As for this:

INFO mapreduce.Job: map 0% reduce 0%

How many mappers were specified for the import? Try locating the running containers in YARN and take a few JSTACKs to find out whether the mappers are stuck waiting on your source database; if so, make sure there are no firewall/network rules preventing the flow of data. Are you able to execute SQOOP EVAL against the source DB? If so, try re-running the import with these options (see the sketch below):

-jt local
-m 1
--verbose

If the job completes, that would confirm a communication issue between your NodeManagers and the source DB.
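For reference, a minimal sketch of that test (the JDBC URL, credentials, and table name are placeholders, not details from this thread):

```bash
# Confirm basic connectivity to the source DB from the edge node
sqoop eval \
  --connect jdbc:mysql://source-db:3306/mydb \
  --username myuser -P \
  --query 'SELECT 1'

# Re-run the import with a local job tracker, a single mapper, and verbose logging
sqoop import \
  -jt local -m 1 --verbose \
  --connect jdbc:mysql://source-db:3306/mydb \
  --username myuser -P \
  --table my_table \
  --target-dir /tmp/sqoop_test
```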
12-11-2024
01:16 AM
1 Kudo
@VidyaSargur It somewhat helped. It was failing because we had an NFS client running on that server. Since we have a customer-facing client -> server architecture for NFS, we could not start the HDFS NFS Gateway again on the same port. So the only solution was to stop the HDFS NFS Gateway.
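In case anyone needs to track down a port conflict like this, a quick sketch (assuming the default NFS port 2049; adjust for your configuration):

```bash
# See which process is already bound to the NFS port
ss -tlnp | grep ':2049'

# List the RPC services (nfs, mountd) currently registered on the host
rpcinfo -p localhost
```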
12-05-2024
03:08 AM
1 Kudo
@tono425, Thank you for your participation in the Cloudera Community. I'm happy to see you resolved your issue. Please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
12-03-2024
04:36 AM
1 Kudo
Hi @Mikhai, It's hard to say what is going on without looking at the data itself or seeing the ExcelReader configuration. I know providing the data is not easy, but if you can replicate the issue using dummy data, then please share it. Also, please provide more details on how you configured the ExcelReader; for example, are you using a custom schema or inferring the schema? I would try the following:

1. Try to find the table boundary in Excel and delete the empty rows. If you can't, then for the sake of testing, copy the table with only the rows you need into a new Excel file and see if that works.

2. If the ExcelReader works with the 545 rows, then I would provide a custom schema (if one is not provided already) and set some of the fields that should always have a value to not allow null. That may help the ExcelReader avoid importing empty rows.

I tried to use the ExcelReader before but ran into issues when the Excel file had formula columns, because of a bug in the reader itself. I'm not sure if those issues have been addressed, but as a workaround I used the Python extension to develop a custom processor that takes Excel input and converts it into JSON using the Pandas library (see the sketch below). This might be an option to consider if you are still having problems with the ExcelReader service, but you have to use NiFi 2.0 in order to use Python extensions.

If that helps, please accept the solution. Thanks
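To illustrate the Pandas route outside of NiFi, here is a minimal sketch (the file names are placeholders, and it assumes the pandas and openpyxl packages are installed); dropna(how='all') discards rows where every cell is empty, which is the symptom described above:

```bash
# Convert an Excel sheet to JSON records, skipping fully empty rows
python3 -c "
import sys
import pandas as pd  # needs pandas + openpyxl for .xlsx files

df = pd.read_excel(sys.argv[1])           # read the first sheet
df = df.dropna(how='all')                 # drop rows that are entirely empty
df.to_json(sys.argv[2], orient='records') # write rows as a JSON array of objects
" input.xlsx output.json
```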
11-25-2024
01:13 AM
1 Kudo
@Pravakar, Welcome to our community! To help you get the best possible answer, I have tagged in our NiFi experts @SAMSAL @MattWho who may be able to assist you further. Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.
11-22-2024
05:25 AM
1 Kudo
@Armel316 Since you only have two user group providers (ldap and file), that means both are returning user "xxx". If the ldap-user-group-provider is returning user "xxx", you don't want to define that same user through the file-user-group-provider.

What this means is that the users.xml file the file-user-group-provider loads users from on startup contains user "xxx". The file-user-group-provider will ONLY generate a users.xml file if one does not already exist. If one already exists, the file-user-group-provider will NOT make any modifications to it, even if you modify the provider configuration. Once a users.xml file exists, the expectation is that all future user/group modifications happen via the UI. NOTE: The users.xml does not contain any users or groups loaded into NiFi memory by other providers.

So you have two options here:

1. Rename the current users.xml file so a new one is created on startup with only the 3 defined node identities (this is the preferred method; see the sketch below).
2. Manually edit users.xml to remove all users that are being synced by the ldap-user-group-provider.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you, Matt
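A minimal sketch of the preferred option (paths assume a default NiFi install layout; adjust to yours):

```bash
# Stop NiFi, move the existing users.xml aside, then restart so a fresh
# users.xml is generated containing only the configured node identities
./bin/nifi.sh stop
mv ./conf/users.xml ./conf/users.xml.bak
./bin/nifi.sh start
```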
11-21-2024
10:29 PM
Hi @xiaohai,

> What is the error you are seeing?
> Can you use this delimiter?

impala-shell -B --output_delimiter='|' -q 'SELECT * FROM your_table'

Regards,
Chethan YM
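For completeness, a variant that writes the delimited rows straight to a file (the table name and output path are placeholders):

```bash
# -B turns off pretty-printing; -o writes the query output to a file
impala-shell -B --output_delimiter='|' \
  -q 'SELECT * FROM your_table' \
  -o /tmp/your_table.psv
```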
11-21-2024
12:02 AM
1 Kudo
Thank you @rki_! That is absolutely what happened. I had a node whose /tmp/ folder still contained old JournalNode data. After cleaning it up and running initializeSharedEdits, I managed to start the cluster.

Note: I had this exact exception on two slave nodes:

WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: There appears to be a gap in the edit log. We expected txid 121994, but got txid 121998.

I ran hdfs namenode -recover on both slave nodes (see the sketch below) and was then able to start both NameNodes properly. The data is replicated within all 3 nodes. Thank you so much for the help!
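For anyone hitting the same exception, a minimal sketch of the sequence described above (run as the hdfs user; exact steps depend on your distribution and HA setup):

```bash
# Re-initialize the shared edits directory on the JournalNodes
# (run on the active NameNode host)
hdfs namenode -initializeSharedEdits

# On each NameNode that reports the edit-log gap, repair it, then restart
hdfs namenode -recover
```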
11-20-2024
11:31 PM
1 Kudo
@sde_20241, Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future. However, if you still have concerns, please provide the information that @Asfahan has requested.