Member since
08-28-2017
129
Posts
7
Kudos Received
6
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1452 | 07-08-2021 03:56 AM |
|  | 4428 | 07-20-2020 06:54 PM |
|  | 2055 | 06-03-2020 06:53 PM |
|  | 1181 | 05-28-2020 01:38 AM |
|  | 2377 | 05-26-2020 01:26 AM |
07-28-2023
12:23 PM
@Phil_I_AM It is always best to start a new question rather than commenting on an old post; you will get better traction from the community that way. The reason NiFi does not have a native processor that can handle rar files is that there do not appear to be native Java libraries available to do this: https://issues.apache.org/jira/browse/NIFI-8391 You may consider filtering and writing your rar files to a flow-specific directory on disk and then using the ExecuteStreamCommand processor to unpack the rar in that directory. You could have a new flow that uses the ListFile processor (configured to ignore files with the rar extension) and the FetchFile processor to monitor that directory for new files and consume them for further processing. If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click "Accept as Solution" below each response that helped. Thank you, Matt
07-12-2021
01:31 PM
Hi! Thanks for the answer! My old NiFi wasn't secured and didn't have users. My new installation has some users, and after adding and configuring the access policies the error was resolved. Thank you!
05-26-2021
12:27 PM
Hi @dyadav1 Thanks for pointing out the direction for the Kafka topic, but I need a little more help to determine the strategy. As I mentioned, each topic carries 100 different types of command, and I need to transform each command type into a new type (assuming a Jolt transformation, with a Jolt spec for each of the 100 command types). How do I store those specs and build a generalized flow, so that when a new command type is added I only need to add a spec and the flow itself doesn't require any changes? In other words, I need to dynamically figure out which command type is in the flow file and transform it based on the matching spec. I feel I need to use a lookup, but if I store all the specs in a database, then each transformation requires a database round trip, which I want to avoid.
10-09-2020
07:16 AM
1 Kudo
https://www.datainmotion.dev/2020/09/devops-working-with-parameter-contexts.html
- Download the flow / back it up / store it in Git
- Copy the flow to an archive
- Remove it from production
https://www.datainmotion.dev/2019/11/nifi-toolkit-cli-for-nifi-110.html
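A sketch of those steps using the NiFi Toolkit CLI, assuming a NiFi Registry is in place; the URLs, flow identifier, and output path are placeholders you would replace with your own:

```shell
# Sketch only: export a versioned flow to a JSON file for Git backup, then
# remove the process group from production. All IDs/URLs below are placeholders.

# Export a specific version of a flow from NiFi Registry to a local file:
./bin/cli.sh registry export-flow-version \
    -u http://registry.example.com:18080 \
    -f <flow-id> -fv 1 \
    -o /backups/my-flow-v1.json

# Commit the exported JSON to Git for safekeeping:
git -C /backups add my-flow-v1.json && git -C /backups commit -m "archive my-flow v1"

# Stop and delete the process group on the production NiFi instance:
./bin/cli.sh nifi pg-stop   -u http://nifi-prod.example.com:8080 -pgid <process-group-id>
./bin/cli.sh nifi pg-delete -u http://nifi-prod.example.com:8080 -pgid <process-group-id>
```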
09-25-2020
12:17 PM
Hi Steven, I used @bingo 's solution to get NiFi to find my JAVA_HOME. But you mention that NiFi does not need this to run. Do you know what the impact is of running NiFi without it knowing where Java is installed?
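For context, the usual place to pin this down is NiFi's environment script. A minimal fragment, assuming a typical Linux OpenJDK install path (adjust to your system); without JAVA_HOME set, the startup scripts typically fall back to whatever `java` is first on the PATH, which may not be the JDK version you intend:

```shell
# conf/nifi-env.sh (fragment) -- sketch only; the JDK path below is an example.
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export PATH="$JAVA_HOME/bin:$PATH"
```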
09-13-2020
09:08 AM
Thank you for the post, but another question. According to the document https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html the last stage says that, in case NameNode HA is enabled, the following command needs to be run on one of the NameNodes: hdfs zkfc -formatZK -force Since we have an active NameNode and a standby NameNode, we assume that NameNode HA is enabled on our cluster. We want to understand the risks of running this command on one of the NameNodes: is hdfs zkfc -formatZK -force safe to run without risks?
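A short sketch of how the HA state can be confirmed from the CLI before touching the ZKFC data; the NameNode service IDs `nn1`/`nn2` are placeholders for whatever is defined in your hdfs-site.xml:

```shell
# Sketch: confirm NameNode HA state before reformatting the ZKFC znode.
# "nn1" and "nn2" are the logical NameNode IDs from hdfs-site.xml (placeholders).
hdfs haadmin -getServiceState nn1   # should print "active" or "standby"
hdfs haadmin -getServiceState nn2

# -formatZK rebuilds the failover-controller znode in ZooKeeper. Run it on ONE
# NameNode only, ideally while the ZKFC daemons are stopped, so that no
# failover decision is made against a half-formatted znode.
hdfs zkfc -formatZK -force
```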
07-29-2020
10:49 AM
Ambari-metrics-monitor is also running on these 2 of the 26 hosts. The only issue I saw in the logs of one of the two hosts I checked: Jul 29, 2020 10:31:28 AM java.util.logging.LogManager$RootLogger log SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties
07-24-2020
03:32 AM
The policy is synced to all the nodes. You can check that in Ranger -> Audit -> Plugins. If it is not, you should check whether you have an access policy for the node identities.
07-15-2020
05:15 AM
1 Kudo
You can try this endpoint: http://&lt;ambari-server&gt;:8080/api/v1/stacks/{stackName}/versions/{stackVersion}/services To get help on API calls, use http://&lt;ambari-server&gt;:8080/api-docs where you can try out API calls to understand what the API returns.
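The same call can be made from the command line with curl. A sketch, assuming an HDP 3.1 stack and the default admin credentials; the hostname, stack name, version, and credentials are all placeholders:

```shell
# Sketch: query the Ambari REST API for the services in a stack version.
# Host, stack name/version, and credentials below are placeholders.
AMBARI_HOST="ambari-server.example.com"
curl -s -u admin:admin \
  "http://${AMBARI_HOST}:8080/api/v1/stacks/HDP/versions/3.1/services" \
  | python -m json.tool   # pretty-print the JSON response
```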
06-19-2020
07:48 AM
Hi. This Hadoop cluster has 1.4 PB of capacity, and on this node the mount points are in the following situation:

```
[root@ithbda108 ~]# df -h
Filesystem                                  Size  Used Avail Use% Mounted on
/dev/md2                                    459G   50G  385G  12% /
tmpfs                                       126G   36K  126G   1% /dev/shm
/dev/md0                                    453M   77M  349M  19% /boot
/dev/sda4                                   6.6T  6.3T  323G  96% /u01
/dev/sdb4                                   6.6T  6.3T  321G  96% /u02
/dev/sdc1                                   7.1T  6.8T  314G  96% /u03
/dev/sdd1                                   7.1T  6.8T  314G  96% /u04
/dev/sde1                                   7.1T  6.8T  318G  96% /u05
/dev/sdf1                                   7.1T  6.8T  323G  96% /u06
/dev/sdg1                                   7.1T  6.8T  325G  96% /u07
/dev/sdh1                                   7.1T  6.8T  323G  96% /u08
/dev/sdi1                                   7.1T  6.8T  324G  96% /u09
/dev/sdj1                                   7.1T  6.8T  324G  96% /u10
/dev/sdk1                                   7.1T  6.8T  324G  96% /u11
/dev/sdl1                                   7.1T  6.8T  322G  96% /u12
cm_processes                                126G  200M  126G   1% /var/run/cloudera-scm-agent/process
ithbda103.sopbda.telcel.com:/opt/exportdir  459G  338G   98G  78% /opt/shareddir
```

I suppose it can be a disk-space issue: there was no space left on the device at the time log4j tried to write. Any idea what action we can take to free up space on the mount points? Is there some Cloudera procedure to optimize this so that the process can stay up?
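A quick triage sketch for a nearly-full mount; the mount point and thresholds are examples, not a Cloudera-specific procedure:

```shell
# Sketch: find what is consuming space on a nearly-full mount (paths/thresholds
# are examples). -x / -xdev keeps the scan on this filesystem only.
MOUNT=/u01

# Ten largest first-level directories under the mount, biggest first:
du -x -s "${MOUNT}"/* 2>/dev/null | sort -rn | head -10

# Old log files are a common culprit; list large logs untouched for 30+ days:
find "${MOUNT}" -xdev -name '*.log*' -mtime +30 -size +100M 2>/dev/null
```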