Member since
04-05-2016
188
Posts
19
Kudos Received
11
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 907 | 10-30-2017 07:05 AM
 | 1194 | 10-12-2017 07:03 AM
 | 4845 | 10-12-2017 06:59 AM
 | 7112 | 03-01-2017 09:56 AM
 | 21281 | 01-26-2017 11:52 AM
07-31-2024
03:23 AM
1 Kudo
@Adyant001, Welcome to the Cloudera Community. As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
03-16-2022
01:41 AM
Thanks for all your suggestions. Increasing the heap size resolved the issue. Thanks, Suryakant
10-07-2020
05:07 AM
Hello Shishir, would you mind explaining how we can migrate a standalone NiFi setup to cluster mode? Thanks, snm1523
07-23-2020
07:39 AM
1 Kudo
Checked the logs for more details and found the error: "Unexpected end of input stream".
Get the HDFS LOCATION for the table by running the command below in HUE or the Hive shell:
show create table <table-name>;
Then check that HDFS location for zero-byte files and remove them:
hdfs dfs -rm -skipTrash $(hdfs dfs -ls -R <hdfs_location> | grep -v "^d" | awk '{if ($5 == 0) print $8}')
Re-run the query; it should now succeed.
09-12-2018
01:13 PM
@rabbit s Reducing the memory specs for the Spark executors will reduce the total memory consumed, which should eventually allow more jobs (new threads) to be spun up...
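For illustration, a minimal PySpark sketch of trimming executor sizing; the property values and application name below are placeholders, not recommendations from the post:

```python
from pyspark.sql import SparkSession

# Illustrative only: executor sizes are placeholders, tune them for your cluster.
spark = (
    SparkSession.builder
    .appName("smaller-executors")
    .config("spark.executor.memory", "2g")    # lower per-executor heap
    .config("spark.executor.cores", "1")      # fewer cores per executor
    .config("spark.executor.instances", "4")  # cap the number of executors
    .getOrCreate()
)
```

With smaller executors, each job claims less of the cluster, leaving headroom for additional concurrent jobs.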
08-08-2018
07:29 AM
Thanks @Bryan Bende. Will try it and report back...
04-23-2018
08:13 PM
1 Kudo
@Joshua Adeleke - Just want to add some details here for others who may be seeing the same behavior.
- There are two processes that make up a running NiFi. When you start NiFi, you are really starting the NiFi bootstrap process (this is the process that Ambari in HDF monitors). The bootstrap process is responsible for kicking off a second Java process that runs the main application. This second process may take several minutes to completely start on every node in a NiFi cluster.
- As each node starts, it reaches the point where the cluster can be formed. Each node asks ZooKeeper whether the cluster already has an elected flow, cluster coordinator, and primary node. If none exist, an election begins. The election is held for 5 minutes (default) or until all nodes have connected, based on the configured number of election candidates (HDF sets this for you in the nifi.properties file; otherwise it is blank). The relevant properties are sketched below.
- Once a flow, cluster coordinator, and primary node have been chosen, the election is over and the UI becomes available. At the same time, nodes send heartbeats to the elected cluster coordinator to join the cluster. Once all nodes have joined, the UI will show all nodes as connected.
- NOTE: Very large queue backlogs in your flow can extend the time it takes for a NiFi node to come up and join the cluster, since NiFi must load and parse all of that FlowFile information from the FlowFile repository and place it back into the designated queues in your dataflows.
Thanks, Matt
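A minimal sketch of the nifi.properties entries that govern that election; the 5-minute wait is the default mentioned above, while the candidate count shown is an assumed example you would set to your actual node count:

```
# Flow election: wait up to this long, or finish early once the configured
# number of candidate nodes have connected and voted on the flow.
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=3
```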
02-27-2018
03:19 PM
2 Kudos
Prakash Punj - Reading the audit source from HDFS is not supported on the Ranger side; however, you can still store audits in HDFS through the plugins. So if you want audits to appear in the Ranger UI, you need to change the audit destination to Solr and have the plugins write the audits to Solr.
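As a rough sketch of what that change looks like at the plugin level (property names from the Ranger audit framework; the ZooKeeper string and HDFS path are placeholders for your environment):

```
<!-- In the component's Ranger audit configuration; values below are placeholders -->
<property>
  <name>xasecure.audit.destination.solr</name>
  <value>true</value>  <!-- send audits to Solr so they show up in the Ranger UI -->
</property>
<property>
  <name>xasecure.audit.destination.solr.zookeepers</name>
  <value>zk1:2181,zk2:2181/ranger_audits</value>
</property>
<property>
  <name>xasecure.audit.destination.hdfs</name>
  <value>true</value>  <!-- optionally keep a long-term copy of the audits in HDFS -->
</property>
<property>
  <name>xasecure.audit.destination.hdfs.dir</name>
  <value>hdfs://namenode:8020/ranger/audit</value>
</property>
```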
11-10-2017
09:13 AM
1 Kudo
If your text always starts and ends with a ", then you can probably use the transformations below:
text.map(lambda x: (1, x)).reduceByKey(lambda x, y: ' '.join([x, y])).map(lambda x: x[1][1:-1]).flatMap(lambda x: x.split('" "')).collect()
where text represents an RDD that reads quoted lines such as
"The csv file is about to be loaded into Phoenix" "another line to parse"
but, because the file is split on \n while loading, receives them as fragments like ['"The csv', 'file is about', 'to be loaded into', 'Phoenix"', '"another line', 'to parse"']. The reduce joins the fragments back into a single line, the slice strips the outer quotes, and the split on '" "' leaves you with a list of the portions between successive "s.
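A self-contained sketch of the same approach, assuming a local SparkContext and an in-memory list standing in for the file (the names here are illustrative):

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Stand-in for the file: the quoted sentences arrive split across records.
lines = sc.parallelize([
    '"The csv', 'file is about', 'to be loaded into', 'Phoenix"',
    '"another line', 'to parse"',
], numSlices=1)

parsed = (
    lines.map(lambda x: (1, x))                       # one key so every fragment reduces together
         .reduceByKey(lambda x, y: ' '.join([x, y]))  # stitch the fragments back into one string
         .map(lambda x: x[1][1:-1])                   # drop the outermost quotes
         .flatMap(lambda x: x.split('" "'))           # split on the quote-space-quote boundaries
         .collect()
)
# Typically: ['The csv file is about to be loaded into Phoenix', 'another line to parse']
# (the reduce relies on the fragments being combined in their original order)
```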