Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2507 | 04-27-2020 03:48 AM |
| | 4975 | 04-26-2020 06:18 PM |
| | 4056 | 04-26-2020 06:05 PM |
| | 3287 | 04-13-2020 08:53 PM |
| | 5014 | 03-31-2020 02:10 AM |
02-21-2018
05:52 AM
1 Kudo
@Gopal Mehakare
You can use either the ExecuteStreamCommand processor, which accepts incoming connections, or the ExecuteProcess processor, which does not accept incoming connections, to execute Linux commands, depending on your requirements.

ExecuteStreamCommand configs:
Let's consider input flowfile content as follows:
hi
hcc
nifi
The output from the ExecuteStreamCommand processor would be just "hi", because we are executing the bash command head -1 on the input flowfile content, and the output stream relationship transfers "hi" as the new flowfile content.
Output: hi

ExecuteProcess configs:
The success relationship flowfile content will have just "hi" in it, as we are executing echo, and this processor can run on its own (it needs no incoming connection).
Output: hi

For more reference:
https://community.hortonworks.com/questions/150122/zip-folder-using-nifi.html
https://stackoverflow.com/questions/42443101/nifi-how-to-reference-a-flowfile-in-executestreamcommand
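As a postscript, the same behavior can be reproduced in a plain shell as a sanity check. This is only an illustrative sketch of the commands the two processors run, not the processors themselves:

```bash
# Emulate ExecuteStreamCommand: the flowfile content is streamed to the command's stdin
printf 'hi\nhcc\nnifi\n' | head -1   # prints: hi

# Emulate ExecuteProcess: the command runs on its own, with no input stream
echo hi                              # prints: hi
```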
02-21-2018
07:56 AM
@Jay Kumar SenSharma Hi Jay,
curl -v "http://amb25102.example.com:6188/ws/v1/timeline/metrics?metricNames=bytes_in._rate._avg&hostname=&appId=HOST&instanceId=&startTime=1451630974&endTime=1519110315"
This REST API is not working for all kinds of metrics. I replaced bytes_in._rate._avg with master.Server.numDeadRegionServers:
curl -v "http://amb25102.example.com:6188/ws/v1/timeline/metrics?metricNames=master.Server.numDeadRegionServers&hostname=&appId=HOST&instanceId=&startTime=1451630974&endTime=1519110315"
but I am not able to get the metric results.
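One hedged way to narrow this down (my addition, not from the original thread): the Metrics Collector exposes a metadata endpoint that lists every metric name and the appId it is reported under, which helps confirm whether master.Server.numDeadRegionServers is registered under appId=HOST or under a service-specific appId:

```bash
# List the metric metadata known to the Ambari Metrics Collector and search
# for the metric in question (collector host/port taken from the post above)
curl -s "http://amb25102.example.com:6188/ws/v1/timeline/metrics/metadata" \
  | python -m json.tool | grep -i -B2 -A2 "numDeadRegionServers"
```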
11-12-2018
06:40 AM
@Jay Kumar SenSharma
I am also facing the same issue. However, in my case I see that all packages are installed and yum.log is clean, i.e. no errors.
ambari=> select * from host_version;
id | repo_version_id | host_id | state
----+-----------------+---------+----------------
8 | 2 | 1 | CURRENT
9 | 2 | 5 | CURRENT
13 | 2 | 3 | CURRENT
12 | 2 | 2 | CURRENT
14 | 2 | 4 | CURRENT
11 | 2 | 7 | CURRENT
10 | 2 | 6 | CURRENT
62 | 52 | 2 | INSTALL_FAILED
63 | 52 | 3 | INSTALL_FAILED
58 | 52 | 1 | INSTALL_FAILED
64 | 52 | 4 | INSTALL_FAILED
59 | 52 | 5 | INSTALL_FAILED
61 | 52 | 7 | INSTALL_FAILED
60 | 52 | 6 | INSTALL_FAILED
(14 rows)
The new target version shows INSTALL_FAILED even though the packages are installed on all nodes, and I cannot get to the upgrade prompt.
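As a hedged aside (my addition): the same per-host state can be read through the Ambari REST API instead of querying the database directly; the cluster name and admin credentials below are placeholders:

```bash
# List stack/repository versions and their installation state via the Ambari API
# (CLUSTER_NAME and admin:admin are placeholders)
curl -s -u admin:admin \
  "http://ambari-server:8080/api/v1/clusters/CLUSTER_NAME/stack_versions?fields=repository_versions/RepositoryVersions/version,ClusterStackVersions/state"
```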
02-02-2018
11:01 PM
I still face the issue. I'm doing a non-root installation. I built psutil as root, which went fine, but when I try to restart the Metrics Monitor it fails.
02-01-2018
09:17 AM
OK, so in order to avoid this alert we can either raise "Growth Rate" from 20% to 40% or increase the minimum capacity from 1000 to 5000. Which is the better option?
02-13-2018
02:00 AM
After further analysis I found the following, and after changing these settings the "MR2 service check / Container killed on request. Exit code is 143" error went away.
1) yarn-site.xml:
=> The initial container could not allocate memory; yarn.scheduler.minimum-allocation-mb was only 178 MB and yarn.scheduler.maximum-allocation-mb only 512 MB.
=> The HDFS block size is 128 MB, so since the initial container could not allocate, I increased the minimum/maximum allocation to multiples of the 128 MB block size, as below.
=> Changed yarn.scheduler.minimum-allocation-mb from 178 to 512 MB and yarn.scheduler.maximum-allocation-mb from 512 to 1024 MB in yarn-site.xml.
2) mapred-site.xml:
Once the above parameters were changed in yarn-site.xml, the following parameters had to be changed in mapred-site.xml:
=> mapreduce.task.io.sort.mb from 95 to 286 MB, and mapreduce.map.memory.mb / mapreduce.reduce.memory.mb to 512 MB.
=> yarn.app.mapreduce.am.resource.mb from 170 to 512 MB.
Increasing these parameter values in multiples of the 128 MB block size got us past the container-killed error. We made the changes to yarn-site.xml and mapred-site.xml through Ambari, due to resource constraints on the existing cluster, until the error was gone. The same rule applies to get out of the following error:
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
INFO mapreduce.Job: Counters: 0
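Purely as an illustration (my sketch, not from the original post): the same properties can be set from the Ambari server host with the configs.py script that recent Ambari releases ship; the cluster name, host, and credentials here are placeholders, and the flags may differ across Ambari versions:

```bash
# Hypothetical example: apply the YARN container sizes via Ambari's bundled configs.py
CFG=/var/lib/ambari-server/resources/scripts/configs.py
python $CFG -u admin -p admin -l ambari-server -t 8080 -n CLUSTER_NAME \
  -a set -c yarn-site -k yarn.scheduler.minimum-allocation-mb -v 512
python $CFG -u admin -p admin -l ambari-server -t 8080 -n CLUSTER_NAME \
  -a set -c yarn-site -k yarn.scheduler.maximum-allocation-mb -v 1024
```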
04-19-2018
03:13 PM
@Ashikin Hi Ashikin, I am having the same issue you had. How did you solve it? It would be a great help if you could share the steps. Thank you.
01-27-2018
08:43 PM
@Prateek Behera When exactly do you see this error? What is the heap size of your NameNode? The NameNode heap size depends on many factors, such as the number of files, the number of blocks, and the load on the system. Please check the following link for the recommended NameNode heap sizes and verify your NameNode heap settings: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_command-line-installation/content/configuring-namenode-heap-size.html
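A quick way to see the heap the NameNode is actually running with (my addition; run this on the NameNode host):

```bash
# Show the -Xms/-Xmx flags the running NameNode process was started with
# (the [n] bracket trick keeps grep from matching its own process)
ps -ef | grep -i '[n]amenode' | grep -o -e '-Xm[sx][^ ]*'
```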
02-05-2018
06:56 AM
@Jay Kumar SenSharma: Thanks for the response; it solved the issue. I moved the lib jar to /usr/local/bin and gave ownership and permission to the metron user account. Now it works.
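For anyone following along, a sketch of the steps described above (the jar filename and source path are hypothetical, since the post doesn't name them):

```bash
# Move the library jar and give the metron account ownership and permissions
# (some-lib.jar and /path/to are placeholders)
mv /path/to/some-lib.jar /usr/local/bin/
chown metron:metron /usr/local/bin/some-lib.jar
chmod 755 /usr/local/bin/some-lib.jar
```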
01-30-2018
09:55 AM
The issue appears to be in the Kerberos ticket check: the HiveMetastore wasn't using the tickets. I installed HDP 2.5.3.0+ with the same configs and it worked.
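A hedged diagnostic sketch (my addition; the keytab path is the common HDP default, so adjust it if your layout differs) to confirm the Metastore's principal can actually obtain a ticket:

```bash
# Inspect the Hive service keytab and fetch a ticket the way the Metastore would
klist -kt /etc/security/keytabs/hive.service.keytab
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)
klist
```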