Member since: 03-29-2018
Posts: 41
Kudos Received: 4
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 11309 | 01-24-2018 10:43 AM |
 | 2724 | 11-17-2017 02:41 PM |
06-27-2021
01:11 AM
@pvishnu 1) The "Next" button is enabled and I can select it. It's just that I want to install the NiFi service on a new host and that option is greyed out. 2) I haven't checked the logs. 3) I have already done that and it didn't work.
06-26-2021
10:33 PM
I am using the "admin" account to add a new service through the Ambari UI. Everything is greyed out. Does anyone know why, and how to solve this issue?
Labels:
- Apache Ambari
03-30-2018
02:55 PM
@Aishwarya Sudhakar Could you please share the command with us?
03-19-2018
02:05 PM
@Vinay K Thanks for confirming the time sync. The probable cause could be your firewall blocking TCP communication on the network, or another process on a host using the Hadoop-designated ports. Reason for this conclusion: the first log you shared has the same RPC connection problem, because of which it was not able to roll the edit logs. Now you have shared the NameNode and ZooKeeper logs, which clearly show the error message "connection refused". For this type of problem, even increasing your timeout to unlimited is not going to help. Check the below things for me (a consolidated sketch of these checks follows this list):
a) Check if firewall/iptables are turned off. If not, turn them off with the command shared earlier: systemctl stop firewalld
b) Check if you are able to ping the slave nodes from the master node using the ping command: ping <ip_address>
c) Check if your host is able to resolve the hostname and IP address of the slave nodes (ping using the hostnames of the slaves). Run these commands from ANN -> JN, ANN -> ZK, and ANN -> SNN:
ping <hostname>
dig -x <ip_address_of_slave>
d) Did you check whether the ports are open and not being used by any other process? Use the command shared in my previous post.
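A minimal consolidated sketch of checks a) to d), run from the Active NameNode; jn1.example.com is a placeholder hostname, 10.10.20.5 is the journal node IP from the logs in this thread, and 8485 is assumed to be the JournalNode RPC port:
systemctl status firewalld            # confirm firewalld is stopped/disabled
ping -c 3 jn1.example.com             # ANN -> slave reachability by hostname
dig -x 10.10.20.5                     # reverse lookup of the slave's IP
netstat -ntlp | grep 8485             # check whether another process already holds the port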
03-19-2018
11:11 AM
@Vinay K Problem: there are two problems as per the logs shared.
1) The Active NameNode rolls the edit logs on its local disk and sends them to the journal nodes over an RPC call; the ANN uses a flush function, much like the native write call in Unix/Linux, to write to the journal nodes. So the fatal error shown in the log file could be due to the following (a short sketch of these checks follows this post):
a) As per the log, the problem is with only one journal node (10.10.20.5). The JournalNode process might not be running on that host. You can check it using: ps -eaf | grep journal
b) The RPC port is not actively listening on the journal node; check it using the below command: netstat -ntlp | grep 8485
c) Stop the firewalld/iptables services on the ANN (Active NameNode) and also on the journal node, to make sure they are not blocking the RPC call: systemctl stop firewalld
d) Another probable cause could be that the disk is heavily busy on that specific JN, which results in the timeout. Check that using the iostat command in Linux/Unix, and look at the disk I/O where your edit logs are being saved: iostat
2) The second error is due to the problem with this journal node (10.10.20.5). Once you rectify the problem with this journal node, I think you will be sorted.
Also, one more thing to add: please check that the time on all journal nodes is the same and in sync. If you have the NTP service running on your servers, please check that the NTP server is picking up the right time. You can check the time on these nodes using the date command: date
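A minimal sketch of the JournalNode checks above, aimed at the node from the logs (10.10.20.5); running them over ssh is an assumption about how the hosts are reached:
ssh 10.10.20.5 'ps -eaf | grep -i [j]ournalnode'   # is the JournalNode process running?
ssh 10.10.20.5 'netstat -ntlp | grep 8485'         # is the RPC port listening?
ssh 10.10.20.5 'iostat -x 2 3'                     # is the edit-log disk heavily busy?
ssh 10.10.20.5 'date'                              # is the clock in sync with the ANN?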
02-21-2018
10:10 PM
@Jay Kumar SenSharma Is there any specific reason for the large number of symlinks to binaries in the conf and etc directories? It's quite confusing sometimes. Do you know why it's been designed this way?
02-05-2018
12:16 PM
@Mohan V Try making the below changes as per the link provided (a short verification sketch follows):
listen 0.0.0.0:80
listen 0.0.0.0:8080
Here is a link to the documentation which could be helpful: http://httpd.apache.org/docs/2.2/bind.html
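A brief sketch of validating and applying the change above, assuming Apache httpd on a systemd-based host (the service name httpd is an assumption):
apachectl configtest                   # syntax-check httpd.conf (or: httpd -t)
systemctl restart httpd                # apply the new listen directives
netstat -ntlp | grep -E ':(80|8080) '  # confirm httpd now listens on both ports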
01-24-2018
10:43 AM
@Mudassar Hussain If you are running a Hadoop cluster on your AWS EC2 instance and are trying to create a file or folder, then below are the commands to achieve that: su - hdfs
hdfs dfs -mkdir /Mark
hdfs dfs -mkdir /Mark/Cards
Now, to create a file in the specified folder you have to use the touch command (a quick verification follows): hdfs dfs -touchz /Mark/Cards/largedeck.txt Here are some links which will help you understand the same: Apache link for shell commands, HCC link, HDP docs link
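A quick check, using only the paths from the commands above, to confirm the folders and the empty file were created:
hdfs dfs -ls -R /Mark                  # should list Cards/ and a zero-byte largedeck.txt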
01-10-2018
10:17 AM
@Muthukumar S As suggested by Geoffery, enable your ACL setting to make it work. Furthermore, this is the link from the HDP site regarding ACLs; it covers all the basics of how to enable, set up, and check ACLs on Hadoop.
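On HDP this usually means setting dfs.namenode.acls.enabled=true in hdfs-site.xml (via Ambari) and restarting HDFS; after that, ACLs can be managed from the shell. A minimal sketch with a hypothetical path and user:
hdfs dfs -setfacl -m user:muthu:rwx /data/shared   # grant an extra user rwx on the directory
hdfs dfs -getfacl /data/shared                     # verify the ACL entries took effect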
11-29-2017
02:55 PM
1 Kudo
@Prabin Silwal
In Linux, every process creates a file descriptor entry in its process table for every opened file or input/output stream. Every user in a Linux/Unix system therefore has a certain limit on open file descriptors, which is normally set to 1024. If a user process tries to exceed the defined FDs, it fails with this type of error. To mitigate this issue you can increase the FD limit for the user executing the NiFi process on your OS. To do this, use the "ulimit" command. To check the open FD limit for a user: log in with the account under which your NiFi process normally runs and run the command mentioned below.
# To check the FD limit for the specific user only
$ ulimit -n
# Another way of checking the limit
# This one is system wide
$ cat /proc/sys/fs/file-max
Or you may do this to check the number of FDs currently open by the process, using the below steps:
Find the process ID of the NiFi process on the Linux machine. You may use "ps -eaf" and grep for the pattern. Once you have the process ID, run the below command to take a count of the open FDs.
# This is for the specific process only
$ cd /proc/{Process_ID}/fd; ls -l | wc -l
# To find out how many of the available file descriptors are currently being used, run the following command:
# This is system wide
$ cat /proc/sys/fs/file-nr
So, using the above two methods you can check the limit and the FDs currently open by a specific process. Once you have the count during the busiest run, you will know the limit to which you must adjust the FDs for that process. How do I change the FD limit for a user (a short sketch of verifying the change follows this post):
# Open the limits.conf file and make the below change
$ vi /etc/security/limits.conf
## Example hard limit for max opened files
# Example lines below for nifi_user, setting the hard and soft limits for max opened files to 50000
nifi_user hard nofile 50000
nifi_user soft nofile 50000
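A quick check after editing limits.conf, assuming PAM applies the limits to new sessions: log in again as nifi_user and confirm both values before restarting NiFi under that account.
su - nifi_user -c 'ulimit -Hn; ulimit -Sn'   # both should now print 50000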