Member since
02-18-2016
141
Posts
19
Kudos Received
18
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3651 | 12-18-2019 07:44 PM |
| | 3681 | 12-15-2019 07:40 PM |
| | 1395 | 12-03-2019 06:29 AM |
| | 1414 | 12-02-2019 06:47 AM |
| | 4239 | 11-28-2019 02:06 AM |
11-28-2019
12:47 AM
1 Kudo
Hi @laplacesdemon I agree with you: applications and third-party tools/components should definitely be installed outside the cluster, or on a dedicated new node, to avoid major performance impacts. How to manage those components when the Hadoop version changes is really more of a DevOps question, I feel. You should always keep an inventory of the applications running alongside your ecosystem components, together with their dependencies. You can also use Nexus as a centralized repository from which new versions are fetched and then deployed on the application side (i.e. Oracle Data Integrator and JupyterHub) with the help of Jenkins or another deployment tool. In my experience, installing applications on edge nodes leads to resource-related problems, so I would suggest that is not a good idea. Do revert if you have further points to highlight.
11-25-2019
07:54 PM
Hi @Caranthir Can you try disabling and re-enabling the plugin? Enabling the plugin adds/modifies a number of HDFS properties: can you check whether those properties are set properly after you enable the Ranger plugin for HDFS again? Also, as already mentioned by @Shelton, the repository config user must be configured if you are working in a Kerberized environment. If you still cannot see the repository in the Ranger UI, you can click the add (+) symbol to add the repository manually, specify the NameNode and other details, and use "Test Connection". Monitor the Ranger and NameNode logs while you test the connection; if the connection fails you will see the errors in the logs. Please post further updates.
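If it helps, one quick way to confirm whether the plugin wiring actually landed in hdfs-site.xml is to grep for the Ranger authorizer class. This is a minimal sketch: the class name shown is what the HDFS plugin normally injects, but treat the exact class name and file path as assumptions to verify against your own stack.

```shell
# Sketch: check whether the Ranger HDFS authorizer is referenced in hdfs-site.xml.
# HDFS_SITE is a placeholder path; adjust it for your cluster layout.
HDFS_SITE="${HDFS_SITE:-/etc/hadoop/conf/hdfs-site.xml}"

check_ranger_hdfs() {
  # Succeeds if the Ranger authorizer class appears in the given file.
  grep -q "org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer" "$1" 2>/dev/null
}

if check_ranger_hdfs "$HDFS_SITE"; then
  echo "Ranger HDFS plugin appears wired into $HDFS_SITE"
else
  echo "Ranger authorizer not found in $HDFS_SITE (plugin may be disabled)"
fi
```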
11-25-2019
01:53 AM
1 Kudo
@Kou_Bou I suspect the file is either not fully downloaded or corrupted. Check the size or checksum of the downloaded file. You can try downloading the file again from https://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8-2177648.html. The size displayed on that page for jdk-8u144-linux-x64.tar.gz is 176.92 MB.
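A quick way to sanity-check a download is to compare the byte size and a checksum against what the download page publishes. A minimal sketch; the expected values passed in are placeholders, and the demo runs on a tiny stand-in file rather than the real JDK archive:

```shell
# Sketch: verify a downloaded archive by byte size and SHA-256 checksum.
# The expected values are placeholders; take the real ones from the download page.
verify_download() {
  file="$1"; expected_bytes="$2"; expected_sha256="$3"
  actual_bytes=$(wc -c < "$file")
  actual_sha256=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual_bytes" -ne "$expected_bytes" ]; then
    echo "size mismatch: got $actual_bytes bytes, expected $expected_bytes"
    return 1
  fi
  if [ "$actual_sha256" != "$expected_sha256" ]; then
    echo "checksum mismatch"
    return 1
  fi
  echo "download looks intact"
}

# Demo on a tiny stand-in file (use jdk-8u144-linux-x64.tar.gz for real):
demo=$(mktemp)
printf 'hello' > "$demo"
verify_download "$demo" 5 "$(sha256sum "$demo" | awk '{print $1}')"
```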
11-24-2019
05:50 PM
@Kou_Bou Thank you for the detailed output. As suspected, the issue is with Java. Please try using Oracle Java and test. Also, as highlighted by @Shelton, I agree you should use a supported version and change Java per the suggested steps. Do revert if you still face the issue.
11-22-2019
02:06 AM
Hi @m4x1m1li4n Can you confirm: are the IPs below, as defined in /etc/hosts, public or private addresses?

13.48.140.49
13.48.181.38
13.48.185.39
13.53.62.160
13.48.18.0

The hostname defined in the Cloudera agent config.ini appears to be a public hostname: "ec2-13-48-140-49.eu-north-1.compute.amazonaws.com". Can you try pointing both /etc/hosts and config.ini [hostname] to the private IP within the cluster and then restart the agent? I suspect the issue is with the public DNS.
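To tell at a glance whether entries like these are public or private, you can classify each address against the RFC 1918 private ranges. A minimal sketch over a few sample addresses (swap in the real entries from your /etc/hosts); the pattern matching is deliberately simple and assumes well-formed IPv4 addresses:

```shell
# Sketch: classify IPv4 addresses as private (RFC 1918) or public.
is_private_ip() {
  case "$1" in
    10.*|192.168.*)                          return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  return 0 ;;
    *)                                       return 1 ;;
  esac
}

# Sample entries modeled on the hosts file in question:
for ip in 13.48.140.49 10.0.0.12 192.168.1.5; do
  if is_private_ip "$ip"; then
    echo "$ip private"
  else
    echo "$ip public"
  fi
done
```

The 13.48.x.x addresses fall outside all three private ranges, which is why they are classified as public.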
11-20-2019
07:19 PM
Hi @Kou_Bou Can you please work through the steps below and paste the output here?

1. Check the Java version:
$ java -version
$ ls -ltr `which java`
$ rpm -qa | grep openssl
2. If multiple Java versions exist, check whether there is a conflict. You can verify this with the alternatives command (please do not change anything here; leave the default Java as it is):
$ /usr/sbin/alternatives --config java
3. Is only one host failing to register in the cluster, or are multiple hosts affected?
4. If the Java in use is OpenJDK, try switching to Oracle JDK and test. You can change Java with:
$ ambari-server setup -j <jdk path>
5. Please revert with the Ambari server and Ambari agent versions:
$ rpm -qa | grep ambari
6. From the agent node, try telnet to the master on port 8440 (check iptables/SELinux rules):
$ telnet <ambari-server> 8440
7. Paste the latest ambari-agent config.ini file.
8. Do revert with the latest error/std logs if the issue still exists.
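The checks above can be collected into one small script run on the agent host. This is only a sketch: the server hostname is a placeholder, each command is guarded so a missing tool is reported instead of aborting, and the port probe uses bash's /dev/tcp so telnet need not be installed.

```shell
# Sketch: gather the Ambari agent registration checks in one pass.
AMBARI_SERVER="${AMBARI_SERVER:-ambari-server.example.com}"   # placeholder host

run_check() {
  desc="$1"; shift
  echo "== $desc =="
  if command -v "$1" >/dev/null 2>&1; then
    "$@" 2>&1
  else
    echo "($1 not available on this host)"
  fi
}

run_check "Java version"     java -version
run_check "OpenSSL packages" sh -c "rpm -qa | grep -i openssl"
run_check "Ambari packages"  sh -c "rpm -qa | grep -i ambari"

# Port 8440 reachability; bash's /dev/tcp avoids needing telnet installed.
if (exec 3<>"/dev/tcp/$AMBARI_SERVER/8440") 2>/dev/null; then
  echo "port 8440 reachable from this host"
else
  echo "port 8440 NOT reachable from this host"
fi
```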
11-20-2019
01:44 AM
@Kou_Bou Can you try setting the property below in the ambari-agent config file?

$ vi /etc/ambari-agent/conf/ambari-agent.ini

Note: add the line under the [security] section, as below:

[security]
force_https_protocol=PROTOCOL_TLSv1_2

Save and exit, then restart ambari-agent. Please check if that works.
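If you prefer to script the edit rather than open vi, the line can be appended right after the [security] header. A sketch that demonstrates against a sample copy (the INI path is a placeholder; back up the real file and point INI at /etc/ambari-agent/conf/ambari-agent.ini when ready). The append syntax assumes GNU sed.

```shell
# Sketch: insert force_https_protocol under [security] in an ambari-agent ini.
INI="${INI:-$(mktemp)}"   # placeholder; use the real ambari-agent.ini on the host

add_tls_protocol() {
  # Idempotent: do nothing if the setting is already present.
  grep -q '^force_https_protocol=' "$1" && return 0
  sed -i '/^\[security\]/a force_https_protocol=PROTOCOL_TLSv1_2' "$1"
}

# Demo against a minimal sample file:
printf '[security]\nkeysdir=/var/lib/ambari-agent/keys\n' > "$INI"
add_tls_protocol "$INI"
cat "$INI"
```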
11-19-2019
10:44 PM
@anshuman Node labels are not yet available for the FairScheduler in CDH. Per the latest JIRA update (Sep 2019) upstream, https://issues.apache.org/jira/browse/YARN-2497, node labels are still not included in CDH. However, they may arrive with HDP 3.x / CDH 6.x, per this link: https://archive.cloudera.com/cdh6/6.0.0/docs/hadoop-3.0.0-cdh6.0.0/hadoop-yarn/hadoop-yarn-site/NodeLabel.html
11-19-2019
02:18 AM
1 Kudo
@Manoj690 Try the two options below:
1. Check whether the "/var/run/ambari-metrics-collector/" directory exists with ownership ams:hadoop. If yes, go to option 2. If not, try creating the directory and check whether AMS starts up.
2. Delete the AMS service and its components, from the CLI as well:
$ rpm -qa | grep ams
Remove all AMS components, then reinstall AMS.
Let me know if that works. Also, please share the new logs as a text-file attachment; that is the easiest way to keep the log formatting readable at the remote end.
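Option 1 can be sketched roughly as below. Note the assumptions: RUN_DIR defaults to a temporary path for the demo (on the real host it would be /var/run/ambari-metrics-collector, which requires root, as does the chown), and the chown is skipped when no ams user exists.

```shell
# Sketch: make sure the AMS collector run directory exists with the right owner.
# RUN_DIR defaults to a temp path for this demo; on the real host set it to
# /var/run/ambari-metrics-collector (requires root, as does the chown).
RUN_DIR="${RUN_DIR:-$(mktemp -d)/ambari-metrics-collector}"
OWNER="ams"; GROUP="hadoop"

ensure_run_dir() {
  if [ ! -d "$1" ]; then
    mkdir -p "$1" || return 1
    echo "created $1"
  fi
  if id "$OWNER" >/dev/null 2>&1; then
    # Only attempt the chown when the ams user actually exists on this host.
    chown "$OWNER:$GROUP" "$1" 2>/dev/null || echo "chown failed (are you root?)"
  else
    echo "user $OWNER not present; skipping chown"
  fi
}

ensure_run_dir "$RUN_DIR"
```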
11-19-2019
12:04 AM
@Manoj690
1. What error are you getting after following the link https://community.cloudera.com/t5/Support-Questions/Ambari-metircs-not-started/m-p/283228#M210525 ? Is the error you pasted from starting AMS via Ambari? Did you try starting it from the CLI/backend with the command "ambari-metrics-collector start"? Make sure you stop the service properly, kill any leftover PID, and then start it.
2. For Phoenix: Phoenix comes as part of HBase and is enabled or disabled from the HBase configs. It ships with the HDP bits, located in:
/usr/hdp/current/phoenix-client
/usr/hdp/current/phoenix-server
Why do you want to completely uninstall Phoenix? Please share details so we can understand your goal and suggest a workaround if one exists.
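The "stop properly, kill any leftover PID, then start" sequence can be sketched generically with a pidfile helper. This is only an illustration: it is demonstrated with a dummy background process and a placeholder pidfile path standing in for the real collector.

```shell
# Sketch: stop a service by pidfile, escalating to kill -9 if it lingers.
stop_by_pidfile() {
  pidfile="$1"
  [ -f "$pidfile" ] || { echo "no pidfile, nothing to stop"; return 0; }
  pid=$(cat "$pidfile")
  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
    sleep 1
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"   # escalate if still alive
  fi
  rm -f "$pidfile"
  echo "stopped pid $pid"
}

# Demo with a dummy process standing in for the metrics collector:
sleep 300 &
echo $! > /tmp/demo-collector.pid
stop_by_pidfile /tmp/demo-collector.pid
```

After a clean stop like this, "ambari-metrics-collector start" should no longer find a stale PID in its way.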