Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2730 | 04-27-2020 03:48 AM |
| | 5288 | 04-26-2020 06:18 PM |
| | 4458 | 04-26-2020 06:05 PM |
| | 3584 | 04-13-2020 08:53 PM |
| | 5385 | 03-31-2020 02:10 AM |
03-08-2018
11:18 AM
@Vishal Gupta
1. Can you please share the exact set of tags inside "server.xml" where you made the port changes?
2. On the Tomcat host, are you able to see that the port is listening and bound properly?
# netstat -tnlpa | grep 12123
# netstat -tnlpa | grep 2222
3. Do you see any errors inside "catalina.out"?
4. Is there any firewall-level restriction blocking access to port 12123? Please check the firewall rules (they may be set so that only port 2222 is accessible).
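As a rough sketch, the listening check in step 2 can be scripted. This assumes the standard Linux `netstat -tnl` output format; the port numbers are the ones from the question and should be replaced with whatever is configured in "server.xml":

```shell
# Sketch: check whether a given TCP port is in LISTEN state,
# based on the standard Linux `netstat -tnl` output format
# (field 4 is the local address, e.g. "0.0.0.0:12123").
port_listening() {
  netstat -tnl 2>/dev/null | awk -v p=":$1\$" '$4 ~ p {found=1} END {exit !found}'
}

# Ports taken from the question above; adjust as needed.
for port in 12123 2222; do
  if port_listening "$port"; then
    echo "port $port is listening"
  else
    echo "port $port is NOT listening"
  fi
done
```

If a port is not listening at all, the problem is on the Tomcat side (server.xml or catalina.out); if it is listening but unreachable remotely, the firewall is the more likely suspect.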
03-08-2018
09:30 AM
1 Kudo
@Leonardo Apolonio You might be using Ambari 2.6.1.3. The "HostCleanup.py" script has been moved to the following location:
# ls -l /usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py
-rwxr-xr-x. 1 root root 22464 Feb 6 13:03 /usr/lib/ambari-agent/lib/ambari_agent/HostCleanup.py
For details of the change, please refer to: https://issues.apache.org/jira/browse/AMBARI-22830 The code changes can be seen here: https://github.com/apache/ambari/pull/229/files
03-08-2018
09:22 AM
@Soungno Kim This looks like a duplicate thread of: https://community.hortonworks.com/questions/176319/hiveserver2-does-not-start-after-installing-hdp-26-1.html?childToView=177172#answer-177172 Please close one.
03-08-2018
09:19 AM
@Soungno Kim It looks like the cause of the failure is the following error:
Diagnostics: ExitCodeException exitCode=2: tar: Removing leading `/' from member names
tar: Skipping to next header
gzip: /hadoopfs/fs1/yarn/nodemanager/filecache/60_tmp/tmp_tez.tar.gz: invalid compressed data--format violated
Can you please check whether you are able to untar that file manually? (This is to verify whether the tar archive is actually corrupted.) I suspect that the archive "/hadoopfs/fs1/yarn/nodemanager/filecache/60_tmp/tmp_tez.tar.gz" is corrupted, so please try clearing the filecache directory and then try again.
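A minimal way to test the archive manually is to list its contents with `tar -tzf`, which decompresses and walks the whole archive without extracting anything, so it fails on the kind of corruption shown in the error above. The path is the one from the error message; adjust it if your filecache layout differs:

```shell
# Sketch: verify the integrity of a tar.gz archive without extracting it.
# Path taken from the error message above; substitute your own.
ARCHIVE="/hadoopfs/fs1/yarn/nodemanager/filecache/60_tmp/tmp_tez.tar.gz"
if tar -tzf "$ARCHIVE" > /dev/null 2>&1; then
  echo "archive looks intact"
else
  echo "archive is missing or corrupted"
fi
```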
03-08-2018
06:07 AM
@Bhushan Kandalkar You might be hitting the issue described in this article: https://community.hortonworks.com/content/supportkb/150303/after-enabling-ssl-for-yarn-and-hdfsthe-nodes-for.html Please add the respective certificate to the Ambari server truststore.
03-08-2018
01:09 AM
2 Kudos
@Marcelo Dotti The problem I see here is that you are not accessing the Ambari UI directly. The "https://........:443" in your URL would indicate that your Ambari is SSL-enabled and listening on port 443, but I do not think SSL on port 443 is enabled on your Ambari. More likely something else, such as an "nginx" proxy, is running on port 443 in front of Ambari, listening for client requests on the secure port and rejecting large uploads made with the PUT method. Can you please confirm your Ambari version and whether there is any proxy in between?
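If an nginx proxy does sit in front of Ambari, the 413 is typically caused by nginx's default `client_max_body_size` of 1 MB, which matches the symptom of uploads over 1 MB failing. A hedged sketch of the fix follows; the server name, ports, and upstream address are placeholders, not values from this thread:

```nginx
# Assumption: nginx is the reverse proxy in front of Ambari on port 443.
# nginx's default client_max_body_size is 1m, which produces
# "413 Request Entity Too Large" on bigger PUT uploads.
server {
    listen 443 ssl;
    server_name ambari.example.com;        # placeholder host name

    location / {
        client_max_body_size 100m;         # raise the limit (0 disables the check)
        proxy_pass http://127.0.0.1:8080;  # assumed Ambari server address
    }
}
```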
03-08-2018
12:09 AM
@Marcelo Dotti 1 MB is too small. I have just put a file of more than 100 MB into HDFS using the Ambari File view and it worked well. Can you please try the following: 1. Try putting the file into some other directory. 2. Check whether the "413 Request Entity Too Large" is coming from some other source: do you have a router or a web server in front of Ambari, or are you accessing the Ambari UI directly without any proxy?
03-07-2018
09:04 AM
@Ben Liu If the cluster you are trying to connect to is secure (i.e. Kerberos-enabled), then you can refer to the following article to learn how to connect to it: https://community.hortonworks.com/articles/56702/a-secure-hdfs-client-example.html If the cluster is not secure, then you might be using an incorrect core-site.xml / hdfs-site.xml, or an incorrect property such as "hadoop.security.authentication", which might be creating the confusion here. Please check the classpath resources/configs.
03-07-2018
08:59 AM
@Ben Liu Please check the "core-site.xml" on the classpath of your MapReduce code and grep for the property "hadoop.security.authentication". If its value is "kerberos", it means you are trying to access a secure cluster, so you must have a valid Kerberos ticket. Please see: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SecureMode.html The value of this property decides how authentication behaves: "simple" means no authentication (the default); "kerberos" enables Kerberos authentication.
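For reference, the setting in question looks like this in core-site.xml (a config fragment; the values are the two documented in Hadoop's SecureMode guide):

```xml
<!-- core-site.xml: cluster authentication mode.
     "simple" = no authentication (default); "kerberos" = Kerberos enabled. -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
```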
03-07-2018
05:31 AM
@Aymen Rahal I suggest that you try to start those services one by one: for example, start the HDFS services first, then the YARN services. Starting services one by one will help you find where you hit the resource limitation, and you can keep checking the logs to find out what is going wrong if a service startup fails. For the services you do not want to use immediately (perhaps installed just for testing), you can put them in "Maintenance Mode" so that you won't see alerts for them.