Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2445 | 04-27-2020 03:48 AM
| 4880 | 04-26-2020 06:18 PM
| 3976 | 04-26-2020 06:05 PM
| 3219 | 04-13-2020 08:53 PM
| 4924 | 03-31-2020 02:10 AM
02-20-2017
02:58 AM
@Aruna Sameera
As mentioned earlier, to keep this forum/community useful it is best to ask one query per thread and to mark an answer as "Accepted" once your query has been properly answered and the answer was helpful. Asking several different queries in a single thread and not accepting the answers that helped is not good forum etiquette.
02-20-2017
02:51 AM
@Anandha L Ranganathan
Before deleting the PRESTO service, have you properly stopped it? Like the following:

curl -u admin:xxxxxxxx -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": {"context" :"Stop PRESTO via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}' http://localhost:8080/api/v1/clusters/prod/services/PRESTO

If it is still failing with the "500 Error", then please share the ambari-server.log, which will give us the complete stack trace of the error.
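A minimal Python sketch of the same stop-then-delete sequence, using the requests library; the server URL, the cluster name "prod", and the admin credentials are placeholders taken from the curl example above:

import requests

SERVICE_URL = "http://localhost:8080/api/v1/clusters/prod/services/PRESTO"
AUTH = ("admin", "xxxxxxxx")            # placeholder credentials
HEADERS = {"X-Requested-By": "ambari"}  # header required by the Ambari REST API

# Stop the service first: state INSTALLED means "installed but stopped"
stop_body = {"RequestInfo": {"context": "Stop PRESTO via REST"},
             "Body": {"ServiceInfo": {"state": "INSTALLED"}}}
r = requests.put(SERVICE_URL, json=stop_body, auth=AUTH, headers=HEADERS)
print(r.status_code)

# Only once the stop request has completed should the service be deleted
r = requests.delete(SERVICE_URL, auth=AUTH, headers=HEADERS)
print(r.status_code)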
02-19-2017
09:20 AM
@rbailey
In OpenStack we can use the post-installation "cloud-init" file to set up the desired FQDN/hostname. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/End_User_Guide/user-data.html
Like:
#cloud-config
hostname: host0141
fqdn: host0141.domain.com
ssh_pwauth: False
password: test
Do all your agent hosts return mismatched (not identical) output for `hostname -f` and "socket.getfqdn()"?
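A quick sketch to compare the two values on an agent host; it simply shells out to `hostname -f` and compares the result with Python's own lookup:

import socket
import subprocess

# What the OS reports via `hostname -f`
hostname_f = subprocess.check_output(["hostname", "-f"]).strip().decode()
# What ambari-agent's Python lookup reports
fqdn = socket.getfqdn()

print("hostname -f     : %s" % hostname_f)
print("socket.getfqdn(): %s" % fqdn)
print("MATCH" if hostname_f == fqdn else "MISMATCH -- fix your DNS/hosts entries")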
02-19-2017
07:29 AM
1 Kudo
@rbailey Ambari agent generally uses the "socket.getfqdn()" approach to find the FQDN. You can validate the output of the same Python call on your problematic hosts. Example:

# python
Python 2.6.6 (r266:84292, Aug 18 2016, 15:13:37)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> print socket.getfqdn()
sandbox.hortonworks.com

So please check whether all your hosts return the proper FQDN. Every time the ambari-agent starts, it gathers information about the host it is running on (such as CPU, RAM, public_host_name, and host_name) and then sends a registration request to the ambari-server. Also, are these agents located in a cloud environment? If yes, you might be hitting the issue described in this article: https://community.hortonworks.com/content/kbentry/42872/why-ambari-host-might-have-different-public-host-n.html
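A rough illustration of the kind of host facts gathered at registration time; this is not the agent's actual code, and the real handling of "public_host_name" (especially on cloud hosts, which may query a metadata service) is more involved:

import socket
import multiprocessing

host_facts = {
    "host_name": socket.getfqdn(),             # what the agent registers as host_name
    "public_host_name": socket.gethostname(),  # simplification; cloud hosts can differ
    "cpu_count": multiprocessing.cpu_count(),
}
for key in sorted(host_facts):
    print("%s = %s" % (key, host_facts[key]))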
02-18-2017
02:44 PM
@Aruna Sameera Check whether the service is up. It should open port 9000 if you have not changed the value of:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Check the port; if it is not open, start the service, or check the configuration to confirm whether it is actually supposed to listen on port 9000:

netstat -tnlpa | grep 9000

See: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
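A small sketch that performs the same check from Python, assuming the NameNode is expected on localhost:9000 as in the fs.defaultFS value above:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(2)
result = s.connect_ex(("localhost", 9000))  # returns 0 when the port accepts connections
s.close()
print("port 9000 is OPEN" if result == 0 else "port 9000 is CLOSED -- start the service")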
02-18-2017
02:20 PM
@Aruna Sameera Find the process that is holding port 50070 and then kill it:

# netstat -tnlpa | grep 50070
tcp 0 0 172.17.0.2:50070 0.0.0.0:* LISTEN 29687/java
# kill -9 29687
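The same lookup can be scripted in Python; a hedged sketch assuming the third-party psutil package is installed (pip install psutil, version 5.3+ for the laddr.port attribute) and you have sufficient privileges to see other users' sockets:

import psutil  # third-party package; assumed installed

PORT = 50070
for conn in psutil.net_connections(kind="tcp"):
    # conn.pid can be None when run without sufficient privileges
    if conn.status == psutil.CONN_LISTEN and conn.laddr.port == PORT and conn.pid:
        print("killing PID %d listening on port %d" % (conn.pid, PORT))
        psutil.Process(conn.pid).kill()  # SIGKILL, equivalent to kill -9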
02-18-2017
02:11 PM
@Aruna Sameera Regarding your latest error:

Incompatible clusterIDs in /home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/datanode: namenode clusterID = CID-f597ff66-0d6a-4394-b038-02a4b51aa5be; datanode clusterID = CID-f4ecb1e1-ba90-4b03-a030-b8ef4e6b698f

It looks like the VERSION files on your NameNode and DataNode contain different cluster IDs, which needs to be corrected. Please check:

cat <dfs.namenode.name.dir>/current/VERSION
cat <dfs.datanode.data.dir>/current/VERSION

Copy the clusterID from the NameNode VERSION file into the DataNode VERSION file and then try again. https://community.hortonworks.com/questions/79432/datanode-goes-dows-after-few-secs-of-starting-1.html
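If you prefer to script the fix, here is a hedged sketch; the two VERSION paths are placeholders that must be replaced with your actual dfs.namenode.name.dir and dfs.datanode.data.dir values:

import re

NN_VERSION = "/hadoop/hdfs/namenode/current/VERSION"  # placeholder path
DN_VERSION = "/hadoop/hdfs/data/current/VERSION"      # placeholder path

def read_cluster_id(path):
    # VERSION is a small properties-style file containing a clusterID=... line
    with open(path) as f:
        text = f.read()
    return re.search(r"clusterID=(\S+)", text).group(1), text

nn_id, _ = read_cluster_id(NN_VERSION)
dn_id, dn_text = read_cluster_id(DN_VERSION)
if nn_id != dn_id:
    # rewrite the DataNode VERSION file with the NameNode's clusterID
    with open(DN_VERSION, "w") as f:
        f.write(dn_text.replace("clusterID=" + dn_id, "clusterID=" + nn_id))
    print("DataNode clusterID updated to %s" % nn_id)
else:
    print("cluster IDs already match")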
02-18-2017
12:59 PM
@Aruna Sameera
hadoop-daemon.sh start [namenode | secondarynamenode | datanode | jobtracker | tasktracker]
02-18-2017
12:51 PM
2 Kudos
- We can see that Ambari keeps its image files and web content inside the "/usr/lib/ambari-server/web/" directory. This directory contains all the static assets needed by the UI.
- Suppose we want to change the Ambari UI logo. It can be accessed from these URLs:
http://localhost:8080/img/logo.png
http://localhost:8080/img/logo-white.png
- In this example we will change "logo-white.png". To do that we first need our own logo, for example http://test.example.com/jboss/wp-content/uploads/2015/09/MM-Banner-logo.png. To use this image as the Ambari "logo-white.png", we need to do the following:

# mkdir /tmp/images
# cd /tmp/images
# wget http://test.example.com/jboss/wp-content/uploads/2015/09/MM-Banner-logo.png
# mv MM-Banner-logo.png logo-white.png
# gzip logo-white.png

- We have now compressed our image. We can see the file as follows, and we need to copy it into the "/usr/lib/ambari-server/web/img" directory:

# ls -l /tmp/images/logo-white.png.gz
-rw-r--r-- 1 root root 41532 Nov 13 05:46 ./logo-white.png.gz
# cp /tmp/images/logo-white.png.gz /usr/lib/ambari-server/web/img/
cp: overwrite `/usr/lib/ambari-server/web/img/logo-white.png.gz'? y

- Now we should see the change after refreshing the Ambari UI. Refresh the browser (make sure to clear the old cached data) or open the Ambari UI in an incognito window (Google Chrome menu "File --> New Incognito Window"). Notice the top left corner of the page, where the logo has changed. The same way we can also make changes to the style sheets (CSS).
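The download-and-compress steps can also be done with a short Python sketch; the logo URL and target path are the same example values used above (Python 2 shown, matching the hosts in this thread; use urllib.request on Python 3):

import gzip
import urllib2  # Python 2; on Python 3 use urllib.request instead

LOGO_URL = "http://test.example.com/jboss/wp-content/uploads/2015/09/MM-Banner-logo.png"
TARGET = "/usr/lib/ambari-server/web/img/logo-white.png.gz"

data = urllib2.urlopen(LOGO_URL).read()  # download the replacement logo
out = gzip.open(TARGET, "wb")            # gzip it straight into the Ambari web directory
out.write(data)
out.close()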
02-18-2017
07:34 AM
@Aruna Sameera Try these steps:
Stop the "SecondaryNameNode", then start it again.
1. On the Active NameNode host, execute the following commands:
# su hdfs
# hdfs dfsadmin -safemode enter
# hdfs dfsadmin -saveNamespace
2. Stop HDFS, but keep the JournalNodes running.
3. Take a backup of the NameNode data directory. For example, if the NameNode data directory is "/hadoop/hdfs/namenode" and the backup location is "/tmp", then:
# cp -prf /hadoop/hdfs/namenode/current /tmp
4. Run "initializeSharedEdits" to sync the edits:
# hdfs namenode -initializeSharedEdits
5. Start the NameNode service that was active last.
6. Bootstrap the Standby NameNode. This command copies the contents of the Active NameNode's metadata directories (including the namespace information and the most recent checkpoint) to the Standby NameNode:
# hdfs namenode -bootstrapStandby
7. Start the Standby NameNode and the rest of HDFS.
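If you want to run the first few steps unattended, a hedged sketch follows; it assumes the commands run as the hdfs user, uses the same example backup path as step 3, and stops at the first failure:

import subprocess

steps = [
    ["hdfs", "dfsadmin", "-safemode", "enter"],
    ["hdfs", "dfsadmin", "-saveNamespace"],
    ["cp", "-prf", "/hadoop/hdfs/namenode/current", "/tmp"],  # example paths from step 3
    ["hdfs", "namenode", "-initializeSharedEdits"],
]
for cmd in steps:
    print("running: %s" % " ".join(cmd))
    subprocess.check_call(cmd)  # raises CalledProcessError and aborts on any failure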