Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2717 | 04-27-2020 03:48 AM |
| | 5277 | 04-26-2020 06:18 PM |
| | 4443 | 04-26-2020 06:05 PM |
| | 3567 | 04-13-2020 08:53 PM |
| | 5376 | 03-31-2020 02:10 AM |
06-15-2017
05:15 PM
@Sami Ahmad
Which exact version of Ambari are you using? I do not see any matching line at 111 in that script for the Ambari 2.2.2 / 2.4.0 / 2.4.2 / 2.5.0 versions:
https://github.com/apache/ambari/blob/release-2.5.0/ambari-agent/src/main/python/ambari_agent/HostCleanup.py
https://github.com/apache/ambari/blob/release-2.4.2/ambari-agent/src/main/python/ambari_agent/HostCleanup.py
https://github.com/apache/ambari/blob/release-2.2.2/ambari-agent/src/main/python/ambari_agent/HostCleanup.py
In each of those releases, line 111 is empty, so I suspect the script you are using has been slightly changed/edited. Have you made any modifications to this script?
# /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py
Also, what is your Python version?
If you continue to face this issue, the quickest approach is to uninstall ambari-agent, manually clean up the agent directories, and then install it back:
# yum remove ambari-agent
# yum install ambari-agent
06-15-2017
02:24 PM
@timc c Have you changed "hadoop.proxyuser.knox.groups" or "hadoop.proxyuser.knox.hosts" in order to access HiveServer2 through Knox? Set the values of these properties either to * or to the user that is failing. If you changed them, you will also have to restart the Hive service so that it operates with the updated core-site configuration. You are using HDP 2.6, but for the older HDP 2.5 stack a similar issue was reported as a known issue (BUG-66998). Please see: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-release-notes/content/ambari_relnotes-2.5.0.3-known-issues.html
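For reference, the wildcard variant of the two proxyuser settings mentioned above would look like this in core-site.xml (a sketch of the relaxed configuration; substitute the failing user/group instead of * if you want to keep it restrictive):

```xml
<!-- Allow the knox service user to impersonate any user from any host.
     Narrow these values to specific users/hosts in production. -->
<property>
  <name>hadoop.proxyuser.knox.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.knox.hosts</name>
  <value>*</value>
</property>
```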
06-15-2017
01:23 PM
1 Kudo
@Sebastien Chausson
Please change the "timeline.metrics.service.webapp.address" value to "0.0.0.0:6188". It looks like you have incorrectly set it to "0.0.0.0::host_group_1%:6188" somewhere, as I see the following error:
Caused by: java.lang.IllegalArgumentException: Malformed escape pair at index 28: http://0.0.0.0::host_group_1%:6188:0
at java.net.URI.create(URI.java:852)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:297)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:395)
at org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer.startWebApp(ApplicationHistoryServer.java:180)
... 4 more
Caused by: java.net.URISyntaxException: Malformed escape pair at index 28: http://0.0.0.0::host_group_1%:6188:0
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.scanEscape(URI.java:2978)
at java.net.URI$Parser.scan(URI.java:3001)
at java.net.URI$Parser.parseAuthority(URI.java:3142)
at java.net.URI$Parser.parseHierarchical(URI.java:3097)
at java.net.URI$Parser.parse(URI.java:3053)
at java.net.URI.<init>(URI.java:588)
at java.net.URI.create(URI.java:850)
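The failure above can be reproduced in a few lines of plain Java: java.net.URI requires every '%' to introduce a two-hex-digit escape, so the stray "%" in the unresolved "%:host_group_1%" placeholder is exactly what trips the "Malformed escape pair" check (a small standalone sketch, not YARN code):

```java
import java.net.URI;

public class UriCheck {
    public static void main(String[] args) {
        // The misconfigured value from the stack trace: the bare '%' is not
        // followed by two hex digits, so URI parsing fails.
        try {
            URI.create("http://0.0.0.0::host_group_1%:6188:0");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // "Malformed escape pair ..."
        }
        // The corrected value parses cleanly:
        URI ok = URI.create("http://0.0.0.0:6188");
        System.out.println(ok.getHost() + ":" + ok.getPort());
    }
}
```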
06-15-2017
10:56 AM
1 Kudo
@haitham sefi I am sure you might already have gone through the following links; if not, they may be useful for clearing up some points:
https://hortonworks.com/products/data-center/hdf/
https://hortonworks.com/webinar/introducing-hortonworks-dataflow/
https://community.hortonworks.com/questions/80005/confusion-with-hdf-and-hdp-and-their-capabilities.html
06-15-2017
09:26 AM
@Sebastien Chausson If the AMS collector is going down continuously, please check the following logs for any errors or warnings:
# less /var/log/ambari-metrics-collector/ambari-metrics-collector.log
# less /var/log/ambari-metrics-collector/ambari-metrics-collector.out
In the default Embedded Mode of AMS you do not need to start HBase separately; AMS starts an HBase instance of its own. Example (notice "HMaster"):
# ps -ef | grep ^ams
ams 29300 29286 7 Jun14 ? 00:50:46 /usr/jdk64/jdk1.8.0_112/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/ambari-metrics-collector/hs_err_pid%p.log -Djava.io.tmpdir=/var/lib/ambari-metrics-collector/hbase-tmp -Djava.library.path=/usr/lib/ams-hbase/lib/hadoop-native/ -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/ambari-metrics-collector/gc.log-201706142141 -Xms1536m -Xmx1536m -Xmn256m -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Dhbase.log.dir=/var/log/ambari-metrics-collector -Dhbase.log.file=hbase-ams-master-kamb25103.example.com.log -Dhbase.home.dir=/usr/lib/ams-hbase/ -Dhbase.id.str=ams -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start
To see where AMS is writing its data, check the following property. By default, in Embedded mode it has this value:
hbase.rootdir = file:///var/lib/ambari-metrics-collector/hbase
NOTE: The Ambari Metrics service uses HBase as its default storage backend. Set hbase.rootdir either to a local filesystem path, if using Ambari Metrics in embedded mode, or to an HDFS directory, for example hdfs://namenode.example.org:9000/hbase. By default HBase writes into /tmp; change this configuration, or else all data will be lost on machine restart.
Since this is a sandbox instance, as a quick attempt you can clean up the AMS data as described in the following doc and then try to restart AMS: https://cwiki.apache.org/confluence/display/AMBARI/Cleaning+up+Ambari+Metrics+System+Data
06-15-2017
08:42 AM
@Sebastien Chausson By default AMS uses Embedded HBase, so it is not dependent on an external HBase. You can double-check this via the following "ams-site" setting (see: https://cwiki.apache.org/confluence/display/AMBARI/AMS+-+distributed+mode):
timeline.metrics.service.operation.mode = embedded
If the AMS keeps going down, one reason could be limited availability of resources (RAM), so please check whether you have enough free memory available on your sandbox. For testing, you can stop unneeded services such as Oozie and Hive, and then see whether AMS continues to run.
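As a quick sanity check, the operation mode can be read straight out of ams-site.xml. The sketch below parses an inline sample; in practice you would read the collector's actual file (typically under /etc/ambari-metrics-collector/conf/, which is an assumption about your layout):

```python
# Sketch: extract a property value from a Hadoop-style *-site.xml file.
import xml.etree.ElementTree as ET

def site_property(xml_text, name):
    """Return the value of the named <property> from a *-site.xml document,
    or None if the property is not present."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Sample content mirroring the embedded-mode defaults discussed above:
sample = """<configuration>
  <property><name>timeline.metrics.service.operation.mode</name><value>embedded</value></property>
  <property><name>hbase.rootdir</name><value>file:///var/lib/ambari-metrics-collector/hbase</value></property>
</configuration>"""

print(site_property(sample, "timeline.metrics.service.operation.mode"))  # embedded
```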
06-13-2017
01:24 PM
@Vinuraj M
Your issue looks related to: https://community.hortonworks.com/questions/103489/after-upgrading-ambari-from-242-to-250-not-able-to.html
06-13-2017
01:22 PM
@Vinuraj M
Please try the following:
Step 1). Take an Ambari server DB dump (since we are going to modify Ambari DB entries). You will most probably find that the mentioned resource is Zeppelin, because the Zeppelin View was removed from Ambari 2.5 onwards:
select * from adminprivilege where resource_id = 10;
select * from adminresource where resource_id = 10;
select * from adminresourcetype where resource_type_id IN (select resource_type_id from adminresource where resource_id = 10);
Step 2). List the orphaned resources (those with no matching view instance), then delete the entries with key 10:
SELECT adminresource.resource_id FROM adminresource LEFT OUTER JOIN viewinstance USING (resource_id) WHERE adminresource.resource_type_id = 10 AND viewinstance.view_instance_id IS NULL;
DELETE FROM adminprivilege where resource_id in (10);
DELETE FROM adminresource where resource_id in (10);
Step 3). Now restart the Ambari server.
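The orphaned-resource logic in Step 2 can be illustrated against a toy schema (a SQLite stand-in with only the columns the query touches, not Ambari's real DDL): the LEFT OUTER JOIN keeps every resource row, and a NULL view_instance_id marks a resource with no backing view instance.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE adminresource (resource_id INTEGER, resource_type_id INTEGER);
CREATE TABLE viewinstance (view_instance_id INTEGER, resource_id INTEGER);
-- Resource 10 has no view instance (orphaned); resource 11 does.
INSERT INTO adminresource VALUES (10, 3), (11, 3);
INSERT INTO viewinstance VALUES (1, 11);
""")

# Same join shape as the Step 2 SELECT: rows with no match on the right
# side come back with NULL, i.e. the orphaned resources.
orphans = cur.execute("""
SELECT adminresource.resource_id
FROM adminresource LEFT OUTER JOIN viewinstance USING (resource_id)
WHERE viewinstance.view_instance_id IS NULL
""").fetchall()
print(orphans)  # [(10,)]
```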
06-13-2017
12:40 PM
1 Kudo
@Vinuraj M After restarting the Ambari server, put the ambari-server.log in tail, then try to log in from the Ambari UI as the "admin" user and see whether you get any error. Please share the log in case you find any error/warning.
# tail -f /var/log/ambari-server/ambari-server.log
06-12-2017
07:00 PM
@Sami Ahmad
You are getting an HTTP 504 response:
http://hadoop1.tolls.dot.state.fl.us/AMBARI-2.4.2.0/centos6/2.4.2.0-136/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 504 Gateway Timeout"
According to the HTTP RFC specification (https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.5):
10.5.5 504 Gateway Timeout
-------------------------------
The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server specified by the URI (e.g. HTTP, FTP, LDAP) or some other auxiliary server (e.g. DNS) it needed to access in attempting to complete the request.
Note to implementors: some deployed proxies are known to return 400 or 500 when DNS lookups time out.
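If you want to confirm what the repository endpoint is actually returning, independently of yum/pycurl, a small probe like this can help (a hypothetical helper, not part of Ambari; point it at your repo URL):

```python
import urllib.request
import urllib.error

def http_status(url, timeout=10):
    """Return the HTTP status code for a GET of `url`,
    including error statuses such as 504."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

# e.g. http_status("http://hadoop1.tolls.dot.state.fl.us/AMBARI-2.4.2.0/centos6/2.4.2.0-136/repodata/repomd.xml")
# would return 504 while the gateway timeout persists.
```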