Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2093 | 06-15-2020 05:23 AM |
| | 17432 | 01-30-2020 08:04 PM |
| | 2254 | 07-07-2019 09:06 PM |
| | 8715 | 01-27-2018 10:17 PM |
| | 4913 | 12-31-2017 10:12 PM |
03-27-2018
12:37 PM
When we set new values with the configs.py script, the script also creates a file (for example, doSet_version1522153623088712.json). Is it possible to pass a flag to the script to disable this file creation?
/var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=set --host=master02 --cluster=hdp --config-type=spark2-thrift-sparkconf -k spark.executor.instances -v 8
ls
doSet_version1522153623088712.json
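As far as I can tell, configs.py does not document a switch to suppress that JSON dump, so a minimal workaround sketch is to delete the snapshot right after the call; the assumption here is that the doSet_version*.json files land in the directory the script is run from, as the ls output above suggests.
# Sketch, not an official configs.py option: set the property, then remove
# the doSet_version*.json snapshot the script writes to the current directory.
/var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=set --host=master02 --cluster=hdp --config-type=spark2-thrift-sparkconf -k spark.executor.instances -v 8
rm -f doSet_version*.json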
03-25-2018
11:57 AM
Yes, it seems you are right. We have already installed more than 30 clusters without a problem, and without an underscore in the hostnames.
03-25-2018
11:45 AM
We installed a new Ambari cluster from scratch and noticed that ZKFC fails to start, as shown below. What does this mean: java.lang.IllegalArgumentException: Does not contain a valid host?
/usr/hdp/2.6.0.3-8/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-master01.bx_hhtyr8.com.out
Exception in thread "main" java.lang.IllegalArgumentException: Does not contain a valid host:port authority: master01.bx_hhtyr8.com:8020
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:213)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:699)
at org.apache.hadoop.hdfs.DFSUtil.getAddressesForNsIds(DFSUtil.java:667)
at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:650)
at org.apache.hadoop.hdfs.DFSUtil.getHaNnRpcAddresses(DFSUtil.java:749)
at org.apache.hadoop.hdfs.HAUtil.isHAEnabled(HAUtil.java:77)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:128)
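Hadoop's NetUtils rejects an authority whose host part is not a valid hostname, and the host in the message (master01.bx_hhtyr8.com) contains an underscore, which DNS hostnames may not contain. A quick check sketch, assuming the node names are kept in /etc/hosts, is to scan for underscores before installing:
# Sketch: print any hostname or alias in /etc/hosts that contains an underscore.
# Adjust the source of hostnames if they are kept elsewhere.
awk '!/^#/ && NF { for (i = 2; i <= NF; i++) if ($i ~ /_/) print "invalid hostname:", $i }' /etc/hosts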
Labels:
- Apache Ambari
- Apache Hadoop
03-21-2018
05:26 PM
@Aditya do you think we must verify that no component is in maintenance mode before an HDP upgrade?
03-21-2018
04:57 PM
We have Ambari version 2.6.1.0.
03-21-2018
04:57 PM
So in that case, can we capture the individual hosts/components that are in maintenance mode (MM)?
03-21-2018
04:37 PM
We run the following curl in order to check which services/components are in maintenance mode:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/sys76/services?ServiceInfo/maintenance_state=OFF
All the ZooKeeper components are in maintenance mode, yet the API output still shows "maintenance_state" : "OFF". Why? What is wrong here? Example:
"href" : "http://localhost:8080/api/v1/clusters/sys76/services/ZOOKEEPER",
"ServiceInfo" : {
"cluster_name" : "sys76",
"maintenance_state" : "OFF",
"service_name" : "ZOOKEEPER"
}
}
]
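Maintenance mode set on individual components or hosts does not change the service-level ServiceInfo/maintenance_state, which is why the services endpoint can still report OFF. To capture the individual host components that are in maintenance mode, a sketch along the following lines should work; the credentials and cluster name are the ones from the question, and the HostRoles field filter is an assumption based on Ambari's usual host_components fields.
# Sketch: list host components whose own maintenance state is ON.
curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://localhost:8080/api/v1/clusters/sys76/host_components?HostRoles/maintenance_state=ON&fields=HostRoles/host_name,HostRoles/component_name,HostRoles/maintenance_state"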
03-21-2018
07:56 AM
@Jay I will open another thread because this is complicated, and I will explain it more simply there.
03-20-2018
10:59 AM
@Jay maybe we need to restart the ambari-agent after the Ambari upgrade?
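If a restart does turn out to be needed, a minimal sketch, assuming the standard ambari-agent service wrapper on every node, would be:
# Sketch: restart the agent after the Ambari upgrade, then confirm it is running.
sudo ambari-agent restart
sudo ambari-agent status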
03-20-2018
10:59 AM
@Jay this is the head of the file:
<?xml version="1.0"?>
<repository-version xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="version_definition.xsd">
<release>
<type>STANDARD</type>
<stack-id>HDP-2.6</stack-id>
<version>2.6.4.0</version>
<build>91</build>
<compatible-with>2\.[3-6]\.\d+\.\d+</compatible-with>
<release-notes>http://example.com</release-notes>
<display>HDP-2.6.4.0</display>
</release>
<manifest>
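For what it's worth, the compatible-with element is a regular expression over stack version strings, so a quick sanity-check sketch in bash (with \d rewritten as [0-9] for ERE, and 2.6.0.3 assumed as the current stack version, taken from the /usr/hdp/2.6.0.3-8 path in the earlier post) would be:
# Sketch: check whether the current stack version matches <compatible-with>.
current="2.6.0.3"                      # assumption taken from /usr/hdp/2.6.0.3-8
pattern='^2\.[3-6]\.[0-9]+\.[0-9]+$'   # the compatible-with regex, in ERE form
if [[ "$current" =~ $pattern ]]; then
  echo "$current is covered by compatible-with"
else
  echo "$current is NOT covered by compatible-with"
fi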