Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2725 | 04-27-2020 03:48 AM |
|  | 5285 | 04-26-2020 06:18 PM |
|  | 4451 | 04-26-2020 06:05 PM |
|  | 3576 | 04-13-2020 08:53 PM |
|  | 5380 | 03-31-2020 02:10 AM |
08-28-2017
07:34 AM
1 Kudo
@Dominik Ludwig Please correct me if my understanding of your query is not right. We can register the remote cluster using the Ambari API as follows, by passing the remote cluster credentials:

```
# curl -H "X-Requested-By: ambari" -u admin:admin -X POST -d '{"ClusterInfo":{"name":"erie21","url":"http://mainremote.example.com:8080/api/v1/clusters/MainCluster","username":"admin","password":"admin"}}' http://standaloneambari.example.com:8080/api/v1/remoteclusters/RemoteClusterNameA
```

However, once the remote cluster is registered, it is not possible to see its password via the Ambari APIs. You can only update the credentials using the -X PUT method with the same PUT data as in the API call above.

```
# curl -H "X-Requested-By: ambari" -u admin:admin -X GET http://standaloneambari.example.com:8080/api/v1/remoteclusters/RemoteClusterNameA
```

In the response of the above command the password will not be shown; only the username will be visible.
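As a rough illustration of the payload shape, here is a small Python sketch. The endpoint and field names are taken from the curl call above; the `build_remote_cluster_payload` helper and the sample GET response are hypothetical, not captured from a real server:

```python
import json

def build_remote_cluster_payload(name, url, username, password):
    """Build the ClusterInfo body used when registering a remote cluster
    (hypothetical helper mirroring the -d argument of the POST above)."""
    return {"ClusterInfo": {"name": name, "url": url,
                            "username": username, "password": password}}

payload = build_remote_cluster_payload(
    "erie21",
    "http://mainremote.example.com:8080/api/v1/clusters/MainCluster",
    "admin", "admin")
body = json.dumps(payload)  # this string is what curl sends with -d

# An illustrative GET response: Ambari omits the password field, so a
# client can only see the username of the registered remote cluster.
get_response = {"ClusterInfo": {"name": "erie21", "username": "admin"}}
assert "password" not in get_response["ClusterInfo"]
```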
08-28-2017
06:49 AM
@Rajesh Wonderful article. I just added a code block for the commands.
08-28-2017
06:08 AM
@uri ben-ari Please try something like the following:

```
# curl -sH "X-Requested-By: ambari" -u admin:admin -i "http://localhost:8080/api/v1/hosts?fields=Hosts/host_name,Hosts/ip" | grep -A 1 host_name | awk '{print $NF}' > /tmp/hosts_details.txt; sed -e '1,2d' -e 's/--//g' -e 's/\n//g' -e 's/"//g' -e '/^$/d' /tmp/hosts_details.txt | awk 'NR%2{printf "%s ",$0;next;}1'
```

Example output:

```
# curl -sH "X-Requested-By: ambari" -u admin:admin -i "http://amb25101.example.com:8080/api/v1/hosts?fields=Hosts/host_name,Hosts/ip" | grep -A 1 host_name | awk '{print $NF}' > /tmp/hosts_details.txt; sed -e '1,2d' -e 's/--//g' -e 's/\n//g' -e 's/"//g' -e '/^$/d' /tmp/hosts_details.txt | awk 'NR%2{printf "%s ",$0;next;}1'
amb25101.example.com 172.10.116.149
amb25102.example.com 172.10.116.148
amb25103.example.com 172.10.116.151
amb25104.example.com 172.10.116.150
```
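If Python is available, parsing the API's JSON directly is less fragile than the grep/sed/awk pipeline above. A minimal sketch, assuming the response shape of the Ambari `/api/v1/hosts` endpoint; the sample payload below is illustrative, not captured from a live cluster:

```python
import json

# Illustrative sample of a /api/v1/hosts?fields=Hosts/host_name,Hosts/ip response
sample = '''
{"items": [
  {"Hosts": {"host_name": "amb25101.example.com", "ip": "172.10.116.149"}},
  {"Hosts": {"host_name": "amb25102.example.com", "ip": "172.10.116.148"}}
]}
'''

def hosts_and_ips(response_text):
    """Return 'host_name ip' lines parsed from an Ambari hosts response."""
    data = json.loads(response_text)
    return ["%s %s" % (h["Hosts"]["host_name"], h["Hosts"]["ip"])
            for h in data["items"]]

for line in hosts_and_ips(sample):
    print(line)
```

Feeding this the body of the curl response (without `-i`, so no HTTP headers) produces the same two-column output as the shell pipeline.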
08-28-2017
05:31 AM
@uri ben-ari You can try something like the following:

```
# curl -sH "X-Requested-By: ambari" -u admin:admin -i "http://localhost:8080/api/v1/hosts?fields=Hosts/host_name,Hosts/ip" | grep -A 1 host_name | sed 's/"//g' | awk '{print $NF}'
```

Example output:

```
# curl -sH "X-Requested-By: ambari" -u admin:admin -i "http://amb25101.example.com:8080/api/v1/hosts?fields=Hosts/host_name,Hosts/ip" | grep -A 1 host_name | sed 's/"//g' | awk '{print $NF}'
http://amb25101.example.com:8080/api/v1/hosts?fields=Hosts/host_name,Hosts/ip,
[
--
amb25101.example.com,
172.10.116.149
--
amb25102.example.com,
172.10.116.148
--
amb25103.example.com,
172.10.116.151
--
amb25104.example.com,
172.10.116.150
```
08-27-2017
11:09 AM
@uri ben-ari The following should list all the components that have stale configs and need to be restarted for the config changes to take effect:

```
# curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/clusters/Sandbox/host_components?HostRoles/stale_configs=true
```

Here please replace the following:
1. "localhost" with the Ambari server FQDN
2. "8080" with the Ambari port
3. "Sandbox" with the cluster name
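The same query can be post-processed in Python to list which component needs a restart on which host. A hedged sketch, assuming the usual `host_components` response shape; the sample payload below is illustrative, not captured from a real cluster:

```python
import json

# Illustrative sample of a host_components?HostRoles/stale_configs=true response
sample = '''
{"items": [
  {"HostRoles": {"component_name": "DATANODE", "host_name": "node1.example.com", "stale_configs": true}},
  {"HostRoles": {"component_name": "NODEMANAGER", "host_name": "node2.example.com", "stale_configs": true}}
]}
'''

def stale_components(response_text):
    """Return (component, host) pairs that still have stale configs."""
    data = json.loads(response_text)
    return [(i["HostRoles"]["component_name"], i["HostRoles"]["host_name"])
            for i in data["items"] if i["HostRoles"].get("stale_configs")]

for component, host in stale_components(sample):
    print(component, "on", host)
```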
08-25-2017
12:56 PM
@Kasim Shaik The error is:

```
WARN namenode.NameNode: Encountered exception during format: org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not format one or more JournalNodes. 1 exceptions thrown:
10.104.10.16:8485: Cannot create directory /home/kasim/dfs/jn/ha-cluster/current
```

Please check the permissions on the directory; the user who is running the NameNode format should be able to write to it:

```
# ls -ld /home/kasim/dfs/
# ls -ld /home/kasim/dfs/jn
# ls -ld /home/kasim/dfs/jn/ha-cluster
# ls -ld /home/kasim/dfs/jn/ha-cluster/current
# ls -lart /home/kasim/dfs/jn/ha-cluster/current
```
08-25-2017
12:27 PM
@Kasim Shaik The following error indicates that you might not have configured the FQDN properly in your cluster:

```
java.net.UnknownHostException: master1
```

Can you please check whether the "hostname -f" command actually returns the desired FQDN?

```
root@master1:~# hostname -f
```

https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation-ppc/content/set_the_hostname.html

Every node of your cluster should be able to resolve the other nodes correctly by their FQDNs.
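The resolution check can also be scripted; a small Python sketch that mirrors `hostname -f` and tests whether a name resolves (the `master1.example.com` name in the comment is a placeholder):

```python
import socket

def resolves(name):
    """Return True if `name` resolves to an IP address on this node."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print("local FQDN:", socket.getfqdn())       # analogue of `hostname -f`
print("localhost resolves:", resolves("localhost"))
# On a healthy cluster, resolves("master1.example.com") should be True
# from every node, matching the UnknownHostException diagnosis above.
```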
08-25-2017
06:54 AM
1 Kudo
@Sheetal Sharma LongWritable is the WritableComparable for longs; similarly, IntWritable is the WritableComparable for ints. These interfaces [1] & [2] are necessary for Hadoop/MapReduce: the Comparable part is used when the reducer sorts the keys, and the Writable part writes the result to local disk. Hadoop does not use Java's Serializable because it is too heavyweight; Writable serializes Hadoop objects in a much more compact way.

[1] https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/io/LongWritable.html
[2] https://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/io/IntWritable.html#IntWritable()

"Comparable" is the interface whose abstract methods give us the flexibility to compare two objects. "Writable" is a serialization format meant for writing data to local disk; you can implement your own Writables in Hadoop. Java's serialization is too bulky and slow, which is why the Hadoop community put Writable in place. "WritableComparable" is a combination of the above two interfaces.

"int" is a primitive type, so it cannot be used as a key or value; Integer is the wrapper class around it. So let me correct your question to: what is the difference between Integer and IntWritable? "IntWritable" is the Hadoop variant of Integer that has been optimized for serialization in the Hadoop environment. An Integer would use default Java serialization, which is very costly in Hadoop.
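The compactness point can be illustrated outside Java: IntWritable writes an int as exactly 4 big-endian bytes, while a general-purpose serializer (java.io.Serializable in Java; Python's pickle is used here purely as an analogy) adds framing overhead around the value. A sketch of the comparison:

```python
import pickle
import struct

value = 42

# Fixed-width binary encoding, analogous to IntWritable's 4-byte wire format
writable_style = struct.pack(">i", value)

# General-purpose serialization, analogous to java.io.Serializable
# (pickle stands in for it in this analogy)
generic_style = pickle.dumps(value)

print(len(writable_style))  # 4 bytes, always
print(len(generic_style))   # larger, because of format framing
assert len(writable_style) < len(generic_style)
```

Across millions of shuffled key/value pairs, that per-record overhead is exactly why Hadoop prefers its own compact Writable format.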
08-24-2017
05:04 PM
@Winnie Philip Can you try moving the "/home/winnie/apps/jdk1.8.0_144/" JDK outside the /home/winnie directory and then try again? Example:

```
# mkdir -p /usr/jdk64
# mv /home/winnie/apps/jdk1.8.0_144 /usr/jdk64
# chmod -R 755 /usr/jdk64/jdk1.8.0_144
```

Set JAVA_HOME at the environment level or globally, and also update the alternatives.

Also, I see there are two "//" slashes in "jdk1.8.0_144//bin"; can you please fix that as well?

```
/home/winnie/apps/jdk1.8.0_144//bin/java -version
```
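The doubled slash usually comes from a JAVA_HOME value with a trailing slash being joined with "/bin/java". Python's os.path.normpath shows what the cleaned-up path should look like (the path is the one from the post):

```python
import os.path

raw = "/home/winnie/apps/jdk1.8.0_144//bin/java"
clean = os.path.normpath(raw)  # collapses the interior double slash
print(clean)  # /home/winnie/apps/jdk1.8.0_144/bin/java
```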
08-24-2017
04:46 PM
@Aaron Dunlap The parameter "hive.server2.thrift.resultset.default.fetch.size" was added recently, as part of https://issues.apache.org/jira/browse/HIVE-14901. So I suspect that on the client/server side you might be using a different version of "hive-jdbc-xxx.jar", or another JAR of a different version that contains the class "org.apache.hadoop.hive.conf.HiveConf". Can you please share more details about your environment: the Hive version and the JAR versions that you are using?