Member since: 03-07-2019
Posts: 158
Kudos Received: 53
Solutions: 33
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 6396 | 03-08-2019 08:46 AM |
| | 4361 | 10-17-2018 10:25 AM |
| | 2782 | 10-16-2018 07:46 AM |
| | 2126 | 10-16-2018 06:57 AM |
| | 1773 | 10-12-2018 09:55 AM |
08-28-2018
01:20 PM
2 Kudos
Hi @owen chaos
As Jay mentioned, this is supported in Ambari 2.7 with HDP 3.
Have a look at the following document, from page 34 onwards:
The HDFS NameNode High Availability feature enables you to run redundant NameNodes in the same cluster in an Active/Passive configuration with a hot standby. This eliminates the NameNode as a potential single point of failure (SPOF) in an HDFS cluster. As of Hadoop 3.0, you can configure more than one backup NameNode.
...
This guide provides an overview of the HDFS NameNode High Availability (HA) feature, instructions on how to deploy Hue with an HA cluster, and instructions on how to enable HA on top of an existing HDP cluster using the Quorum Journal Manager (QJM) and ZooKeeper Failover Controller for configuration and management. Using the QJM and ZooKeeper Failover Controller enables the sharing of edit logs between the Active and Standby NameNodes.
Mainly, this is where you set up multiple NameNodes in Ambari 2.7 with HDP 3:
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2,nn3</value>
<description>Unique identifiers for each NameNode in the
nameservice</description>
</property>
Also note: the minimum number of NameNodes for HA is two, but you can configure more. You should not exceed five NameNodes due to communication overhead; three NameNodes are recommended.
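For context, the surrounding hdfs-site.xml entries would look something like the sketch below; the nameservice ID comes from the snippet above, but the hostname and port are placeholders you would replace with your own (and you'd repeat the rpc-address entry for nn2 and nn3):
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value><your nn1 host>:8020</value>
</property>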
08-28-2018
10:58 AM
2 Kudos
Hi @subhash parise What did you set the ownership to for the version-2 folder and its contents? It should be zookeeper:hadoop, and zookeeper (the owner) should have write permissions. Did you also check that you can traverse the folder structure as the zookeeper user? For example, does this work:
[root@host]# su - zookeeper
[zookeeper@host ~]$ cd /data/hadoop/zookeeper/version-2/
[zookeeper@host version-2]$ ls -al
drwxr-xr-x. 2 zookeeper hadoop 4096 Aug 27 08:03 .
drwxr-xr-x. 3 zookeeper hadoop 4096 Aug 27 08:03 ..
-rw-r--r--. 1 zookeeper hadoop 1 Aug 27 08:03 acceptedEpoch
-rw-r--r--. 1 zookeeper hadoop 1 Aug 27 08:03 currentEpoch
-rw-r--r--. 1 zookeeper hadoop 67108880 Aug 28 10:52 log.100000001
-rw-r--r--. 1 zookeeper hadoop 296 Aug 27 08:03 snapshot.0
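If the ownership turns out to be wrong, resetting it recursively as root should fix it (the path below assumes the dataDir shown in your output):
chown -R zookeeper:hadoop /data/hadoop/zookeeper/version-2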
08-28-2018
09:04 AM
1 Kudo
Hi @Taehyeon Lee From your error:
Failed to download file from http://master:8080/resources/mysql-connector-java.jar due to HTTP error: HTTP Error 404: Not Found
Try this on the Ambari Server host:
sudo yum install mysql-connector-java*
ls -al /usr/share/java/mysql-connector-java.jar
cd /var/lib/ambari-server/resources/
ln -s /usr/share/java/mysql-connector-java.jar mysql-connector-java.jar
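Alternatively, registering the driver with ambari-server setup should achieve the same result and saves creating the symlink by hand:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar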
08-24-2018
06:57 AM
Hi @zkfs There isn't one. For the above example, you will notice an entry for a non-MapReduce job in the NameNode log similar to this:
hadoop-hdfs-namenode.log:2018-08-24 06:44:41,819 INFO hdfs.StateChange (FSNamesystem.java:completeFile(3759)) - DIR* completeFile: /user/hadoop/file1.dat._COPYING_ is closed by DFSClient_NONMAPREDUCE_956954044_1
What happens is: the client uses the create() operation defined in the DistributedFileSystem class, and then makes use of the DFSOutputStream class to write to an internal queue, called the 'data queue', which is consumed by the DataStreamer; that in turn allocates blocks for the data we want to write with the copyFromLocal command. There is no MapReduce/YARN job here, which you can see from the NONMAPREDUCE entry in the NameNode log. For some other tools, such as DistCp, you would see MapReduce involved.
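A quick way to confirm this yourself (the log path below is an assumption; adjust it to your cluster's layout):
hdfs dfs -copyFromLocal file1.dat /user/hadoop/
# No new application shows up for the copy:
yarn application -list -appStates RUNNING
# But the NameNode log records a NONMAPREDUCE client:
grep NONMAPREDUCE /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -1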
08-22-2018
03:50 PM
Awesome, glad to hear that it's working for you now 🙂
08-22-2018
02:42 PM
1 Kudo
Is Tez installed on your cluster? From the "Upgrading Timeline Server 1.0 to 1.5" documentation: If you have Tez enabled, the tez-client must be installed on the ATS server. You must also perform this additional step:
<property>
<name>yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes</name>
<value>org.apache.tez.dag.history.logging.ats.TimelineCachePluginImpl</value>
</property>
Install the Tez client on the ATS server if you don't have it already. If the Tez client is already installed on the ATS server, we may have to try reinstalling it on that host.
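To check whether the client is actually present on that host, something like this should work (the exact package name may vary by HDP version):
rpm -qa | grep -i tez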
08-21-2018
09:41 AM
1 Kudo
Hi @rabbit s The BP stands for "block pool", a collection of blocks belonging to a single HDFS namespace. The next part, 1308070615, is a randomly generated integer. The IP address is the address of the NameNode that originally created the block pool. The last part is the creation time of the namespace. You can read more about this here: https://hortonworks.com/blog/hdfs-metadata-directories-explained/
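You can see the full ID in the VERSION file under the NameNode metadata directory; the path, IP, and timestamp below are illustrative placeholders (check dfs.namenode.name.dir for the real path):
cat /hadoop/hdfs/namenode/current/VERSION
# blockpoolID=BP-1308070615-172.16.1.10-1410000000000  <- random int, NN address, creation time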
08-16-2018
02:14 PM
You should be able to run "dsquery user -name testhdp" from the CLI to verify that you definitely have the right DN. 52e definitely points to the credentials: make sure you get the DN right, check that the account is not locked by opening its properties in AD, and ensure you entered the password for the account correctly when running setup-ldap initially.
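You can also test the bind outside of Ambari with ldapsearch; the host and DNs below are placeholders for your own values:
ldapsearch -x -H ldap://<your ad host> -D "CN=testhdp,OU=Users,DC=example,DC=com" -W -b "DC=example,DC=com" "(sAMAccountName=testhdp)"
If this prompts for the password and returns the user entry, the credentials are good and the problem is in the values given to setup-ldap.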
08-16-2018
01:59 PM
@Bhushan Kandalkar AcceptSecurityContext error, data 52e, v2580
52e means invalid credentials. This is most likely down to a bad password for the bind DN account, or perhaps the bind account you're using is locked.
08-16-2018
09:17 AM
1 Kudo
Hi @ibrahima diattara You should be able to use the REST API example below, and the "name" will be listed in the output.
curl -i -X GET http://<your nifi host>:9090/nifi-api/processors/41ffd1f6-0165-1000-0000-00006974a11c
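If you have jq available, you can pull out just the name; this assumes the standard processor entity JSON, where the name sits under component.name:
curl -s http://<your nifi host>:9090/nifi-api/processors/41ffd1f6-0165-1000-0000-00006974a11c | jq -r '.component.name'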