Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2230 | 12-06-2018 12:25 PM |
| | 2289 | 11-27-2018 06:00 PM |
| | 1782 | 11-22-2018 03:42 PM |
| | 2842 | 11-20-2018 02:00 PM |
| | 5149 | 11-19-2018 03:24 PM |
10-10-2017
02:42 PM
Hi @Nikita Kiselev, You can get the active YARN RM ID from ZooKeeper:

cd /usr/hdp/current/zookeeper-client/bin
./zkCli.sh -server <zk-quorum>
[zk: localhost:2181(CONNECTED) 14] get /yarn-leader-election/<cluster-id>/ActiveStandbyElectorLock

Thanks, Aditya
10-10-2017
04:42 AM
@Ismael Boumedien, Glad that it's working for you. Could you please accept the solution? This will help the community find the correct answer easily. -Aditya
10-08-2017
07:28 AM
Hi @vishwanath pr, Try installing the HDFS client packages, then run the commands again:

yum install -y hadoop hadoop-hdfs hadoop-libhdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl

Thanks, Aditya
10-08-2017
07:23 AM
2 Kudos
Hi @Karthick Raja, Try setting the value of enabled to 0 in the sandbox.repo file:

vi /etc/yum.repos.d/sandbox.repo

Your sandbox.repo file should look like this:

# cat /etc/yum.repos.d/sandbox.repo
[sandbox]
baseurl=http://dev2.hortonworks.com.s3.amazonaws.com/repo/dev/master/utils/
name=Sandbox repository (tutorials)
gpgcheck=0
enabled=0

After making the change, run:

yum clean all

Thanks, Aditya
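If you'd rather make the change non-interactively than open the file in vi, a one-line sed does the same thing. A minimal sketch; the function name is mine, and the repo path is the one from the post:

```shell
# disable_repo FILE — flip enabled=1 to enabled=0 in a yum .repo file.
# The anchored pattern ensures only the exact "enabled=1" line changes.
disable_repo() {
  sed -i 's/^enabled=1$/enabled=0/' "$1"
}

# Usage (path from the post), then clear the yum cache:
#   disable_repo /etc/yum.repos.d/sandbox.repo
#   yum clean all
```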
10-08-2017
07:11 AM
3 Kudos
This article describes the steps to add the Spectrum Scale service to an HDP cluster. We will be using an existing ESS cluster.

Pre-Requisites:

1) Download the Ambari Integration module and the HDFS transparency connector. You can get it here.

2) Collect the details of the existing ESS cluster (IPs/hostnames/public key).

3) Install the required packages:

yum -y install kernel-devel cpp gcc gcc-c++ binutils ksh libstdc++ libstdc++-devel compat-libstdc++ imake make nc

Note: Make sure that the kernel and kernel-devel versions are the same.

4) Set up password-less ssh from the ambari-server node to the ESS cluster nodes, and from the ESS node to all the nodes in the cluster.

5) On the ambari-server node, create a file called shared_gpfs_node.cfg under the "/var/lib/ambari-server/resources/" directory and add the FQDN of any node in the ESS cluster. Make sure you add only one FQDN and that password-less ssh is set up to this node from the ambari-server node.

Note: Add the mapping for the FQDN above in /etc/hosts.

Installing the Ambari Integration Module:

1) Download and untar the Ambari Integration module into a directory on the ambari-server node. The directory contains the following files:

SpectrumScaleIntegrationPackageInstaller-2.4.2.0.bin
SpectrumScaleMPackInstaller.py
SpectrumScaleMPackUninstaller.py
SpectrumScale_UpgradeIntegrationPackage-BI425 (required for IOP to HDP migration)

2) Stop all the services from Ambari: Login to Ambari -> Actions -> Stop All

3) Run the installer bin script and accept the license. It will prompt for a few inputs which you have to enter:

cd <dir where you have extracted the tar>
./SpectrumScaleIntegrationPackageInstaller-2.4.2.0.bin

Once you have finished installing the Ambari Integration Module, you can proceed to adding the Spectrum Scale service.

Adding IBM Spectrum Scale Service:

1) Login to Ambari. Click Actions -> Add Service.

2) On the Choose Services page, select "Spectrum Scale" and click Next.

3) On the Assign Masters page, select where the GPFS Master has to be installed and click Next. Note: the GPFS Master has to be installed on the same node as the ambari server.

4) On the Assign Slaves and Clients page, select the nodes where GPFS nodes have to be installed. At a minimum, it is recommended to install GPFS nodes on the nodes where the NameNode(s) and DataNode(s) are running. Click Next when you are done selecting.

5) On the Customize Services page, you will be prompted to enter AMBARI_USER_PASSWORD and GPFS_REPO_URL, which are the Ambari password and the repo directory where the IBM Spectrum Scale rpms are located, respectively. If you are using a local repository, copy the HDFS transparency package downloaded in the 1st step of the pre-requisites into the directory where your other RPMs are present and run 'createrepo .'. Check that GPFS Cluster Name, GPFS quorum nodes, and GPFS File system name are populated with the existing ESS cluster details. Change the value of "gpfs.storage.type" to "shared". Ensure that gpfs.supergroup is set to "hadoop,root". Click Next when you are done.

6) If it is a Kerberized environment, you have to Configure Identities; then click Next.

7) On the Review page, check the URLs and click Deploy.

8) Complete the rest of the installation process by clicking Next.

9) Restart the ambari server by running the command below on the ambari-server node:

ambari-server restart

Note: Do not restart the services before restarting the ambari server.

Post Installation Steps:

1) Login to Ambari and set the HDFS replication factor to 1: HDFS -> Configs -> Advanced -> General -> Block Replication

2) Restart all the services: Actions -> Start All

3) Once all the services are up, you may see a "Namenode Last Checkpoint" alert on HDFS. This is because HDFS Transparency does not do checkpointing, since IBM Spectrum Scale is stateless, so you can disable the alert: click on the Alert -> Disable.

Additional References:

https://developer.ibm.com/storage/2017/06/16/top-five-benefits-ibm-spectrum-scale-hortonworks-data-platform/
https://www.redbooks.ibm.com/redpapers/pdfs/redp5448.pdf
https://community.hortonworks.com/content/kbentry/108565/ibm-spectrum-scale-423-certified-with-hdp-26-and-a.html

Hope this helps 🙂
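The note in pre-requisite 3 (kernel and kernel-devel must match) is easy to miss, so a quick guard before installing the packages can help. A sketch; the helper name is mine, and the rpm query is commented out because it only makes sense on the target node:

```shell
# versions_match RUNNING DEVEL — succeed only when the running kernel
# version string equals the installed kernel-devel version string.
versions_match() {
  [ "$1" = "$2" ]
}

# On the target node you would compare the live values, e.g.:
#   versions_match "$(uname -r)" \
#     "$(rpm -q --qf '%{VERSION}-%{RELEASE}.%{ARCH}' kernel-devel)" \
#     || echo "kernel and kernel-devel differ - fix before installing"
```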
10-08-2017
05:23 AM
4 Kudos
Found the solution. It is not a single command, though:

1) Find the mpack.staging.path:

cat /etc/ambari-server/conf/ambari.properties | grep -i mpack

2) Go to the mpack.staging.path directory (default: /var/lib/ambari-server/resources/mpacks):

cd /var/lib/ambari-server/resources/mpacks

3) Iterate through all the directories in the mpack staging directory except the "cache" directory and read the mpack.json in each one. The mpack.json contains the name of the mpack. You can write a small script to iterate through the directories and print all the mpack names.

Thanks, Aditya
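The small script mentioned in step 3 could look like the sketch below. The post says mpack.json carries the mpack name; the sketch assumes it appears as a "name": "..." entry on one line, so treat the sed scrape as an approximation rather than a JSON parser:

```shell
# list_mpacks DIR — print the mpack name from each <dir>/mpack.json,
# skipping the "cache" directory as the post describes.
list_mpacks() {
  for d in "$1"/*/; do
    [ "$(basename "$d")" = "cache" ] && continue
    [ -f "${d}mpack.json" ] || continue
    # crude scrape of the "name" field; first match only
    sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' \
      "${d}mpack.json" | head -n 1
  done
}

# Usage with the default staging path from step 2:
#   list_mpacks /var/lib/ambari-server/resources/mpacks
```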
10-06-2017
06:46 AM
Hi @Piyush Chauhan, This is a practical exam where you will need to perform a few tasks; there won't be multiple-choice questions. You can check the objectives of the exam here. You can also take a practice exam before attempting the main one; find the instructions for the practice test here. Thanks, Aditya
10-06-2017
06:29 AM
Hi @Sindhu, I'm sorry, my question was not clear. I was trying to list all the mpacks installed in Ambari, which we usually install using ambari-server install-mpack <args>
10-06-2017
06:18 AM
1 Kudo
How can I list all the installed mpacks in Ambari? I want to uninstall a few mpacks, and the uninstall-mpack command asks for the mpack name, so I need to see all the installed mpacks first. Thanks, Aditya
Labels:
- Apache Ambari
10-05-2017
11:23 AM
Hi @Ismael Boumedien, First, confirm that you are using /hbase-secure and not /hbase-unsecure. Can you please check whether ZooKeeper is listening on localhost:

netstat -tupln | grep 2181

Can you try passing all the ZooKeeper quorum nodes, i.e. <zk1:2181>,<zk2:2181>,...? Also, check whether you can connect to Phoenix using a client such as sqlline:

cd /usr/hdp/current/phoenix-client/bin/
./sqlline.py localhost:2181:/hbase-secure

Thanks, Aditya
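To try the full quorum rather than just localhost, a small helper can assemble the comma-separated host:port string that sqlline expects. A sketch; the function name and the host names are placeholders:

```shell
# zk_quorum "host1 host2 ..." [PORT] — join hosts into host:port,host:port,...
zk_quorum() {
  port="${2:-2181}"   # ZooKeeper's default client port
  out=""
  for h in $1; do
    out="${out:+$out,}$h:$port"
  done
  printf '%s\n' "$out"
}

# Usage with the root znode from the post:
#   ./sqlline.py "$(zk_quorum "zk1 zk2 zk3"):/hbase-secure"
```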