Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2365 | 12-06-2018 12:25 PM |
| | 2427 | 11-27-2018 06:00 PM |
| | 1861 | 11-22-2018 03:42 PM |
| | 3012 | 11-20-2018 02:00 PM |
| | 5391 | 11-19-2018 03:24 PM |
10-26-2017
06:11 PM
@uri ben-ari, This value is calculated by the stack advisor: 'yarn.nodemanager.resource.memory-mb' = int(round(min(clusterData['containers'] * clusterData['ramPerContainer'], nodemanagerMinRam)))
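As a rough sketch of that calculation (the input values below are hypothetical; in Ambari, clusterData and nodemanagerMinRam are computed internally by the stack advisor):

```python
# Sketch of the stack advisor formula quoted above. The clusterData values
# and nodemanagerMinRam are hypothetical stand-ins for what Ambari computes.
clusterData = {"containers": 8, "ramPerContainer": 2048}  # MB per container
nodemanagerMinRam = 1048576  # bound used by the formula, in MB

memory_mb = int(round(min(
    clusterData["containers"] * clusterData["ramPerContainer"],
    nodemanagerMinRam,
)))
print(memory_mb)  # 8 * 2048 = 16384
```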
10-25-2017
07:33 AM
@ANSARI FAHEEM AHMED Can you please let us know how you created that user and HDFS directory (the exact command that you used)? Or did you use some other tool / Java code to do that? Or did you use AD/LDAP to sync the users?
02-14-2018
09:45 AM
You can use the snippet below, but you need to have run the stack advisor once through the normal flow on the Ambari server first, so that the input JSON files exist:

/var/lib/ambari-server/resources/scripts/stack_advisor.py recommend-configurations /var/run/ambari-server/stack-recommendations/1/hosts.json /var/run/ambari-server/stack-recommendations/1/services.json
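A hedged sketch of that invocation assembled programmatically (paths copied from the command above; the "1" request directory under stack-recommendations depends on which run exists on your server):

```python
# Build the stack_advisor.py command line from the paths in the post above.
# The request directory "1" is whatever recommendation run exists on your server.
script = "/var/lib/ambari-server/resources/scripts/stack_advisor.py"
rec_dir = "/var/run/ambari-server/stack-recommendations/1"
args = [
    script,
    "recommend-configurations",
    rec_dir + "/hosts.json",
    rec_dir + "/services.json",
]
cmd = " ".join(args)
print(cmd)
# On the actual Ambari server you would run it, e.g.:
# import subprocess; subprocess.run(args, check=True)
```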
10-16-2017
06:49 PM
@Ivan Majnaric, There is no harm in running sqlline.py again; it is just a client for querying Phoenix. It will create the SYSTEM tables if they are not already created. You can check Josh's answer at this link: https://community.hortonworks.com/questions/64005/phoenix-security-and-initial-system-table-creation.html If this works for you, please mark the answer as accepted so that it will be useful for the community. Thanks, Aditya
10-13-2017
10:38 AM
3 Kudos
1) Download the Ambari Integration module and HDFS transparency connector. You can get it here.

2) Stop all the services from Ambari: Ambari -> Actions -> Stop All.

3) After all services have stopped, go to the Spectrum Scale service and unintegrate transparency: SpectrumScale -> Service Actions -> Unintegrate Transparency. This step replaces the HDFS transparency modules with native HDFS and adds back the Secondary Namenode.

4) Delete the Spectrum Scale service: SpectrumScale -> Service Actions -> Delete Service. Type "delete" to confirm deletion.

5) Extract the tar file downloaded in step 1 on your Ambari server node and run the mPack uninstaller:

./SpectrumScaleMPackUninstaller.py

The uninstaller prompts for a few values such as the Ambari IP and username; enter them.

6) The above steps delete the service from Ambari. Restart the Ambari server and start all services: Ambari -> Actions -> Start All.

If you were using an existing ESS cluster, the nodes will still be part of the ESS cluster, so remove them from the ESS cluster as well. To remove the nodes, perform the steps below.

1) On any ESS cluster node, run mmlscluster:

[root@xxx-ems1 ~]# mmlscluster
GPFS cluster information
========================
GPFS cluster name: xxx.yyy.net
GPFS cluster id: 123123123123132123123
GPFS UID domain: xxx.yyy.net
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR
Node Daemon node name IP address Admin node name Designation
------------------------------------------------------------------------------------------------------------------------------------------
1 xxx-yyy1-hs.gpfs.net 10.10.99.11 xxx-yyy1-hs.gpfs.net quorum-manager-perfmon
2 xxx-yyy2-hs.gpfs.net 10.10.99.12 xxx-yyy2-hs.gpfs.net quorum-manager-perfmon
3 xxx-ems1-hs.gpfs.net 10.10.99.13 xxx-ems1-hs.gpfs.net quorum-perfmon
62 hdp-gpds-node-6.openstacklocal 10.101.76.62 hdp-gpds-node-6.openstacklocal
63 hdp-gpds-node-5.openstacklocal 10.101.76.40 hdp-gpds-node-5.openstacklocal
64 hdp-gpds-node-4.openstacklocal 10.101.76.60 hdp-gpds-node-4.openstacklocal

This is sample output from the mmlscluster command. In the output above, nodes 1, 2, and 3 are the ESS nodes, and nodes 62, 63, and 64 are the HDP cluster nodes that were enrolled into the ESS cluster. We will go ahead and delete the HDP nodes.

2) Delete a node by running the command below:

mmdelnode -N hdp-gpds-node-6.openstacklocal

Run this command for every HDP node. After deleting all the nodes, you can confirm that they are really deleted by running "mmlscluster" again.

Hope this helps 🙂
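The per-node deletion can be scripted; here is a dry-run sketch (hostnames taken from the sample listing above; on a real ESS node you would execute the commands instead of printing them):

```python
# Dry run: build the mmdelnode command for each HDP node shown in the sample
# mmlscluster output. Run the real commands on an ESS cluster node.
hdp_nodes = [
    "hdp-gpds-node-6.openstacklocal",
    "hdp-gpds-node-5.openstacklocal",
    "hdp-gpds-node-4.openstacklocal",
]
commands = ["mmdelnode -N " + node for node in hdp_nodes]
for cmd in commands:
    print(cmd)  # replace with subprocess.run(cmd.split(), check=True) for real
```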
10-13-2017
03:03 PM
Sorry, I am unable to find it.
10-13-2017
06:59 AM
Aditya, I got this error in the Zeppelin UI.
10-11-2017
12:38 PM
@Aditya Sirna That's it. Thank you so much.
10-18-2017
07:39 AM
A few minutes before I saw this post, I had just solved the problem. I had two issues.
One: I had not created the Hive DB (CREATE DATABASE hive;).
I based this on your post at https://community.hortonworks.com/answers/107905/view.html
The other issue was in the DB connection URL;
I changed it to localhost. I am trying to accept your answer, but I can't; I don't have a button for it?
The next stage is to try it with a non-root install.
10-08-2017
07:11 AM
3 Kudos
This article describes the steps to add the Spectrum Scale service to an HDP cluster. We will be using an existing ESS cluster.

Pre-Requisites:

1) Download the Ambari Integration module and HDFS transparency connector. You can get it here.

2) Collect the details of the existing ESS cluster (IPs / hostnames / public key).

3) Install the required packages:

yum -y install kernel-devel cpp gcc gcc-c++ binutils ksh libstdc++ libstd++-devel compact-libstdc++ imake make nc

Note: Make sure that the kernel and kernel-devel versions are the same.

4) Ensure that you set up password-less ssh from the Ambari server to the ESS cluster nodes, and from the ESS node to all the nodes in the cluster.

5) On the Ambari server node, create a file called shared_gpfs_node.cfg under the "/var/lib/ambari-server/resources/" directory and add the FQDN of any node in the ESS cluster. Make sure you add only one FQDN and that password-less ssh is set up to this node from the Ambari server node.

Note: Add the mapping in /etc/hosts for the FQDN above.

Installing the Ambari Integration Module:

1) Download and untar the Ambari Integration module in some directory on the Ambari server node. The directory contains the following files:

SpectrumScaleIntegrationPackageInstaller-2.4.2.0.bin
SpectrumScaleMPackInstaller.py
SpectrumScaleMPackUninstaller.py
SpectrumScale_UpgradeIntegrationPackage-BI425 (Required for IOP to HDP migration)

2) Stop all the services from Ambari: Login to Ambari -> Actions -> Stop All.

3) Run the installer bin script and accept the license. It will prompt for a few inputs which you have to enter.

cd <dir where you have extracted the tar>
./SpectrumScaleIntegrationPackageInstaller-2.4.2.0.bin

Once you have completed installing the Ambari Integration Module, you can proceed to adding the Spectrum Scale service.

Adding the IBM Spectrum Scale Service:

1) Login to Ambari. Click Actions -> Add Service.

2) On the Choose Services page, select "Spectrum Scale" and click Next.

3) On the Assign Masters page, select where the GPFS Master has to be installed and click Next. NOTE: The GPFS Master has to be installed on the same node as the Ambari server.

4) On the Assign Slaves and Clients page, select the nodes where GPFS nodes have to be installed. At a minimum, it is recommended to install GPFS nodes on the nodes where the Namenode(s) and Datanode(s) are running. Click Next when you are done selecting.

5) On the Customize Services page, you will be prompted to enter AMBARI_USER_PASSWORD and GPFS_REPO_URL, which are the Ambari password and the repo directory where the IBM Spectrum Scale rpms are located, respectively. If you are using a local repository, copy the HDFS transparency package downloaded in the 1st step of the pre-requisites, put it in the directory where the other RPMs are present, and run 'createrepo .'. Check that GPFS Cluster Name, GPFS quorum nodes, and GPFS File system name are populated with the existing ESS cluster details. Change the value of "gpfs.storage.type" to "shared". Ensure that gpfs.supergroup is set to "hadoop,root". Click Next after you are done.

6) If it is a Kerberized environment, you have to Configure Identities and click Next.

7) On the Review page, check the URLs and click Deploy.

8) Complete the further installation process by clicking Next.

9) Restart the Ambari server by running the below command on the Ambari server node:

ambari-server restart

Note: Do not restart the services before restarting the Ambari server.

Post Installation Steps:

1) Login to Ambari and set the HDFS replication factor to 1: HDFS -> Configs -> Advanced -> General -> Block Replication.

2) Restart all the services: Actions -> Start All.

3) Once all the services are up, you may see a "Namenode Last Checkpoint" alert on HDFS. This is because HDFS Transparency does not do checkpointing, since IBM Spectrum Scale is stateless. So you can disable the alert: Click on the Alert -> Disable.

Additional References:

https://developer.ibm.com/storage/2017/06/16/top-five-benefits-ibm-spectrum-scale-hortonworks-data-platform/
https://www.redbooks.ibm.com/redpapers/pdfs/redp5448.pdf
https://community.hortonworks.com/content/kbentry/108565/ibm-spectrum-scale-423-certified-with-hdp-26-and-a.html

Hope this helps 🙂
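Pre-requisite step 5 (the shared_gpfs_node.cfg file) can be sketched as follows. This writes to a temp directory rather than /var/lib/ambari-server/resources/, and the ESS hostname is the sample one from the mmlscluster listing in the companion uninstall post, not a real node:

```python
import os
import tempfile

# Sketch of pre-requisite step 5: shared_gpfs_node.cfg must contain exactly
# one ESS node FQDN. Writing to a temp dir here; on a real cluster the file
# goes under /var/lib/ambari-server/resources/.
ess_node_fqdn = "xxx-ems1-hs.gpfs.net"  # any one ESS node reachable via ssh
cfg_path = os.path.join(tempfile.gettempdir(), "shared_gpfs_node.cfg")
with open(cfg_path, "w") as f:
    f.write(ess_node_fqdn + "\n")  # one FQDN only, nothing else

with open(cfg_path) as f:
    print(f.read().strip())
```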