Member since: 11-07-2016
637 Posts
253 Kudos Received
144 Solutions
03-31-2022
02:03 PM
Hello, I have the errors mentioned in the description. I am following the procedure, but "make -f Makefile.unx" fails with the following errors:

[root@emkioqlnclo01 isa-l]# make -f Makefile.unx
  ---> Building erasure_code/gf_vect_mul_sse.asm x86_64
  ---> Building erasure_code/gf_vect_mul_avx.asm x86_64
  ---> Building erasure_code/gf_vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_2vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_3vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_4vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_5vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_6vect_dot_prod_sse.asm x86_64
  ---> Building erasure_code/gf_2vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_3vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_4vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_5vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_6vect_dot_prod_avx.asm x86_64
  ---> Building erasure_code/gf_2vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_3vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_4vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_5vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_6vect_dot_prod_avx2.asm x86_64
  ---> Building erasure_code/gf_vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_2vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_3vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_4vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_5vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_6vect_mad_sse.asm x86_64
  ---> Building erasure_code/gf_vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_2vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_3vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_4vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_5vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_6vect_mad_avx.asm x86_64
  ---> Building erasure_code/gf_vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_2vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_3vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_4vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_5vect_mad_avx2.asm x86_64
  ---> Building erasure_code/gf_6vect_mad_avx2.asm x86_64
  ---> Building erasure_code/ec_multibinary.asm x86_64
multibinary.asm:283: error: expression syntax error
multibinary.asm:359: error: expression syntax error
make: *** [bin/ec_multibinary.o] Error 1
08-04-2018
02:41 AM
3 Kudos
Note: This feature is available from HDP 3.0 (Ambari 2.7).
Ambari 2.7 has a cool new feature: it is integrated with Swagger, so you can try out and explore all the REST APIs.
Steps to use Swagger
1) Login to Ambari.
2) Open this URL: http://{ambari-host}:8080/api-docs
This page takes you to the API explorer, where you can try different APIs.
You can get all the supported endpoints from http://{ambari-host}:8080/api-docs/swagger.json
Hope this helps 🙂
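As a quick sanity check, the same spec can also be pulled programmatically. A minimal sketch, assuming the default port and admin credentials (the hostname below is a placeholder, substitute your own):

```python
# Sketch: fetch Ambari's Swagger spec and list the documented API paths.
import base64
import json
import urllib.request

AMBARI_HOST = "ambari.example.com"  # assumption: replace with your Ambari host

def swagger_url(host, port=8080):
    """Build the URL of the machine-readable API spec."""
    return "http://{}:{}/api-docs/swagger.json".format(host, port)

def list_endpoints(host, user="admin", password="admin"):
    """Fetch swagger.json over HTTP basic auth and return the API paths."""
    req = urllib.request.Request(swagger_url(host))
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        spec = json.load(resp)
    return sorted(spec.get("paths", {}))
```

Calling list_endpoints(AMBARI_HOST) against a live Ambari server would return the same endpoint list the explorer page shows.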
09-11-2018
03:35 PM
Hi Aditya, you need to quote the schema here; it's a reserved word. This works: select * from "SYSTEM"."FUNCTION";
03-27-2018
08:38 AM
5 Kudos
In this article we will see how to produce messages using a simple Python script, consume them with the ConsumeMQTT processor, and put them in HDFS using PutHDFS.
Note: I'm using CentOS 7 and HDP 2.6.3 for this article.

1) Install MQTT (Mosquitto)
sudo yum -y install epel-release
sudo yum -y install mosquitto

2) Start MQTT
sudo systemctl start mosquitto
sudo systemctl enable mosquitto

3) Install the paho-mqtt Python library
yum install python-pip
pip install paho-mqtt

4) Configure an MQTT password for the user. I have created a sample user 'aditya' and set the password to 'test'.
[root@test-instance-4 ~]# useradd aditya
[root@test-instance-4 ~]# sudo mosquitto_passwd -c /etc/mosquitto/passwd aditya
Password:
Reenter password:

5) Disable anonymous login to MQTT. Open the file /etc/mosquitto/mosquitto.conf, add the entries below, and restart Mosquitto.
allow_anonymous false
password_file /etc/mosquitto/passwd
sudo systemctl restart mosquitto

6) Design the NiFi flow to consume messages and put them into HDFS.
Configure the ConsumeMQTT processor: right-click ConsumeMQTT -> Configure -> Properties, then set Broker URI, Client ID, username, password, Topic Filter and Max Queue Size.
Configure the PutHDFS processor: set Hadoop Configuration Resources and Directory (where the messages will be stored).

7) Create a sample Python script to publish messages. Use the attached mqttpublish.txt and rename it to MQTTPublish.py.

8) Run the NiFi flow.

9) Run the attached Python script.
python MQTTPublish.py

10) Check the directory to verify that the messages were put in HDFS.
hdfs dfs -ls /user/aditya/
hdfs dfs -cat /user/aditya/*

Hope this helps 🙂
mqttpublish.txt
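For reference, here is a minimal publisher sketch along the lines of the attached script. The broker host, topic name and payload shape are my own illustrative choices, not necessarily what the attachment uses:

```python
# Illustrative MQTT publisher sketch; assumes a local Mosquitto broker and
# the 'aditya'/'test' credentials created in step 4.
# Requires: pip install paho-mqtt
import json

BROKER_HOST = "localhost"   # assumption: broker runs on this host
TOPIC = "sensors/demo"      # hypothetical topic; match your ConsumeMQTT Topic Filter

def make_payload(reading_id, value):
    """Build the JSON payload to publish."""
    return json.dumps({"id": reading_id, "value": value})

def publish_all(readings):
    """Connect, publish one message per reading, then disconnect."""
    import paho.mqtt.client as mqtt  # imported here so the sketch loads without the library
    client = mqtt.Client()
    client.username_pw_set("aditya", "test")
    client.connect(BROKER_HOST, 1883)
    for i, value in enumerate(readings):
        client.publish(TOPIC, make_payload(i, value))
    client.disconnect()
```

Calling publish_all([21.5, 22.0]) against a running broker would emit two JSON messages for the ConsumeMQTT processor to pick up.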
04-19-2018
12:06 PM
This worked for me, thanks 🙂 Cheers!
02-21-2018
09:01 AM
3 Kudos
Issue: When running the Hive shell inside a Docker container, the message "mbind: Operation not permitted" is printed on the console, although the operations themselves succeed.

Root Cause: The mbind syscall is used for NUMA (non-uniform memory access) operations and is blocked by Docker by default, but the Hive JVM options include '-XX:+UseNUMA'.

Resolution: Go to Ambari -> Hive -> Configs -> Advanced.
1) Remove '-XX:+UseNUMA' from 'hive.tez.java.opts'.
2) Remove '-XX:+UseNUMA' from the hive-env template.
Hope this helps 🙂
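The fix boils down to deleting the single -XX:+UseNUMA token from the relevant JVM option strings. A minimal sketch of that edit (the sample options string in the usage note is illustrative):

```python
def drop_numa_flag(java_opts):
    """Return the JVM options string with the -XX:+UseNUMA token removed."""
    return " ".join(tok for tok in java_opts.split() if tok != "-XX:+UseNUMA")
```

For example, drop_numa_flag("-Xmx4g -XX:+UseNUMA -server") yields "-Xmx4g -server"; applying the same deletion in Ambari's config fields is what the two resolution steps do by hand.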
02-14-2018
04:13 PM
Hey, I am unable to change the Zeppelin config for some reason. I tried to edit the zeppelin-site.xml.template file in the conf folder, but the changes aren't reflected in Zeppelin even after restarting. Am I missing something?
07-12-2018
01:36 PM
@Aditya: Thanks, this is really informative.
10-13-2017
10:38 AM
3 Kudos
1) Download the Ambari Integration module and HDFS transparency connector. You can get it here.

2) Stop all the services from Ambari: Ambari -> Actions -> Stop All

3) After all services have stopped successfully, go to the Spectrum Scale service and unintegrate transparency: SpectrumScale -> Service Actions -> Unintegrate Transparency. This step replaces the HDFS transparency modules with native HDFS and adds back the Secondary NameNode.

4) Delete the Spectrum Scale service: SpectrumScale -> Service Actions -> Delete Service. Type "delete" to confirm deletion.

5) Extract the tar file downloaded in step 1 on your Ambari Server node and run the mPack uninstaller:
./SpectrumScaleMPackUninstaller.py
The uninstaller prompts for a few values such as the Ambari IP and username. Enter them.

6) The steps above delete the service from Ambari. Restart the Ambari server and start all services: Ambari -> Actions -> Start All

If you were using an existing ESS cluster, the nodes are still part of the ESS cluster and must be removed from it as well. To remove the nodes, perform the steps below.

1) On any ESS cluster node, run mmlscluster:

[root@xxx-ems1 ~]# mmlscluster
GPFS cluster information
========================
  GPFS cluster name:         xxx.yyy.net
  GPFS cluster id:           123123123123132123123
  GPFS UID domain:           xxx.yyy.net
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name                 IP address     Admin node name                  Designation
---------------------------------------------------------------------------------------------------
   1   xxx-yyy1-hs.gpfs.net             10.10.99.11    xxx-yyy1-hs.gpfs.net             quorum-manager-perfmon
   2   xxx-yyy2-hs.gpfs.net             10.10.99.12    xxx-yyy2-hs.gpfs.net             quorum-manager-perfmon
   3   xxx-ems1-hs.gpfs.net             10.10.99.13    xxx-ems1-hs.gpfs.net             quorum-perfmon
  62   hdp-gpds-node-6.openstacklocal   10.101.76.62   hdp-gpds-node-6.openstacklocal
  63   hdp-gpds-node-5.openstacklocal   10.101.76.40   hdp-gpds-node-5.openstacklocal
  64   hdp-gpds-node-4.openstacklocal   10.101.76.60   hdp-gpds-node-4.openstacklocal

This is a sample response for the mmlscluster command. In the output above, nodes 1, 2 and 3 are the ESS nodes, and nodes 62, 63 and 64 are the HDP cluster nodes that were enrolled into the ESS cluster. We will go ahead and delete the HDP nodes.

2) Delete each node by running the command below:
mmdelnode -N hdp-gpds-node-6.openstacklocal
Run this command for every HDP node. After deleting all the nodes, confirm they are really gone by running "mmlscluster" again.
Hope this helps 🙂
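If there are many HDP nodes, their names can be picked out of the mmlscluster listing instead of typing each mmdelnode call by hand. A minimal sketch, assuming the tabular layout shown above (ESS rows carry a fifth Designation column, HDP rows do not):

```python
def hdp_node_names(mmlscluster_output):
    """Extract daemon node names of node rows that lack a Designation column."""
    names = []
    for line in mmlscluster_output.splitlines():
        parts = line.split()
        # A node row starts with an integer index; ESS rows have 5 columns
        # (the fifth being the Designation), HDP rows only 4.
        if parts and parts[0].isdigit() and len(parts) == 4:
            names.append(parts[1])
    return names

# Each extracted name would then be passed to: mmdelnode -N <name>
```

This is a convenience for building the mmdelnode command list; always eyeball the result against the mmlscluster output before deleting anything.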
10-08-2017
07:11 AM
3 Kudos
This article describes the steps to add the Spectrum Scale service to an HDP cluster. We will be using an existing ESS cluster.

Pre-Requisites:
1) Download the Ambari Integration module and HDFS transparency connector. You can get it here.
2) Collect the details of the existing ESS cluster (IPs, hostnames, public key).
3) Install the required packages:
yum -y install kernel-devel cpp gcc gcc-c++ binutils ksh libstdc++ libstdc++-devel compat-libstdc++ imake make nc
Note: Make sure that the kernel and kernel-devel versions are the same.
4) Set up password-less SSH from the Ambari server to the ESS cluster nodes, and from the ESS node to all the nodes in the cluster.
5) On the Ambari server node, create a file called shared_gpfs_node.cfg under the "/var/lib/ambari-server/resources/" directory and add the FQDN of any node in the ESS cluster. Make sure you add only one FQDN and that password-less SSH is set up from the Ambari server node to that node.
Note: Add the mapping for this FQDN in /etc/hosts.

Installing the Ambari Integration Module:
1) Download and untar the Ambari Integration module into some directory on the Ambari server node. The directory contains the following files:
SpectrumScaleIntegrationPackageInstaller-2.4.2.0.bin
SpectrumScaleMPackInstaller.py
SpectrumScaleMPackUninstaller.py
SpectrumScale_UpgradeIntegrationPackage-BI425 (required for IOP to HDP migration)
2) Stop all the services from Ambari: Login to Ambari -> Actions -> Stop All
3) Run the installer bin script and accept the license. It will prompt for a few inputs, which you have to enter.
cd <dir where you have extracted the tar>
./SpectrumScaleIntegrationPackageInstaller-2.4.2.0.bin
Once you have completed installing the Ambari Integration Module, you can proceed to adding the Spectrum Scale service.

Adding IBM Spectrum Scale Service:
1) Login to Ambari. Click Actions -> Add Service.
2) On the Choose Services page, select "Spectrum Scale" and click Next.
3) On the Assign Masters page, select where the GPFS Master has to be installed and click Next. NOTE: The GPFS Master has to be installed on the same node as the Ambari server.
4) On the Assign Slaves and Clients page, select the nodes where GPFS nodes have to be installed. At a minimum, it is recommended to install GPFS nodes on the nodes where the NameNode(s) and DataNode(s) are running. Click Next when you are done selecting.
5) On the Customize Services page:
You will be prompted to enter AMBARI_USER_PASSWORD and GPFS_REPO_URL, which are the Ambari password and the repository directory where the IBM Spectrum Scale RPMs are located, respectively. If you are using a local repository, copy the HDFS transparency package downloaded in step 1 of the pre-requisites into the directory with the other RPMs and run 'createrepo .'
Check that GPFS Cluster Name, GPFS quorum nodes and GPFS File system name are populated with the existing ESS cluster details.
Change the value of "gpfs.storage.type" to "shared".
Ensure that gpfs.supergroup is set to "hadoop,root".
Click Next after you are done.
6) If it is a Kerberized environment, you have to configure identities; then click Next.
7) On the Review page, check the URLs and click Deploy.
8) Complete the remaining installation process by clicking Next.
9) Restart the Ambari server by running the command below on the Ambari server node:
ambari-server restart
Note: Do not restart the services before restarting the Ambari server.

Post Installation Steps:
1) Login to Ambari and set the HDFS replication factor to 1: HDFS -> Configs -> Advanced -> General -> Block Replication
2) Restart all the services: Actions -> Start All
3) Once all the services are up, you may see a "Namenode Last Checkpoint" alert on HDFS. This is because HDFS Transparency does not do checkpointing, since IBM Spectrum Scale is stateless, so you can disable the alert: click on the Alert -> Disable.

Additional References:
https://developer.ibm.com/storage/2017/06/16/top-five-benefits-ibm-spectrum-scale-hortonworks-data-platform/
https://www.redbooks.ibm.com/redpapers/pdfs/redp5448.pdf
https://community.hortonworks.com/content/kbentry/108565/ibm-spectrum-scale-423-certified-with-hdp-26-and-a.html
Hope this helps 🙂
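Pre-requisite step 5 is easy to get wrong (the file must contain exactly one FQDN). A small sketch that enforces that constraint when writing the file; the example hostname in the usage note is hypothetical:

```python
def write_shared_gpfs_node_cfg(fqdn,
        path="/var/lib/ambari-server/resources/shared_gpfs_node.cfg"):
    """Write the single-FQDN config file the integration module expects."""
    fqdn = fqdn.strip()
    if not fqdn or " " in fqdn or "\n" in fqdn:
        raise ValueError("exactly one FQDN is expected")
    with open(path, "w") as f:
        f.write(fqdn + "\n")
```

For example, write_shared_gpfs_node_cfg("ess1.example.com") creates the file with that single hostname; passing two hostnames raises an error instead of silently producing a config the installer will reject.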