Member since 11-07-2016 · 637 Posts · 253 Kudos Received · 144 Solutions
10-16-2018
03:27 AM
2 Kudos
If you have erasure coded a directory and perform some operations on it, you might have observed WARN messages like the ones below.

WARN erasurecode.ErasureCodeNative: Loading ISA-L failed: Failed to load libisal.so.2 (libisal.so.2: cannot open shared object file: No such file or directory)
WARN erasurecode.ErasureCodeNative: ISA-L support is not available in your platform... using builtin-java codec where applicable

These WARN messages are due to the ISA-L library not being present on the node. Below are the steps to enable the library.

1) Clone the isa-l GitHub repository.
# git clone https://github.com/01org/isa-l.git

2) Go to the cloned directory.
# cd isa-l

3) Install yasm if you do not have it already.
# yum install -y yasm ---> CentOS
# apt-get install yasm ---> Ubuntu

4) Build the library.
# make -f Makefile.unx

5) Copy the library files to the lib directory.
# cp bin/libisal.so bin/libisal.so.2 /lib64

6) Verify that the ISA-L library is enabled properly.
# hadoop checknative
Expected output
18/10/12 10:20:03 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
18/10/12 10:20:03 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/hdp/3.0.0.0-1634/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
zstd : false
snappy: true /usr/hdp/3.0.0.0-1634/hadoop/lib/native/libsnappy.so.1
lz4: true revision:10301
bzip2: true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
ISA-L: true /lib64/libisal.so.2 -------------> Shows that ISA-L is loaded.
If the output of step 6 shows /usr/lib64 instead of /lib64, copy the .so files from step 5 to the /usr/lib64 directory instead. Perform the steps on all DataNode and NameNode hosts, or copy the .so files from this node to the /lib64 directory of all other nodes.
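As a quick end-to-end smoke test, you can set an erasure coding policy on a test directory and write a file to it. This is only a sketch — the directory /tmp/ecdir and the RS-6-3-1024k policy are assumptions; pick a policy your cluster can actually satisfy.

# hdfs dfs -mkdir /tmp/ecdir
# hdfs ec -enablePolicy -policy RS-6-3-1024k
# hdfs ec -setPolicy -path /tmp/ecdir -policy RS-6-3-1024k
# hdfs dfs -put /etc/hosts /tmp/ecdir/

If ISA-L is picked up correctly, the write completes without the ISA-L WARN messages shown above.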
Hope this helps 🙂
08-04-2018
02:41 AM
3 Kudos
Note: This feature is available from HDP 3.0 (Ambari 2.7).
Ambari 2.7 has a cool new feature: it is integrated with Swagger, so you can try out and explore all the REST APIs.
Steps to use Swagger
Log in to Ambari.
Hit this URL: http://{ambari-host}:8080/api-docs
This page takes you to the API explorer, where you can try different APIs.
You can get all the supported endpoints from http://{ambari-host}:8080/api-docs/swagger.json
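If you prefer the command line, you can fetch the same definition with curl (a sketch, assuming the default admin account and port; adjust to your setup):

curl -u admin:admin http://{ambari-host}:8080/api-docs/swagger.json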
Hope this helps 🙂
03-29-2018
10:08 AM
1 Kudo
Issue: Failed to list functions from Phoenix sqlline.py.

Command run:
select * from SYSTEM.FUNCTION;

Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "NAME", got "FUNCTION" at line 1, column 22. (state=42P00,code=604)
org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "NAME", got "FUNCTION" at line 1, column 22.
at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
at org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MismatchedTokenException(65!=99)
at org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:360)
at org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at org.apache.phoenix.parse.PhoenixSQLParser.parseNoReserved(PhoenixSQLParser.java:9986)
at org.apache.phoenix.parse.PhoenixSQLParser.identifier(PhoenixSQLParser.java:9953)
at org.apache.phoenix.parse.PhoenixSQLParser.from_table_name(PhoenixSQLParser.java:9606)
at org.apache.phoenix.parse.PhoenixSQLParser.table_factor(PhoenixSQLParser.java:6261)
at org.apache.phoenix.parse.PhoenixSQLParser.table_ref(PhoenixSQLParser.java:6083)
at org.apache.phoenix.parse.PhoenixSQLParser.table_list(PhoenixSQLParser.java:6019)
at org.apache.phoenix.parse.PhoenixSQLParser.parseFrom(PhoenixSQLParser.java:5984)
at org.apache.phoenix.parse.PhoenixSQLParser.single_select(PhoenixSQLParser.java:4612)
at org.apache.phoenix.parse.PhoenixSQLParser.unioned_selects(PhoenixSQLParser.java:4714)
at org.apache.phoenix.parse.PhoenixSQLParser.select_node(PhoenixSQLParser.java:4780)
at org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:789)
at org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:508)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 9 more

Issue: FUNCTION is a keyword, so this command doesn't work.

Resolution: Run select * from SYSTEM."FUNCTION"; to list the functions.
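The same quoting rule applies to any Phoenix reserved word used as an identifier; note that double-quoted identifiers are also case-sensitive, so the name must stay upper case here. For example (the LIMIT is only there to keep the output short):

select * from SYSTEM."FUNCTION" limit 10;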
03-27-2018
08:38 AM
5 Kudos
In this article, we will see how to produce messages using a simple Python script, consume them using the ConsumeMQTT processor, and put them in HDFS using PutHDFS.

Note: I'm using CentOS 7 and HDP 2.6.3 for this article.

1) Install MQTT
sudo yum -y install epel-release
sudo yum -y install mosquitto

2) Start MQTT
sudo systemctl start mosquitto
sudo systemctl enable mosquitto

3) Install the paho-mqtt Python library
yum install python-pip
pip install paho-mqtt

4) Configure the MQTT password for the user. I have created a sample user 'aditya' and set the password to 'test'.
[root@test-instance-4 ~]# useradd aditya
[root@test-instance-4 ~]# sudo mosquitto_passwd -c /etc/mosquitto/passwd aditya
Password:
Reenter password:

5) Disable anonymous login to MQTT. Open the file /etc/mosquitto/mosquitto.conf, add the entries below and restart mosquitto.
allow_anonymous false
password_file /etc/mosquitto/passwd
sudo systemctl restart mosquitto

6) Design the NiFi flow to consume messages and put them into HDFS.
Configure the ConsumeMQTT processor: right click on ConsumeMQTT -> Configure -> Properties, then set Broker URI, Client ID, username, password, Topic filter and Max Queue Size.
Configure the PutHDFS processor: set Hadoop Configuration Resources and Directory (to store the messages).

7) Create a sample Python script to publish messages. Use the attached mqttpublish.txt and rename it to MQTTPublish.py (a minimal sketch of such a publisher is shown after these steps).
8) Run the NiFi flow.

9) Run the Python script.
python MQTTPublish.py

10) Check the directory to verify that the messages were put in HDFS.
hdfs dfs -ls /user/aditya/
hdfs dfs -cat /user/aditya/*
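For reference, here is a minimal publisher sketch. This is not the attached script — the broker host, port, topic and credentials below are assumptions, so match them to your mosquitto setup and to the Topic filter configured in ConsumeMQTT.

# MQTTPublish.py - minimal paho-mqtt publisher sketch
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.username_pw_set("aditya", "test")   # the user created in step 4
client.connect("localhost", 1883)          # default mosquitto port (assumed broker host)

# publish ten sample messages on an assumed topic
for i in range(10):
    client.publish("test/topic", "message %d" % i)
    time.sleep(1)

client.disconnect()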
Hope this helps 🙂

Attachment: mqttpublish.txt
03-13-2018
08:51 AM
1 Kudo
Issue: Knox Gateway fails to start with "org.apache.hadoop.gateway.services.security.KeystoreServiceException: java.io.IOException: Keystore was tampered with, or password was incorrect". Below are the startup logs.

2018-03-13 05:17:47,189 INFO hadoop.gateway (GatewayServer.java:logSysProp(193)) - System Property: user.name=knox
2018-03-13 05:17:47,193 INFO hadoop.gateway (GatewayServer.java:logSysProp(193)) - System Property: user.dir=/var/lib/knox
2018-03-13 05:25:26,853 INFO hadoop.gateway (GatewayServer.java:logSysProp(193)) - System Property: java.runtime.name=OpenJDK Runtime Environment
2018-03-13 05:25:26,853 INFO hadoop.gateway (GatewayServer.java:logSysProp(193)) - System Property: java.runtime.version=1.8.0_131-b11
2018-03-13 05:25:26,854 INFO hadoop.gateway (GatewayServer.java:logSysProp(193)) - System Property: java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-2.b11.el7_3.x86_64/jre
2018-03-13 05:25:27,230 INFO hadoop.gateway (GatewayConfigImpl.java:loadConfigResource(322)) - Loading configuration resource jar:file:/usr/hdp/2.5.5.0-157/knox/bin/../lib/gateway-server-0.9.0.2.5.5.0-157.jar!/conf/gateway-default.xml
2018-03-13 05:25:27,244 INFO hadoop.gateway (GatewayConfigImpl.java:loadConfigFile(310)) - Loading configuration file /usr/hdp/2.5.5.0-157/knox/bin/../conf/gateway-site.xml
2018-03-13 05:25:27,302 INFO hadoop.gateway (GatewayConfigImpl.java:initGatewayHomeDir(254)) - Using /usr/hdp/2.5.5.0-157/knox/bin/.. as GATEWAY_HOME via system property.
2018-03-13 05:25:28,000 ERROR hadoop.gateway (BaseKeystoreService.java:getKeystore(161)) - Failed to load keystore [filename=__gateway-credentials.jceks, type=JCEKS]: java.io.IOException: Keystore was tampered with, or password was incorrect
2018-03-13 05:25:28,000 ERROR hadoop.gateway (DefaultAliasService.java:getPasswordFromAliasForCluster(100)) - Failed to get credential for cluster __gateway: org.apache.hadoop.gateway.services.security.KeystoreServiceException: java.io.IOException: Keystore was tampered with, or password was incorrect
2018-03-13 05:25:28,001 FATAL hadoop.gateway (GatewayServer.java:main(151)) - Failed to start gateway: org.apache.hadoop.gateway.services.ServiceLifecycleException: Provisioned signing key passphrase cannot be acquired.

Root cause: The keystore file was corrupted.

Resolution: Move the corrupted files to a temp directory and restart Knox. Knox will create the files again and the restart will succeed.
ssh knoxhost
mkdir /tmp/keystores
mv /usr/hdp/current/knox-server/data/security/keystores/* /tmp/keystores
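If you want to confirm that the keystore really is unreadable (before moving it, or after Knox has recreated it), you can list it with keytool — a sketch, noting that the credential store password is the Knox master secret:

keytool -list -storetype JCEKS -keystore /usr/hdp/current/knox-server/data/security/keystores/__gateway-credentials.jceks

On a corrupted file this fails with the same "Keystore was tampered with, or password was incorrect" error.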
Hope this helps 🙂
02-21-2018
09:01 AM
3 Kudos
Issue: When running the Hive shell inside a Docker container, the message "mbind: Operation not permitted" is printed on the console, but the operations themselves pass.

Root cause: The mbind syscall is used for NUMA (non-uniform memory access) operations and is blocked by Docker by default, while the Hive opts contain an option that specifies '+UseNUMA'.

Resolution: Go to Ambari -> Hive -> Configs -> Advanced.
1) Remove '-XX:+UseNUMA' from 'hive.tez.java.opts'.
2) Remove '-XX:+UseNUMA' from the hive-env template.
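If you would rather keep the NUMA optimization, an alternative sketch is to grant the container CAP_SYS_NICE, since Docker's default seccomp profile allows mbind when that capability is present (my-hive-image is a placeholder for your image):

docker run --cap-add=SYS_NICE my-hive-image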
Hope this helps 🙂
01-20-2018
05:19 AM
1 Kudo
@Keith Swanson, you should pass your inputs inside "parameters":

curl -u <username>:<password> -X POST -H 'X-Requested-By:ambari' -d '{"RequestInfo":{"context":"Execute my action", "action":"my_action", "parameters" : {"my_input" : "value"}}, "Requests/resource_filters":[{"service_name":"", "component_name":"", "hosts":"<comma_separated_host_names>"}]}' http://<ambari_host>:<port>/api/v1/clusters/<cluster_name>/requests
11-29-2017
08:51 AM
4 Kudos
This article describes how to store your Zeppelin notes in a GitHub repo.

1) Create a GitHub repo where you want to store your Zeppelin notes. I have created a repo named 'zeppelin-notes'. Select SSH for cloning; we will be using an SSH key to perform the git operations.

2) Copy the SSH public key from the node where Zeppelin is installed and add it in GitHub. The commands below display the public key; in GitHub, click on "New SSH Key", give the key a title, paste the content and click save.
# su zeppelin
# cat ~/.ssh/id_rsa.pub

3) Clone the git repo on the node where Zeppelin is installed and set proper permissions on the folder.
su zeppelin
cd /usr/hdp/current/zeppelin-server/
git clone git@github.com:cvr-aditya/zeppelin-notes.git
chown zeppelin:zeppelin zeppelin-notes
chmod -R 777 zeppelin-notes

4) Change the Zeppelin notebook directory path: go to Ambari -> Zeppelin -> Configs -> Advanced zeppelin-config and set the value of zeppelin.notebook.dir to the GitHub repo name (i.e. zeppelin-notes).

5) Change the storage for Zeppelin: go to Ambari -> Zeppelin -> Configs -> Advanced zeppelin-config and set the value of zeppelin.notebook.storage to org.apache.zeppelin.notebook.repo.GitNotebookRepo. After performing steps 4 and 5, restart Zeppelin (the resulting configuration is sketched at the end of this article).

6) Log in to Zeppelin and create a new notebook. I named it 'FirstNote'.

7) After making some changes you may want to commit the note. Click on the version control button in the notebook toolbar, write your commit message and click Commit.

8) Check if your commit was successful.
cd /usr/hdp/current/zeppelin-server/zeppelin-notes
git log

9) You can check the state of the notebook at a particular commit by setting the head revision to that commit from the same version control button.

10) You can push your commits to the repo by running the following commands.
cd /usr/hdp/current/zeppelin-server/zeppelin-notes
git push origin {branch-name}
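For reference, after the restart the two settings from steps 4 and 5 end up in zeppelin-site.xml roughly as below. This is only an illustration — Ambari manages this file, so there is no need to edit it by hand:

<property>
  <name>zeppelin.notebook.dir</name>
  <value>zeppelin-notes</value>
</property>
<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.GitNotebookRepo</value>
</property>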
Hope this helps 🙂

Reference: https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/

Thanks,
Aditya
11-10-2017
04:15 PM
1 Kudo
You can follow the steps below to upgrade your Spectrum Scale transparency.

1. Download the latest HDFS Transparency connector (gpfs.hdfs-protocol.rpm).

2. Delete the old rpm from your local repository. There should be only one hdfs-protocol rpm in the RPMs directory.

3. Place the latest rpm downloaded in step 1 in the directory.

4. Refresh the local repository to pick up the latest rpm (a quick verification is sketched after these steps).
cd <local rpm repository dir>
createrepo .

5. Stop all services from Ambari: Ambari -> Actions -> Stop All

6. Upgrade transparency: Ambari -> Spectrum Scale -> Service Actions -> Upgrade Transparency

7. Restart the Ambari server: ambari-server restart

8. Start all services from Ambari: Ambari -> Actions -> Start All
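To confirm that the repository now serves the new rpm after step 4, a quick check (a sketch — <local-repo-id> is a placeholder for your repository's id in yum):

yum clean metadata
yum --disablerepo='*' --enablerepo=<local-repo-id> list available | grep gpfs.hdfs-protocol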
You can check SpectrumScale installation here. Uninstallation here. Download the Transparency connector from here.

Hope this helps 🙂