Member since
11-07-2016
637
Posts
252
Kudos Received
144
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1131 | 12-06-2018 12:25 PM |
 | 985 | 11-27-2018 06:00 PM |
 | 832 | 11-22-2018 03:42 PM |
 | 1568 | 11-20-2018 02:00 PM |
 | 2506 | 11-19-2018 03:24 PM |
10-06-2017
05:06 PM
Repo Description: Automated installation of HDP using Ansible and blueprints.
Repo Info:
- Github Repo URL: https://github.com/gautamborad/hdp-ansible
- Github account name: gautamborad
- Repo name: hdp-ansible
Tags:
- ansible
- Cloud & Operations
- Installation
- utilities
10-06-2017
06:46 AM
Hi @Piyush Chauhan, This is a practical exam where you will need to perform a few tasks; there won't be multiple-choice questions. You can check the objectives of the exam here. You can also take a practice exam before attempting the main one. Find the instructions for the practice test here. Thanks, Aditya
10-06-2017
06:29 AM
Hi @Sindhu, I'm sorry, my question was not very clear. I was trying to list all the installed mpacks in Ambari, which we usually install using ambari-server install-mpack <args>
10-06-2017
06:18 AM
1 Kudo
How can I list all the installed mpacks in Ambari? I have installed a few mpacks and want to uninstall some of them, but uninstall-mpack asks for the mpack name, so I need to see which mpacks are installed. Thanks, Aditya
Labels:
- Apache Ambari
10-05-2017
05:11 PM
Hi @n c, You can install all the clients manually. Check the doc here. I have the commands ready which you can use:
yum install -y hadoop hadoop-hdfs hadoop-libhdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl
yum install -y hbase
yum install -y phoenix
yum install -y accumulo
yum install -y zeppelin
yum install -y tez
yum install -y storm
yum install -y knox
yum install -y hive-catalog
yum install -y zookeeper-server
yum install -y pig
yum install -y hive-hcatalog hive-webhcat pig
yum install -y oozie oozie-client
yum install -y sqoop
yum install -y mahout
yum install -y flume flume-agent
yum install -y kafka
yum install -y falcon
yum search spark
yum install spark_<version>-master spark_<version>-python
For Spark and Spark2, it is a two-step process: run yum search spark and then replace the version in the second command. Hope this helps. Thanks, Aditya
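For illustration, a hedged sketch of the two-step install (the version string below is only an example of what yum search might return from your repo):
# step 1: discover the exact versioned Spark package names available in your repo
yum search spark
# step 2: install using the version string reported by yum search
# (spark_2_6_2_0_205 is only an illustrative example of such a string)
yum install -y spark_2_6_2_0_205-master spark_2_6_2_0_205-python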
10-05-2017
01:59 PM
1 Kudo
@Adil Muganlinsky, Do you see any applications in the ACCEPTED state in YARN? Thanks, Aditya
10-05-2017
11:23 AM
Hi @Ismael Boumedien, Please make sure that you are using /hbase-secure and not /hbase-unsecure. Can you please check if ZooKeeper is listening on localhost:
netstat -tupln | grep 2181
Can you also try passing all the ZooKeeper quorum nodes, i.e. <zk1:2181>,<zk2:2181>,... Additionally, check if you are able to connect to Phoenix using some client (e.g. sqlline):
cd /usr/hdp/current/phoenix-client/bin/
./sqlline.py localhost:2181:/hbase-secure
Thanks, Aditya
10-05-2017
10:56 AM
Hi @Ismael Boumedien, Is the cluster kerberized? If yes, then the connection string should be "jdbc:phoenix:localhost:2181:/hbase-secure:<keytab path>:<principal>", e.g. "jdbc:phoenix:localhost:2181:/hbase-secure:/keytab/ISMAEL.keytab:ISMAEL@MOCK". In the commented code, I see that you were using /hbase-unsecure. Thanks, Aditya
10-05-2017
06:14 AM
Hi @Prakash Punj, Did you enable the Ranger Hive plugin? Ranger -> Configs -> Ranger Plugin -> Hive Ranger Plugin. Thanks, Aditya
10-05-2017
05:57 AM
Hi @Neha G, How are you trying to create files? I believe you are using WebHDFS via Knox to create files and do other operations. Did you try passing the user.name (or permission) query parameter in your URL while creating files? I guess you will not face the issue if the user passed in user.name has permission to create the folder. Just give it a try. You can read more about WebHDFS authentication and proxy users here. Thanks, Aditya
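For illustration, a hedged sketch of a plain WebHDFS call that carries the user.name parameter (host, path and user are placeholders; when going through Knox, the gateway URL and its basic-auth credentials apply instead):
# create a directory as a specific user via WebHDFS
curl -i -X PUT "http://<namenode-host>:50070/webhdfs/v1/tmp/mydir?op=MKDIRS&user.name=<user>"
# check who ended up owning it
curl -i "http://<namenode-host>:50070/webhdfs/v1/tmp/mydir?op=GETFILESTATUS&user.name=<user>"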
10-04-2017
06:14 AM
@Triffids G, Can you please accept the answer? This will be helpful for users who want to check the answer directly instead of reading the whole thread. Thanks, Aditya
10-04-2017
05:14 AM
Hi @Amey Hegde, Yes, you can do it. Under host_groups and components, just add the list of clients you want to install. Below is a sample JSON:
{
  "configurations": [
    ...
  ],
  "host_groups": [
    {
      "components": [
        {
          "name": "HBASE_CLIENT"
        }
      ],
      "name": "host_group_1",
      "cardinality": "1"
    }
  ]
}
You can also install the HBase and Phoenix clients manually. Please find the links:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_command-line-installation/content/ref-0e8815fa-d165-42b4-8b9d-b9a7d3fd4b7a.1.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_command-line-installation/content/installing_phoenix_rpm.html
Thanks, Aditya
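As a hedged aside, once the clients are added to the blueprint, registering it with Ambari is a single REST call (host, credentials and the blueprint name are placeholders, and the JSON above is assumed to be saved as blueprint.json):
# register the blueprint with Ambari before creating the cluster from it
curl -u admin:admin -H 'X-Requested-By:ambari' -X POST -d @blueprint.json "http://<ambari-host>:<ambari-port>/api/v1/blueprints/<blueprint-name>"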
10-03-2017
10:48 AM
Hi @D Giri, Can you try regenerating the keytabs and check if it works? Ambari => Admin => Kerberos => Regenerate Keytabs. Thanks, Aditya
10-03-2017
09:30 AM
1 Kudo
Hi @Adil Muganlinsky, Click on the Query ID link for the queries that are shown as running, search for the Application ID, and check if the application is still running. If it is still running, kill the application (if you want to). Thanks, Aditya
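For reference, a hedged sketch of doing the same check and kill from the command line (the application id is a placeholder taken from the query's Application ID):
# list applications that are still running
yarn application -list -appStates RUNNING
# kill a specific application by its id
yarn application -kill <application_id>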
10-03-2017
07:34 AM
Hi @uri ben-ari, Check if you are running Kafka in DEBUG mode; it can generate tons of logs. You can modify these settings under Kafka -> Configs -> Advanced kafka-log4j and set log4j.rootLogger=INFO, stdout. Additionally, check the settings "Kafka Controller Log: # of backup files", "Kafka Controller Log: # of backup file size", "Kafka Log: # of backup files" and "Kafka Log: # of backup file size". Thanks, Aditya
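As a quick check on the broker host itself (a hedged sketch; /etc/kafka/conf and /var/log/kafka are assumed to be the usual HDP defaults, adjust if your paths differ):
# see which log level the broker is currently configured with
grep "log4j.rootLogger" /etc/kafka/conf/log4j.properties
# see how much space the broker logs are already taking
du -sh /var/log/kafka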
10-02-2017
07:25 AM
4 Kudos
Hi @Triffids G, As mentioned by @mqureshi, remove the corrupted file first by running:
hdfs dfs -rm /hdp/apps/2.6.0.3-8/spark2/spark2-hdp-yarn-archive.tar.gz
You can build the proper tar file from the existing jars:
cd /usr/hdp/2.6.0.3-8/spark2/jars
# create tar file from existing jars
tar -czvf spark2-hdp-yarn-archive.tar.gz *
# put the new tar file in hdfs
hdfs dfs -put spark2-hdp-yarn-archive.tar.gz /hdp/apps/2.6.0.3-8/spark2 Thanks, Aditya
10-02-2017
06:53 AM
Hi @Vicente Ciampa, Did you export Knox's gateway.jks file and put it in Ranger's SSO config? If not, follow the steps below.
1) Export the cert. Run the following command on the Knox host:
$JAVA_HOME/bin/keytool -export -alias gateway-identity -rfc -file cert.pem -keystore /usr/hdp/current/knox-server/data/security/keystores/gateway.jks
Enter the Knox master password when it prompts for a password, then copy the contents of cert.pem.
2) Go to Ranger -> Configs -> Advanced -> Knox SSO Settings. Under SSO public key, paste the contents of cert.pem. Save the config and restart Ranger.
Thanks, Aditya
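Before pasting, you can verify that the exported file is a readable certificate (a small hedged check):
# print the certificate details from the exported PEM file
keytool -printcert -file cert.pem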
10-01-2017
12:04 PM
9 Kudos
This article describes how to add a new custom script and execute it through Ambari. As a simple use case, the script installs the Kerberos packages.

1) Below is the code to install the Kerberos packages (install_kerberos_package.py):

#!/usr/bin/env python
from resource_management import Script, Execute, format
from ambari_commons.os_check import OSCheck
from resource_management.core import shell
from resource_management.core.logger import Logger


class InstallKerberosPackage(Script):
    def actionexecute(self, env):
        config = Script.get_config()
        structured_output = {}
        cmd = self.get_install_cmd()
        Logger.info("Installing Kerberos Package")
        code, output = shell.call(cmd, sudo = True)
        if 0 == code:
            structured_output["install_kerberos_package"] = {"exit_code": 0, "message": format("Packages installed successfully")}
        else:
            structured_output["install_kerberos_package"] = {"exit_code": code, "message": "Failed to install packages! {0}".format(str(output))}
        self.put_structured_out(structured_output)

    def get_install_cmd(self):
        if OSCheck.is_redhat_family():
            Logger.info("Installing kerberos package for the RedHat OS family")
            return ("/usr/bin/yum", "install", "-y", "krb5-server", "krb5-libs", "krb5-workstation")
        elif OSCheck.is_suse_family():
            Logger.info("Installing kerberos package for the SUSE OS family")
            return ('/usr/bin/zypper', '-n', 'install', 'krb5', 'krb5-server', 'krb5-client')
        elif OSCheck.is_ubuntu_family():
            Logger.info("Installing kerberos package for the Ubuntu OS family")
            return ('/usr/bin/apt-get', 'install', '-y', 'krb5-kdc', 'krb5-admin-server')
        else:
            raise Exception("Unsupported OS family: '{0}'".format(OSCheck.get_os_family()))


if __name__ == "__main__":
    InstallKerberosPackage().execute()

2) Save the file as install_kerberos_package.py, put it in the custom actions folder of Ambari, and change the permissions:

cp install_kerberos_package.py /var/lib/ambari-server/resources/custom_actions/scripts/
chmod 755 /var/lib/ambari-server/resources/custom_actions/scripts/install_kerberos_package.py
chown root:root /var/lib/ambari-server/resources/custom_actions/scripts/install_kerberos_package.py

3) The next step is to add the definition for this action. Open the file system_action_definitions.xml located under the /var/lib/ambari-server/resources/custom_action_definitions/ folder using your favourite editor and add the content below inside the actionDefinitions tag:

<actionDefinition>
  <actionName>install_kerberos_package</actionName>
  <actionType>SYSTEM</actionType>
  <inputs></inputs>
  <targetService/>
  <targetComponent/>
  <defaultTimeout>60</defaultTimeout>
  <description>Install kerberos packages</description>
  <targetType>ALL</targetType>
  <permissions>HOST.ADD_DELETE_COMPONENTS, HOST.ADD_DELETE_HOSTS, SERVICE.ADD_DELETE_SERVICES</permissions>
</actionDefinition>

Note: You can add comma-separated inputs if you need any inputs for the script.

4) Now that the action is defined, restart the Ambari server for the changes to take effect:

ambari-server restart

5) Check if your action is listed by calling the Ambari REST API:

curl -u <username>:<password> "http://<ambari-host>:<ambari-port>/api/v1/actions"

6) Now you are ready to run the custom script which you have created:

curl -u <username>:<password> -X POST -H 'X-Requested-By:ambari' -d'{"RequestInfo":{"context":"Execute an action", "action" : "install_kerberos_package", "service_name" : "", "component_name":"", "hosts":"<comma-separated-hosts>"}}' http://<ambari-host>:<ambari-port>/api/v1/clusters/<cluster-name>/requests

You can check the output in the Ambari UI. Please check the screenshot (sample-output.png) for reference.
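As a hedged aside, the POST in step 6 returns a request resource, so you can also follow progress outside the Ambari UI by polling that request (credentials, host and the request id are placeholders):
# the POST response contains an href ending in /requests/<request-id>;
# GET it to see the overall status and progress of the custom action
curl -u <username>:<password> "http://<ambari-host>:<ambari-port>/api/v1/clusters/<cluster-name>/requests/<request-id>"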
Tags:
- ambari-server
- custom-scripts
- custom_actions
- How-ToTutorial
- Sandbox & Learning
10-01-2017
11:27 AM
Hi @raouia, You can execute the script by calling the Ambari REST API:
curl -u admin:admin -X POST -H 'X-Requested-By:ambari' -d'{"RequestInfo":{"context":"Execute an action", "action" : "anaconda_install", "service_name" : "", "component_name":"", "hosts":"<comma separated hosts>"}}' http://<ambari-host>:<ambari-port>/api/v1/clusters/<cluster-name>/requests
Make sure you restart the ambari-server after adding the above XML definition to system_action_definitions.xml. To check if your action is listed, do a GET call on http://<ambari-host>:<ambari-port>/api/v1/actions
Thanks, Aditya
09-29-2017
12:04 PM
@Mahesh Thumar, As per the logs, it looks like you have run out of memory. Please increase the swap space and try starting the NameNode again. Thanks, Aditya
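For illustration, a hedged sketch of adding a swap file on a Linux host (the path and the 4 GB size are only examples; size it for your node):
# create a 4 GB swap file, restrict its permissions, format it and enable it
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# verify the additional swap is visible
free -m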
09-29-2017
11:51 AM
@Sen Ke, Can you please accept the answer if it worked for you? This will be helpful for the community. Thanks, Aditya
09-29-2017
07:02 AM
Hi @Sen Ke, Do you have WEBHDFS defined in your topology file? If you are using the sandbox, open /usr/hdp/current/knox-server/conf/topologies/knox_sample.xml and add:
<service>
  <role>WEBHDFS</role>
  <url>http://sandbox.hortonworks.com:50070/webhdfs</url>
</service>
If you are not using a sandbox, go to Knox -> Configs -> Advanced Config -> Advanced topology and add:
<service>
  <role>WEBHDFS</role>
  <url>http://<namenode-host>:<namenode-port>/webhdfs</url>
</service>
Thanks, Aditya
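As a quick sanity check after adding the service (a hedged sketch; the gateway host, topology name and credentials are placeholders, guest/guest-password being the sandbox demo LDAP defaults):
# list the HDFS root through Knox; a 200 response with a FileStatuses JSON body means WEBHDFS is wired up
curl -iku guest:guest-password "https://<knox-host>:8443/gateway/knox_sample/webhdfs/v1/?op=LISTSTATUS"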
09-28-2017
03:35 PM
Hi @Eon kitex, Did you restart the Ambari server after uninstalling Zeppelin?
09-28-2017
02:29 PM
Hi @Rohit Ravishankar, Spark 1.6 and Spark 2 use different Hive warehouse locations. For Spark 1.6, the warehouse location is identified by hive.metastore.warehouse.dir (default = /apps/hive/warehouse). For Spark 2, it is identified by spark.sql.warehouse.dir (default = <user.dir>/spark-warehouse). I guess the user has the right permissions on Spark 1.6's warehouse location but not on Spark 2's. Can you please check that, grant proper permissions, and run the query again? Thanks, Aditya
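For illustration, a hedged sketch of checking permissions on both locations (substitute the actual value of spark.sql.warehouse.dir from your Spark 2 configs; the user name is a placeholder and the ACL command requires HDFS ACLs to be enabled):
# check ownership and permissions on the Spark 1.6 / Hive warehouse
hdfs dfs -ls -d /apps/hive/warehouse
# check the Spark 2 warehouse location
hdfs dfs -ls -d <spark.sql.warehouse.dir>
# grant the user access if it is missing, for example with an ACL
hdfs dfs -setfacl -m user:<user>:rwx <spark.sql.warehouse.dir>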
09-27-2017
03:09 PM
1 Kudo
You have to run it as the hdfs user. Run su hdfs and then run the chmod command. Thanks, Aditya
09-27-2017
10:58 AM
2 Kudos
Hi @Mahesh Thumar, As per the logs, it looks like an FQDN resolution issue to me. Please check if your FQDN/hostname matches the hostnames registered in Ambari. Thanks, Aditya
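For reference, a hedged sketch of comparing what the host reports with what Ambari has registered (credentials and hosts are placeholders):
# FQDN as the OS and the Ambari agent resolve it
hostname -f
python -c 'import socket; print(socket.getfqdn())'
# host names Ambari knows about
curl -u <username>:<password> "http://<ambari-host>:<ambari-port>/api/v1/hosts"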
09-27-2017
10:51 AM
@Mohammad Shazreen Bin Haini, ssh to the node where Ambari / the HDFS client is installed and you can run the command there. As @Peter Kim mentioned, check the permissions of those directories before changing them. Thanks, Aditya
09-26-2017
07:14 PM
Hi @Akrem Latiwesh, Can you just try giving "/app/XTA0/hivedb" instead of the hdfs:/// URI? It will pick up the defaultFS from Hadoop's core-site.xml. Thanks, Aditya
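For reference, a quick hedged check of which default filesystem a schemeless path will resolve against:
# print the fs.defaultFS value the clients pick up from core-site.xml
hdfs getconf -confKey fs.defaultFS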
09-26-2017
06:58 PM
1 Kudo
Hi @Mohammad Shazreen Bin Haini, Your query is trying to write to the /apps location, to which the "hive" user doesn't have permission. Change the permission of the folder and try running the query again:
hdfs dfs -chmod 777 /apps
Thanks, Aditya
09-26-2017
06:51 PM
1 Kudo
@yvora, Make the second curl call to the Location header that was returned by the first call, i.e., http://xxx:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&namenoderpcaddress=xxx:8020&createflag=&createparent=true&overwrite=false
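For illustration, a hedged sketch of the full two-step WebHDFS create this refers to (hosts are placeholders standing in for the xxx values above; the URL in step 2 must be exactly the Location header returned by step 1):
# step 1: the NameNode replies with 307 Temporary Redirect and a Location header; no data is written yet
curl -i -X PUT "http://<namenode-host>:50070/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy"
# step 2: PUT the file content to the DataNode URL taken from that Location header
curl -i -X PUT -T a.txt "http://<datanode-host>:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&namenoderpcaddress=<namenode-host>:8020&createflag=&createparent=true&overwrite=false"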