Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2625 | 11-01-2016 05:43 PM |
| | 8742 | 11-01-2016 05:36 PM |
| | 4925 | 07-01-2016 03:20 PM |
| | 8267 | 05-25-2016 11:36 AM |
| | 4434 | 05-24-2016 05:27 PM |
01-16-2016
07:14 PM
@Stefan Kupstaitis-Dunkler The official doc has the right answer. If it's not working, and assuming you did not miss anything, then it's a bug. I'm not sure the downvote was the right action, since the answer is based on the official docs. I will keep you posted in case I find more information.
01-16-2016
07:09 PM
@jeff Tagging Jeff. I believe we can open a JIRA to track this down. @Stefan Kupstaitis-Dunkler
01-16-2016
06:58 PM
1 Kudo
@Lance Chen See this https://community.hortonworks.com/articles/6227/sandbox-1270018080-not-accessible.html
01-16-2016
06:57 PM
1 Kudo
@Stefan Kupstaitis-Dunkler See this: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Users_Guide/content/ch03s11.html

Using a Custom Topology Script

It is possible to not have Ambari manage the rack information for hosts. Instead, you can use a custom topology script to provide rack information to HDFS, rather than the Ambari-generated topology.py script. If you choose to manage rack information on your own, you will need to create your own topology script and handle distributing it to all hosts. Ambari will also have no knowledge of host rack information, so heatmaps will not display by rack in Ambari Web.

To manage rack information yourself, go to Services > HDFS > Configs and modify the net.topology.script.file.name property. Set this property to your own custom topology script (for example, /etc/hadoop/conf/topology.sh). Distribute that topology script to your hosts and manage the rack mapping information for your script outside of Ambari.
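As a rough sketch, a custom topology script of the kind described above takes one or more hostnames/IPs as arguments and prints one rack path per argument. The mapping-file path, its "host rack" line format, and the rack names below are assumptions for illustration, not something from the Ambari docs:

```shell
#!/bin/sh
# Hypothetical custom topology script (e.g. /etc/hadoop/conf/topology.sh).
# HDFS invokes it with one or more hostnames/IPs and expects one rack
# path per argument on stdout. Hosts missing from the (assumed) mapping
# file fall back to /default-rack.
# Assumed mapping file format: "<host> <rack>" per line.
TOPOLOGY_MAP=${TOPOLOGY_MAP:-/etc/hadoop/conf/topology.map}

resolve_rack() {
  for host in "$@"; do
    # Look up the first matching host in the mapping file.
    rack=$(awk -v h="$host" '$1 == h { print $2; exit }' "$TOPOLOGY_MAP" 2>/dev/null)
    echo "${rack:-/default-rack}"
  done
}

resolve_rack "$@"
```

The script must be executable by the HDFS user and distributed to every host, as the doc excerpt notes.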
01-16-2016
03:50 PM
2 Kudos
@Gagan Dutt See if this helps https://community.hortonworks.com/articles/6227/sandbox-1270018080-not-accessible.html
01-16-2016
02:10 AM
1 Kudo
@Dave Woodruff Hi, please see this guide: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_ambari_reference_guide/content/_using_oozie_with_postgresql.html

Using Oozie with PostgreSQL. To set up PostgreSQL for use with Oozie:

1. On the Ambari Server host, stage the appropriate PostgreSQL connector for later deployment.
2. Install the connector:
   - RHEL/CentOS/Oracle Linux: `yum install postgresql-jdbc`
   - SLES: `zypper install -y postgresql-jdbc`
   - Ubuntu/Debian: `apt-get install -y postgresql-jdbc`
3. Confirm that the .jar is in the Java share directory: `ls /usr/share/java/postgresql-jdbc.jar`
4. Change the access mode of the .jar file to 644: `chmod 644 /usr/share/java/postgresql-jdbc.jar`
5. Execute the following command: `ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql-jdbc.jar`
6. Create a user for Oozie and grant it permissions, using the PostgreSQL database admin utility:
   - `echo "CREATE DATABASE <OOZIEDATABASE>;" | psql -U postgres`
   - `echo "CREATE USER <OOZIEUSER> WITH PASSWORD '<OOZIEPASSWORD>';" | psql -U postgres`
   - `echo "GRANT ALL PRIVILEGES ON DATABASE <OOZIEDATABASE> TO <OOZIEUSER>;" | psql -U postgres`

   Here <OOZIEUSER> is the Oozie user name, <OOZIEPASSWORD> is the Oozie user password, and <OOZIEDATABASE> is the Oozie database name.
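For illustration, the user-creation step can be sketched with example values substituted for the placeholders. The database name, user, and password below are hypothetical, and the actual psql invocations are shown only in comments since they need a live PostgreSQL server:

```shell
# Sketch of the database/user creation with hypothetical example values
# in place of <OOZIEDATABASE>, <OOZIEUSER>, and <OOZIEPASSWORD>.
OOZIE_DB=oozie
OOZIE_USER=oozie
OOZIE_PASS=oozie_secret

create_db_sql="CREATE DATABASE ${OOZIE_DB};"
create_user_sql="CREATE USER ${OOZIE_USER} WITH PASSWORD '${OOZIE_PASS}';"
grant_sql="GRANT ALL PRIVILEGES ON DATABASE ${OOZIE_DB} TO ${OOZIE_USER};"

# On a host with PostgreSQL running, each statement would be executed as:
#   echo "$create_db_sql" | psql -U postgres
printf '%s\n%s\n%s\n' "$create_db_sql" "$create_user_sql" "$grant_sql"
```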
01-16-2016
01:59 AM
5 Kudos
Original

1) Set up an Azure account
2) Set up a CloudBreak account

Very important steps (applies to Azure only): create a test network in Azure before you start creating CloudBreak credentials.

On your local machine, run the following and accept the default values:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout azuretest.key -out azuretest.pem

You will see two files:

-rw-r--r-- 1 nsabharwal staff 1346 May 7 17:00 azuretest.pem --> We need this file to create credentials in CloudBreak.
-rw-r--r-- 1 nsabharwal staff 1679 May 7 17:00 azuretest.key --> We need this to log in to the host after cluster deployment.

Run chmod 400 azuretest.key, otherwise you will receive a bad-permissions error, for example when running: ssh -i azuretest.key ubuntu@<server>

Very important: check your openssl version. If it is the latest version, run the following and use azuretest_login.key to log in:

openssl rsa -in azuretest.key -out azuretest_login.key

hw11326:jumk nsabharwal$ openssl version
OpenSSL 0.9.8zc 15 Oct 2014

The latest version of openssl creates .key files beginning with -----BEGIN PRIVATE KEY-----. Old openssl creates keys beginning with -----BEGIN RSA PRIVATE KEY----- (we need this format).

Log in to the CloudBreak portal and create an Azure credential. Once you fill in the information and hit "create credentials", you will get a file from CloudBreak that needs to be uploaded into the Azure portal. I saved it as azuretest.cert.

Log in to the Azure portal (switch to classic mode in case you are using the new portal), click Settings --> Manage Certificates, then upload at the bottom of the screen.

There are two more actions in the CloudBreak window:

1) Create a template. You can change the instance type and volume type as per your setup.
2) Create a blueprint. You can grab sample blueprints here. (You may have to reformat the blueprint in case there is any issue.)

Once all this is done, you are all set to deploy the cluster: select the credential and hit "create cluster" in the Create cluster window.

Handy commands to log in to docker. Log in to your host:

ssh -i azuretest.key ubuntu@fqdn

"New announcement: Just found out that the user needs to be cloudbreak instead of ubuntu":

ssh -i azuretest.key cloudbreak@fqdn

Once you are in the shell:

sudo su -
docker ps
docker exec -it <container id> bash

[root@azuretest ~]# docker ps
CONTAINER ID  IMAGE                                              COMMAND                CREATED      STATUS      PORTS  NAMES
f493922cd629  sequenceiq/docker-consul-watch-plugn:1.7.0-consul  "/start.sh"            2 hours ago  Up 2 hours         consul-watch
100e7c0b6d3d  sequenceiq/ambari:2.0.0-consul                     "/start-agent"         2 hours ago  Up 2 hours         ambari-agent
d05b85859031  sequenceiq/consul:v0.4.1.ptr                       "/bin/start -adverti   2 hours ago  Up 2 hours         consul
[root@test~]# docker exec -it 100e7c0b6d3d bash
bash-4.1#

You can now run docker commands. Happy Hadooping!!!!

Note: For the latest information and changes, please see https://github.com/sequenceiq/cloudbreak

Labels: Hadoop, Cloud Computing, Big Data
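The openssl key-format caveat above can be sketched as a small check. The key_format helper is hypothetical (not from the post or openssl), and the conversion shown in the comment assumes a real key file exists:

```shell
# Sketch: detect whether a private key file is PKCS#8 ("BEGIN PRIVATE KEY",
# produced by newer openssl) or PKCS#1 ("BEGIN RSA PRIVATE KEY", the format
# the post says is needed for login), by inspecting the first line.
key_format() {
  head -n 1 "$1" | grep -q 'BEGIN RSA PRIVATE KEY' && echo pkcs1 && return
  head -n 1 "$1" | grep -q 'BEGIN PRIVATE KEY' && echo pkcs8 && return
  echo unknown
}

# Usage (the conversion requires a real key; command as in the post):
#   if [ "$(key_format azuretest.key)" = pkcs8 ]; then
#     openssl rsa -in azuretest.key -out azuretest_login.key
#   fi
```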
01-15-2016
01:24 PM
1 Kudo
@marko It seems that work is in progress: https://github.com/apache/incubator-zeppelin/pull/53
01-15-2016
01:18 PM
2 Kudos
@jeff I believe this is related to 2.2.0 and is a known bug. @David Yee https://issues.apache.org/jira/browse/AMBARI-14466
01-14-2016
09:38 PM
1 Kudo
@ashwin jayrama Good doc: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_spark-guide/content/ch_installing-spark.html For the manual process: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/ch_installing_spark_chapter.html