Member since
09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2625 | 11-01-2016 05:43 PM |
| | 8759 | 11-01-2016 05:36 PM |
| | 4925 | 07-01-2016 03:20 PM |
| | 8267 | 05-25-2016 11:36 AM |
| | 4434 | 05-24-2016 05:27 PM |
01-19-2016 01:49 AM
1 Kudo
@Necip Gengeç
The relevant part of the log is:

```
host=hadoop01, exitcode=1
Command end time 2016-01-19 01:08:18
ERROR: Bootstrap of host hadoop01 fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: Connection to hadoop01 closed.
STDOUT: E: No packages found
E: No packages found
('', None)
```

I would try to install ambari-agent manually, just to test whether it works:

```
yum install ambari-agent
```

Re: duplicate entries, see this: http://askubuntu.com/questions/456321/duplicate-sources-list-entry-ubuntu-14-04
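Since the linked askubuntu thread is about duplicate apt source entries, here is a minimal sketch for locating them. The `APT_DIR` parameter is an addition for illustration (the real files live under `/etc/apt`); back up any file before removing lines from it.

```shell
# Sketch only: print "deb" lines that appear more than once across
# the apt source lists. APT_DIR is a hypothetical override for testing;
# it defaults to the usual /etc/apt location.
APT_DIR="${APT_DIR:-/etc/apt}"
grep -hE '^deb ' "$APT_DIR"/sources.list "$APT_DIR"/sources.list.d/*.list 2>/dev/null \
  | sort | uniq -d
```

Any line this prints exists in more than one source file; deleting the extra copy resolves the "duplicate sources.list entry" warning.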
01-19-2016 01:18 AM
@Luis Antonio Torres Thanks! Having SQL Server in a different VLAN increases the probability of network issues.
01-19-2016 01:15 AM
6 Kudos
LinkedIn Post

Presto is a tool designed to efficiently query vast amounts of data using distributed queries. We will install Presto in single-server mode, access Hive, and then add a worker node. Presto can also run cross queries spanning RDBMS, Hive, and NoSQL sources (see the tutorial).

**Java 8 is a must.**

Install (see the link for the latest versions):

```
wget https://repo1.maven.org/maven2/com/facebook/presto/presto-server/0.122/presto-server-0.122.tar.gz
tar xvfz presto-server-0.122.tar.gz
```

Let's start with the single-node setup (master and worker on the same node):

```
cd presto-server-0.122
mkdir etc
[root@ns2 presto-server-0.122]# cd etc/
mkdir catalog
```

We will create three files, as shown below:

```
[root@ns2 etc]# ls
catalog  config.properties  jvm.config  log.properties  node.properties

[root@ns2 etc]# cat config.properties
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=9080
query.max-memory=10GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://ns2:9080

[root@ns2 etc]# cat log.properties
com.facebook.presto=INFO

[root@ns2 etc]# cat node.properties
node.environment=production
node.id=presto1
node.data-dir=/var/presto/data
```

Details on the properties are here.

Now let's create the Hive properties file (I have created hive.properties already):

```
cd catalog/
[root@ns2 catalog]# ls
hive.properties  jmx.properties

[root@ns2 catalog]# cat hive.properties
connector.name=hive-hadoop2
hive.metastore.uri=thrift://ns3:9083
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
```

All set to start the Presto server:

```
[root@ns2 bin]# pwd
/root/presto-server-0.122/bin
[root@ns2 bin]# nohup ./launcher run &
[1] 11722
[root@ns2 bin]# nohup: ignoring input and appending output to `nohup.out'
[root@ns2 bin]# tail -f nohup.out
```

The last lines will be:

```
2015-10-18T16:49:49.935-0400 INFO main com.facebook.presto.metadata.CatalogManager -- Added catalog hive using connector hive-hadoop2 --
2015-10-18T16:49:50.005-0400 INFO main com.facebook.presto.server.PrestoServer ======== SERVER STARTED ========
```

Hit http://host:9080 to see the UI.

Let's access Hive tables. Download the Presto CLI (link for the latest release):

```
mv presto-cli-0.122-executable.jar presto
[root@ns2 bin]# ./presto --server ns2:9080 --catalog hive
presto> show tables from default;
```

Create a table in Hive, then in the Presto UI click one of the queries to check the stats, and click the Execution link to get the execution plan.

Now let's add a worker node and remove the master from the worker role. Node name: ns4. Repeat the installation steps on the new node as above, then make the following changes under /root/presto-server-0.122/etc:

```
[root@ns4 etc]# cat config.properties
coordinator=false
http-server.http.port=9080
query.max-memory=10GB
query.max-memory-per-node=1GB
discovery.uri=http://ns2:9080
```

(discovery.uri points to the master server.)

```
[root@ns4 etc]# cat node.properties
node.environment=production
node.id=presto2
node.data-dir=/var/presto/data
```

(node.id needs to be unique.)

```
[root@ns4 etc]# cd ..
[root@ns4 presto-server-0.122]# cd bin/
[root@ns4 bin]# nohup ./launcher run &
[root@ns2 bin]# ./presto --server ns2:9080 --catalog hive
```

Happy Hadooping!!! Read "Presto: Interacting with petabytes of data at Facebook".
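The post mentions cross queries spanning RDBMS, Hive, and NoSQL. As a hedged sketch of what that looks like: the `mysql` catalog and all schema/table/column names below are assumptions for illustration (a `mysql.properties` file under `etc/catalog/` would have to exist); only the `catalog.schema.table` addressing and the CLI's `--execute` flag come from Presto itself.

```shell
# Hypothetical cross-catalog join: Hive fact table joined to a MySQL
# dimension table. Table names are placeholders, not from the post.
QUERY='SELECT c.name, sum(o.total) AS total
FROM hive.default.orders o
JOIN mysql.shop.customers c ON o.customer_id = c.id
GROUP BY c.name'
echo "$QUERY"
# Against a live coordinator:
# ./presto --server ns2:9080 --execute "$QUERY"
```

Because each catalog maps to a connector, the same SELECT can address Hive and an RDBMS side by side with no ETL step in between.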
01-19-2016 01:10 AM
@Mark Petronic
01-18-2016 08:58 PM
Please accept the best answer to close the thread, @niraj nagle.
01-18-2016 08:42 PM
@niraj nagle You can put those services into maintenance mode.
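For reference, maintenance mode can also be toggled through Ambari's REST API with a `maintenance_state` update on the service resource. The cluster name (`mycluster`), host, service name (`SERVICE_NAME`), and `admin:admin` credentials below are placeholders, not values from this thread:

```shell
# Hypothetical sketch: request body for turning maintenance mode ON
# via the Ambari REST API. All identifiers below are placeholders.
PAYLOAD='{"RequestInfo":{"context":"Turn on maintenance mode"},"Body":{"ServiceInfo":{"maintenance_state":"ON"}}}'
echo "$PAYLOAD"
# Against a live Ambari server:
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d "$PAYLOAD" \
#   http://ambari-host:8080/api/v1/clusters/mycluster/services/SERVICE_NAME
```

Setting `maintenance_state` back to `OFF` with the same call re-enables alerts and bulk operations for the service.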
01-18-2016 08:05 PM
1 Kudo
@Xi Sanderson See this thread https://community.hortonworks.com/questions/9762/how-to-get-rid-of-stale-alerts-in-ambari.html
01-18-2016 08:04 PM
@Kaushik Dutta and it's fixed in 2.2.1 🙂
01-18-2016 07:59 PM
2 Kudos
@Mark Petronic Please see this to start with http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/create_directories_on_hdfs.html
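The linked guide walks through creating service directories on HDFS. Below is a minimal sketch of that kind of step; the exact paths and owners should be taken from the doc itself, and the ones here (`/tmp`, `/app-logs`, `yarn:hadoop`) are typical examples only. `DRY_RUN=1` (the default here, added purely for illustration) prints the commands instead of executing them:

```shell
# Sketch only: create typical HDFS directories. With DRY_RUN=1 the
# commands are echoed rather than run, so no HDFS cluster is needed.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }
run hdfs dfs -mkdir -p /tmp /user /app-logs
run hdfs dfs -chmod -R 1777 /tmp
run hdfs dfs -chown yarn:hadoop /app-logs
```

On a real cluster you would set `DRY_RUN=0` and run these as the `hdfs` superuser, since only it can chown paths at the HDFS root.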
01-18-2016 07:53 PM
@Mark Petronic Support will set up a WebEx session and check the configs and other settings related to AMS.