Member since: 03-18-2014
Posts: 26
Kudos Received: 3
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 26000 | 03-31-2014 07:33 PM |
03-31-2014
06:33 PM
Express Cluster Installation - Add Hosts
Express Cluster Installation - Add Services

Choose the CDH 5 services that you want to install on your cluster. Choose a combination of services to install.

- Core Hadoop: HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, and Sqoop
- Core with Real-Time Delivery: HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and HBase
- Core with Real-Time Query: HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and Impala
- Core with Real-Time Search: HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and Solr
- Core with Spark: HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and Spark
- All Services: HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, HBase, Impala, Solr, Spark, and Keystore Indexer
- Custom Services: Choose your own services. Services required by chosen services will automatically be included. Note that Flume can be added after your initial cluster has been set up.
03-31-2014
06:32 PM
Inspect hosts for correctness

Validations:
- Inspector ran on all 4 hosts.
- Individual hosts resolved their own hostnames correctly.
- No errors were found while looking for conflicting init scripts.
- No errors were found while checking /etc/hosts.
- All hosts resolved localhost to 127.0.0.1.
- All hosts checked resolved each other's hostnames correctly and in a timely manner.
- Host clocks are approximately in sync (within ten minutes).
- Host time zones are consistent across the cluster.
- No users or groups are missing.
- No kernel versions that are known to be bad are running.
- All hosts have /proc/sys/vm/swappiness set to 0.
- No performance concerns with Transparent Huge Pages settings.
- 0 hosts are running CDH 4 and 4 hosts are running CDH 5.
- All checked hosts in each cluster are running the same version of components.
- All managed hosts have consistent versions of Java.
- All checked Cloudera Management Daemons versions are consistent with the server.
- All checked Cloudera Management Agents versions are consistent with the server.

Version Summary - Cluster 1 (CDH 5)
Hosts: hadoop0, hadoop1, hadoop2, hadoopmngr

Component | Version | CDH Version |
---|---|---|
Bigtop-Tomcat (CDH 5 only) | 0.7.0+cdh5.0.0+0 | CDH5 |
Crunch (CDH 5 only) | 0.9.0+cdh5.0.0+19 | CDH5 |
Flume NG | 1.4.0+cdh5.0.0+90 | CDH5 |
MapReduce 1 | 2.2.0+cdh5.0.0+1610 | CDH5 |
HDFS | 2.2.0+cdh5.0.0+1610 | CDH5 |
HttpFS | 2.2.0+cdh5.0.0+1610 | CDH5 |
MapReduce 2 | 2.2.0+cdh5.0.0+1610 | CDH5 |
YARN | 2.2.0+cdh5.0.0+1610 | CDH5 |
Hadoop | 2.2.0+cdh5.0.0+1610 | CDH5 |
Lily HBase Indexer | 1.3+cdh5.0.0+39 | CDH5 |
HBase | 0.96.1.1+cdh5.0.0+23 | CDH5 |
HCatalog | 0.12.0+cdh5.0.0+265 | CDH5 |
Hive | 0.12.0+cdh5.0.0+265 | CDH5 |
Hue | 3.5.0+cdh5.0.0+186 | CDH5 |
Impala | 1.2.3+cdh5.0.0+0 | CDH5 |
Kite (CDH 5 only) | 0.10.0+cdh5.0.0+69 | CDH5 |
Llama (CDH 5 only) | 1.0.0+cdh5.0.0+0 | CDH5 |
Mahout | 0.8+cdh5.0.0+28 | CDH5 |
Oozie | 4.0.0+cdh5.0.0+144 | CDH5 |
Parquet | 1.2.5+cdh5.0.0+29 | CDH5 |
Pig | 0.12.0+cdh5.0.0+20 | CDH5 |
Solr | 4.4.0+cdh5.0.0+163 | CDH5 |
Spark | 0.9.0 | CDH5 |
Sqoop | 1.4.4+cdh5.0.0+40 | CDH5 |
Sqoop2 | 1.99.3+cdh5.0.0+19 | CDH5 |
Whirr | 0.8.2+cdh5.0.0+20 | CDH5 |
ZooKeeper | 3.4.5+cdh5.0.0+27 | CDH5 |
Cloudera Manager Management Daemons | 5.0.0-beta-2 | Not applicable |
Java 6 | java version "1.6.0_31", Java(TM) SE Runtime Environment (build 1.6.0_31-b04), Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) | Not applicable |
Java 7 | java version "1.7.0_25", Java(TM) SE Runtime Environment (build 1.7.0_25-b15), Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode) | Not applicable |
Cloudera Manager Agent | 5.0.0-beta-2 | Not applicable |
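For reference, here is a minimal Python sketch that re-runs two of the inspector's per-host checks (swappiness and localhost resolution) on a single Linux host. It is illustrative only; the Host Inspector already performs these checks across all managed hosts.

```python
import socket

def check_swappiness(expected=0):
    # The inspector expects /proc/sys/vm/swappiness to be 0 on every host.
    with open("/proc/sys/vm/swappiness") as f:
        value = int(f.read().strip())
    return value == expected, value

def check_localhost_resolution():
    # The inspector expects localhost to resolve to 127.0.0.1.
    addr = socket.gethostbyname("localhost")
    return addr == "127.0.0.1", addr

if __name__ == "__main__":
    ok, value = check_swappiness()
    print(f"swappiness={value} ({'OK' if ok else 'should be 0'})")
    ok, addr = check_localhost_resolution()
    print(f"localhost resolves to {addr} ({'OK' if ok else 'expected 127.0.0.1'})")
```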
03-31-2014
06:29 PM
That's true, except that Hive never got to recreate its database; that's a later step that was not completed because of this error. If we could skip the error and move through the rest of the processes (18 more beyond the HDFS format step), that would be great... With the current incomplete initial setup, the Hive user ID and PostgreSQL setup were never finished, and PostgreSQL is now giving me an invalid user ID/password error, which makes sense since the script never got to that point due to the above error...
03-31-2014
04:48 PM
What is command 544? Is there a list of commands and the prerequisites for completing each one successfully?
03-31-2014
04:47 PM
Can someone tell me what error code 544 is, and how to resolve it and retry? Is there a manual for error codes?
03-31-2014
04:41 PM
Thanks DIO, that did not work. However, Darren's suggestion of stopping the cluster, deleting it, and re-adding it allowed me to select the base Hadoop services. Now, however, during the HDFS format I receive the following:

Waiting for ZooKeeper Service to initialize
Finished waiting
Starting ZooKeeper Service
Completed 1/1 steps successfully
Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty.
Command (544) has failed

So I need to format HDFS. I suppose the name directory is not empty? How do I empty it?
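In case it helps narrow this down, here is a minimal Python sketch to check whether the NameNode's name directories are empty before retrying the format step. The /dfs/nn path is only a guess for illustration; substitute whatever dfs.namenode.name.dir points to on your NameNode host.

```python
import os

# Hypothetical name directory; replace with the actual dfs.namenode.name.dir value.
NAME_DIRS = ["/dfs/nn"]

for d in NAME_DIRS:
    if not os.path.isdir(d):
        print(f"{d}: does not exist")
    elif os.listdir(d):
        # A non-empty name directory means the "Format HDFS only if empty" step will not format.
        print(f"{d}: NOT empty")
    else:
        print(f"{d}: empty, safe to format")
```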
03-31-2014
01:04 PM
Hi Darren, stopping the cluster, deleting it, and re-adding it seemed to work, letting me add the parcels back in and start most of the services. However, Hive failed to start due to a user ID / password issue, as follows:

Failed initialising database. Unable to open a test connection to the given database. JDBC url = jdbc:postgresql://hadoopmngr:7432/hive1, username = hive1. Terminating connection pool. Original Exception: ------
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "hive1"
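For what it's worth, here is a minimal Python sketch to test the same credentials the JDBC URL above is using. It assumes psycopg2 is installed, and the password is a placeholder since I don't know the one Cloudera Manager generated for hive1.

```python
import psycopg2

try:
    # Host, port, database, and user come from the error message above.
    conn = psycopg2.connect(
        host="hadoopmngr",
        port=7432,
        dbname="hive1",
        user="hive1",
        password="<hive1-password>",  # placeholder; substitute the generated Hive metastore password
    )
    print("Connection OK")
    conn.close()
except psycopg2.OperationalError as exc:
    # This is where "FATAL: password authentication failed" would surface.
    print(f"Connection failed: {exc}")
```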
03-31-2014
08:31 AM
Here's the Hive connectivity information. At this point it appears that all I'm missing is for the manager to list the services, I think...
03-31-2014
08:22 AM
03-31-2014
08:21 AM