That's true, except that Hive never got to recreate its database; that's a later step that was never completed because of this error. If we could skip the error and move through the rest of the processes (18 more beyond the HDFS format step), that would be great...
Because the initial setup never finished, the Hive user ID in PostgreSQL was never set up, and now PostgreSQL is rejecting the connection with an invalid user ID/password error, which makes sense since the script never got to that point due to the error above...
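If re-running the whole wizard isn't an option, the skipped step can be approximated by hand. This is only a sketch of what the Hive metastore setup would have done; the role name, database name, and password below are placeholders (Cloudera Manager generates its own), so the Hive service configuration would have to be pointed at whatever is actually created:

# connect as the postgres superuser and create the role + database the wizard skipped
# (role/database/password names here are placeholders, not what CM would generate)
sudo -u postgres psql -c "CREATE ROLE hive LOGIN PASSWORD 'hive_password';"
sudo -u postgres psql -c "CREATE DATABASE metastore OWNER hive ENCODING 'UTF8';"

After that, the "Creating Hive Metastore Database Tables" step would still need to run to populate the schema.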
Inspect hosts for correctness
Inspector ran on all 4 hosts.
Individual hosts resolved their own hostnames correctly.
No errors were found while looking for conflicting init scripts.
No errors were found while checking /etc/hosts.
All hosts resolved localhost to 127.0.0.1.
All hosts checked resolved each other's hostnames correctly and in a timely manner.
Host clocks are approximately in sync (within ten minutes).
Host time zones are consistent across the cluster.
No users or groups are missing.
No kernel versions that are known to be bad are running.
All hosts have /proc/sys/vm/swappiness set to 0.
No performance concerns with Transparent Huge Pages settings.
0 hosts are running CDH 4 and 4 hosts are running CDH 5.
All checked hosts in each cluster are running the same version of components.
All managed hosts have consistent versions of Java.
All checked Cloudera Management Daemons versions are consistent with the server.
All checked Cloudera Management Agents versions are consistent with the server.
Cluster 1 — CDH 5
hadoop0, hadoop1, hadoop2, hadoopmngr
Bigtop-Tomcat (CDH 5 only) 0.7.0+cdh5.0.0+0 CDH5
Crunch (CDH 5 only) 0.9.0+cdh5.0.0+19 CDH5
Flume NG 1.4.0+cdh5.0.0+90 CDH5
MapReduce 1 2.2.0+cdh5.0.0+1610 CDH5
HDFS 2.2.0+cdh5.0.0+1610 CDH5
HttpFS 2.2.0+cdh5.0.0+1610 CDH5
MapReduce 2 2.2.0+cdh5.0.0+1610 CDH5
YARN 2.2.0+cdh5.0.0+1610 CDH5
Hadoop 2.2.0+cdh5.0.0+1610 CDH5
Lily HBase Indexer 1.3+cdh5.0.0+39 CDH5
HBase 0.96.1.1+cdh5.0.0+23 CDH5
HCatalog 0.12.0+cdh5.0.0+265 CDH5
Hive 0.12.0+cdh5.0.0+265 CDH5
Hue 3.5.0+cdh5.0.0+186 CDH5
Impala 1.2.3+cdh5.0.0+0 CDH5
Kite (CDH 5 only) 0.10.0+cdh5.0.0+69 CDH5
Llama (CDH 5 only) 1.0.0+cdh5.0.0+0 CDH5
Mahout 0.8+cdh5.0.0+28 CDH5
Oozie 4.0.0+cdh5.0.0+144 CDH5
Parquet 1.2.5+cdh5.0.0+29 CDH5
Pig 0.12.0+cdh5.0.0+20 CDH5
Solr 4.4.0+cdh5.0.0+163 CDH5
Spark 0.9.0 CDH5
Sqoop 1.4.4+cdh5.0.0+40 CDH5
Sqoop2 1.99.3+cdh5.0.0+19 CDH5
Whirr 0.8.2+cdh5.0.0+20 CDH5
Zookeeper 3.4.5+cdh5.0.0+27 CDH5
Cloudera Manager Management Daemons 5.0.0-beta-2 Not applicable
Java 6 java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) Not applicable
Java 7 java version "1.7.0_25" Java(TM) SE Runtime Environment (build 1.7.0_25-b15) Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode) Not applicable
Cloudera Manager Agent 5.0.0-beta-2 Not applicable
Basically, DIO, you can see what is happening: because the HDFS format does not complete, the Hive database setup step is never executed either, so when we try to start Hive we get a PostgreSQL access denied error.
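A quick way to confirm it really is the missing metastore setup, rather than a connectivity problem, is to try the same credentials Hive would use directly against PostgreSQL. A sketch, assuming the embedded PostgreSQL on the manager host (which usually listens on port 7432) and the same placeholder role/database names as above:

# should fail with the same authentication error Hive reports
psql -h hadoopmngr -p 7432 -U hive -d metastore

If that fails the same way while the postgres superuser can connect, the role was simply never created, which matches the skipped step.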
First Run Finished Mar 31, 2014 9:35:56 PM EDT Mar 31, 2014 9:36:55 PM EDT
Failed to perform First Run of services.
Completed 3 of 20 steps.
Waiting for ZooKeeper Service to initialize
Starting ZooKeeper Service
Completed 1/1 steps successfully
Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty.
Command (563) has failed
Starting HDFS Service
Creating HDFS /tmp directory
Creating MR2 job history directory
Creating NodeManager remote application log directory
Starting YARN (MR2 Included) Service
Creating Hive Metastore Database
Creating Hive Metastore Database Tables
Creating Hive user directory
Creating Hive warehouse directory
Starting Hive Service
Creating Oozie database
Installing Oozie ShareLib in HDFS
Starting Oozie Service
Creating Sqoop user directory
Starting Sqoop Service
Starting Hue Service
Deploying Client Configuration
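For reference, the step that failed above ("Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty.") refuses to format when the name directory already contains data from an earlier attempt. A minimal check, assuming the name directory is /dfs/nn as in the messages that follow:

# anything listed here means a previous run left data behind
ls -A /dfs/nn
# on a brand-new cluster with nothing to keep, clearing it lets the format step proceed
sudo rm -rf /dfs/nn/*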
Okay... progress... I've now gotten past the HDFS format by removing /dfs/nn on the manager, but now I have 0 DataNodes started because of the following error on each node:
10:03:06.781 PM FATAL org.apache.hadoop.hdfs.server.datanode.DataNode
Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to hadoopmngr/192.168.0.102:8022
java.io.IOException: Incompatible clusterIDs in /dfs/dn: namenode clusterID = cluster111; datanode clusterID = cluster6
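That clusterID mismatch is the expected side effect of removing /dfs/nn above: reformatting gave the NameNode a new clusterID, while each DataNode's /dfs/dn still carries the old one. Since this is a fresh cluster with no data worth keeping, a minimal fix (destructive, so only appropriate on an empty cluster) is to clear the DataNode directories so they re-register under the new clusterID:

# on each DataNode host, with the DataNode role stopped in Cloudera Manager:
sudo rm -rf /dfs/dn/*
# then start the DataNode role again; it re-registers with the NameNode's new clusterID

Alternatively, editing the clusterID field in /dfs/dn/current/VERSION to match the NameNode's would preserve existing blocks, but on an empty cluster wiping the directory is simpler.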