Support Questions
Find answers, ask questions, and share your expertise

Re: Error during service installation

Contributor

Can someone tell me what error code 544 is, how to resolve it and retry?  Is there a manual for error codes?

Re: Error during service installation

Contributor

What is command 544?  Is there a list of commands and pre-requisites for successful completion of same?

Re: Error during service installation

Since you re-created a cluster on the same hosts, HDFS was already formatted from your previous run. Your overall workflow should have succeeded and you can ignore this error. If all your services started, then you're good to go.

544 is the command ID in the CM database, not an error code. It's basically for debugging.
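If you want to dig into what a particular command did, you can also pull it from the Cloudera Manager REST API by that ID. A hypothetical sketch (the host, default port 7180, API version, and admin/admin credentials are all assumptions here; check /api/version on your own server):

```shell
#!/bin/sh
# cm_command_url: build the CM API endpoint for looking up a command by ID.
# Port 7180 is the CM default; the API version is an assumption for CM 5.x.
cm_command_url() {
  host="$1"; id="$2"
  printf 'http://%s:7180/api/v6/commands/%s\n' "$host" "$id"
}

# Then, e.g. (admin/admin is the default login, assumed here):
#   curl -u admin:admin "$(cm_command_url hadoopmngr 544)"
cm_command_url hadoopmngr 544
```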

Re: Error during service installation

Contributor

That's true, except that Hive never got to recreate its database; that's a later step that was not completed because of this error. If we could skip the error and move through the rest of the processes (18 more beyond the HDFS format component), that would be great...

 

With the initial setup left incomplete, the hive user ID and PostgreSQL setup never finished, and now PostgreSQL is giving me an invalid user ID/password error, which makes sense since the script never got to that point due to the above error...

Re: Error during service installation

Contributor

Inspect hosts for correctness

Validations

  Inspector ran on all 4 hosts. 
  Individual hosts resolved their own hostnames correctly. 
  No errors were found while looking for conflicting init scripts. 
  No errors were found while checking /etc/hosts. 
  All hosts resolved localhost to 127.0.0.1. 
  All hosts checked resolved each other's hostnames correctly and in a timely manner. 
  Host clocks are approximately in sync (within ten minutes). 
  Host time zones are consistent across the cluster. 
  No users or groups are missing. 
  No kernel versions that are known to be bad are running. 
  All hosts have /proc/sys/vm/swappiness set to 0. 
  No performance concerns with Transparent Huge Pages settings. 
  0 hosts are running CDH 4 and 4 hosts are running CDH 5. 
  All checked hosts in each cluster are running the same version of components. 
  All managed hosts have consistent versions of Java. 
  All checked Cloudera Management Daemons versions are consistent with the server. 
  All checked Cloudera Management Agents versions are consistent with the server. 

Version Summary

Cluster 1 — CDH 5 

Hosts: hadoop0, hadoop1, hadoop2, hadoopmngr

Component  Version  CDH Version
Bigtop-Tomcat (CDH 5 only) 0.7.0+cdh5.0.0+0 CDH5
Crunch (CDH 5 only) 0.9.0+cdh5.0.0+19 CDH5
Flume NG 1.4.0+cdh5.0.0+90 CDH5
MapReduce 1 2.2.0+cdh5.0.0+1610 CDH5
HDFS 2.2.0+cdh5.0.0+1610 CDH5
HttpFS 2.2.0+cdh5.0.0+1610 CDH5
MapReduce 2 2.2.0+cdh5.0.0+1610 CDH5
YARN 2.2.0+cdh5.0.0+1610 CDH5
Hadoop 2.2.0+cdh5.0.0+1610 CDH5
Lily HBase Indexer 1.3+cdh5.0.0+39 CDH5
HBase 0.96.1.1+cdh5.0.0+23 CDH5
HCatalog 0.12.0+cdh5.0.0+265 CDH5
Hive 0.12.0+cdh5.0.0+265 CDH5
Hue 3.5.0+cdh5.0.0+186 CDH5
Impala 1.2.3+cdh5.0.0+0 CDH5
Kite (CDH 5 only) 0.10.0+cdh5.0.0+69 CDH5
Llama (CDH 5 only) 1.0.0+cdh5.0.0+0 CDH5
Mahout 0.8+cdh5.0.0+28 CDH5
Oozie 4.0.0+cdh5.0.0+144 CDH5
Parquet 1.2.5+cdh5.0.0+29 CDH5
Pig 0.12.0+cdh5.0.0+20 CDH5
Solr 4.4.0+cdh5.0.0+163 CDH5
Spark 0.9.0 CDH5
Sqoop 1.4.4+cdh5.0.0+40 CDH5
Sqoop2 1.99.3+cdh5.0.0+19 CDH5
Whirr 0.8.2+cdh5.0.0+20 CDH5
Zookeeper 3.4.5+cdh5.0.0+27 CDH5
Cloudera Manager Management Daemons 5.0.0-beta-2 Not applicable
Java 6 java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)  Not applicable
Java 7 java version "1.7.0_25" Java(TM) SE Runtime Environment (build 1.7.0_25-b15) Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)  Not applicable
Cloudera Manager Agent 5.0.0-beta-2 Not applicable
 

Re: Error during service installation

Contributor

 Skipped. Will create database in later step

Re: Error during service installation

Contributor

Basically, DIO, if you look at what is happening: because the HDFS format step does not complete, the Hive database setup step is not executed either, and then when we try to start Hive we get a PostgreSQL access-denied error.

 
Progress

Command    Status    Started at                   Ended at
First Run  Finished  Mar 31, 2014 9:35:56 PM EDT  Mar 31, 2014 9:36:55 PM EDT

Failed to perform First Run of services.

Command Progress


Completed 3 of 20 steps.

  Waiting for ZooKeeper Service to initialize: Finished waiting
  Starting ZooKeeper Service: Completed 1/1 steps successfully
  Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty: Command (563) has failed
  Starting HDFS Service
  Creating HDFS /tmp directory
  Creating MR2 job history directory
  Creating NodeManager remote application log directory
  Starting YARN (MR2 Included) Service
  Creating Hive Metastore Database
  Creating Hive Metastore Database Tables
  Creating Hive user directory
  Creating Hive warehouse directory
  Starting Hive Service
  Creating Oozie database
  Installing Oozie ShareLib in HDFS
  Starting Oozie Service
  Creating Sqoop user directory
  Starting Sqoop Service
  Starting Hue Service
  Deploying Client Configuration

Re: Error during service installation

Assuming there's nothing valuable in your NameNode, try deleting your NameNode data directories and retrying your first run. There should be a retry button on the page that says First Run. You can find the namenode data directories on the configuration page in the wizard, or by clicking on HDFS and viewing the configuration.

It may also help to see the stdout and stderr log of the NameNode format command, which you can find by clicking on HDFS, then commands.
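If it helps, here is a minimal sketch of that cleanup, assuming the /dfs/nn default path mentioned in this thread (double-check your actual NameNode data directory value in the HDFS configuration first, and only do this if there is nothing valuable in it):

```shell
#!/bin/sh
# clear_dir: empty a directory's contents but keep the directory itself,
# preserving the ownership/permissions the retried First Run will expect.
clear_dir() {
  dir="${1:?usage: clear_dir <path>}"            # refuse an empty argument
  [ -d "$dir" ] && rm -rf "${dir:?}"/* "${dir:?}"/.[!.]* 2>/dev/null
  return 0
}

# /dfs/nn is an assumption taken from this thread; adjust to your config.
clear_dir "${NN_DIR:-/dfs/nn}"
```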

Re: Error during service installation

Contributor

I'll try that and reply.  Thanks!

Re: Error during service installation

Contributor

Okay... progress! I've now gotten past the HDFS format by removing /dfs/nn on the manager, but now I have 0 DataNodes started because of the following error on each node:

 

10:03:06.781 PM  FATAL  org.apache.hadoop.hdfs.server.datanode.DataNode
Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to hadoopmngr/192.168.0.102:8022
java.io.IOException: Incompatible clusterIDs in /dfs/dn: namenode clusterID = cluster111; datanode clusterID = cluster6
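For what it's worth, the clusterID each side expects is recorded in a VERSION file under its data directory, so you can confirm the mismatch directly. A minimal sketch (the /dfs/nn and /dfs/dn paths are assumptions based on the directories mentioned in this thread; since the cluster holds no data yet, the usual fix after a re-format is simply to wipe the DataNode directory and restart the DataNode so it re-registers):

```shell
#!/bin/sh
# get_cluster_id: print the clusterID recorded in a Hadoop VERSION file.
get_cluster_id() {
  sed -n 's/^clusterID=//p' "$1"
}

# Hypothetical default paths from this thread's setup:
NN_VERSION="${NN_VERSION:-/dfs/nn/current/VERSION}"
DN_VERSION="${DN_VERSION:-/dfs/dn/current/VERSION}"

if [ -f "$NN_VERSION" ] && [ -f "$DN_VERSION" ]; then
  nn_id=$(get_cluster_id "$NN_VERSION")
  dn_id=$(get_cluster_id "$DN_VERSION")
  if [ "$nn_id" != "$dn_id" ]; then
    echo "Mismatch: namenode=$nn_id datanode=$dn_id"
    # With no data to lose, clearing /dfs/dn and restarting the DataNode
    # lets it re-register with the freshly formatted NameNode.
  fi
fi
```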