Support Questions
Find answers, ask questions, and share your expertise

Error during service installation


First, in order to test the database login connectivity during installation path A: the server name was missing (i.e. blank:7432 instead of servername:7432) and needed to be added. Once it passed the connection / login ID / password verification test, I clicked Continue and received the following error:
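As an aside, the malformed prefilled value (a bare ":7432") is easy to catch with a quick sanity check before pasting an address into the form. The `check_db_addr` helper below is hypothetical, not anything shipped with Cloudera Manager:

```shell
# Hypothetical sanity check for a "host:port" database address; catches the
# blank-hostname value the wizard prefilled (":7432" instead of "servername:7432").
check_db_addr() {
  local addr=$1
  local host=${addr%%:*}   # everything before the first ':'
  local port=${addr##*:}   # everything after the last ':'
  if [ -z "$host" ] || [ "$host" = "$addr" ]; then
    echo "invalid: '$addr' (expected servername:port, e.g. servername:7432)"
  else
    echo "ok: host=$host port=$port"
  fi
}

check_db_addr ":7432"            # the broken prefilled value
check_db_addr "hadoopmngr:7432"  # a valid address
```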



A server error has occurred. Send the following information to Cloudera.


Version: Cloudera Enterprise Data Hub Trial 5.0.0-beta-2 (#119 built by jenkins on 20140209-0301 git: 8acd3c5559ccf82bf374d49bbf00bce58dee286e) 

java.lang.NullPointerException: Service must not be null to generate role name
 at checkNotNull(), line 208

Stack Trace:
1. checkNotNull(), line 208
2. com.cloudera.server.cmf.DbRoleNameGenerator.generate(), line 85
3. com.cloudera.server.cmf.cluster.RulesCluster.assignRole(), line 406
4. buildCustomCluster(), line 587
5. buildCluster(), line 521
6. buildAccs(), line 492
7. handleReview(), line 433
8. renderReviewStep(), line 405
9. $$FastClassByCGLIB$$71adc282.invoke(), <generated>
10. net.sf.cglib.proxy.MethodProxy.invoke(), line 191
11. org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(), line 688
12. org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(), line 150
13. invoke(), line 61
14. org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(), line 172
15. org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(), line 621
16. $$EnhancerByCGLIB$$5048f4b.renderReviewStep(), <generated>
17. sun.reflect.NativeMethodAccessorImpl.invoke0(), native method
18. sun.reflect.NativeMethodAccessorImpl.invoke(), line 57
19. sun.reflect.DelegatingMethodAccessorImpl.invoke(), line 43
20. java.lang.reflect.Method.invoke(), line 606
21. invokeHandlerMethod(), line 176
22. org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(), line 436
23. org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(), line 424
24. org.springframework.web.servlet.DispatcherServlet.doDispatch(), line 790
25. org.springframework.web.servlet.DispatcherServlet.doService(), line 719
26. org.springframework.web.servlet.FrameworkServlet.processRequest(), line 669
27. org.springframework.web.servlet.FrameworkServlet.doGet(), line 574
28. javax.servlet.http.HttpServlet.service(), line 707
29. javax.servlet.http.HttpServlet.service(), line 820
30. org.mortbay.jetty.servlet.ServletHolder.handle(), line 511
31. org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(), line 1221
32. org.mortbay.servlet.UserAgentFilter.doFilter(), line 78
33. org.mortbay.servlet.GzipFilter.doFilter(), line 131
34. org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(), line 1212
35. com.jamonapi.http.JAMonServletFilter.doFilter(), line 48
36. org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(), line 1212
37. com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter.doFilter(), line 109
38. org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(), line 1212
39. $VirtualFilterChain.doFilter(), line 311
40. invoke(), line 116
41. doFilter(), line 83
42. $VirtualFilterChain.doFilter(), line 323
43. doFilter(), line 113
44. $VirtualFilterChain.doFilter(), line 323
45. doFilter(), line 101
46. $VirtualFilterChain.doFilter(), line 323
47. doFilter(), line 113
48. $VirtualFilterChain.doFilter(), line 323
49. doFilter(), line 146
50. $VirtualFilterChain.doFilter(), line 323
51. doFilter(), line 54
52. $VirtualFilterChain.doFilter(), line 323
53. doFilter(), line 45
54. $VirtualFilterChain.doFilter(), line 323
55. doFilter(), line 182
56. $VirtualFilterChain.doFilter(), line 323
57. doFilter(), line 105
58. $VirtualFilterChain.doFilter(), line 323
59. doFilter(), line 87
60. $VirtualFilterChain.doFilter(), line 323
61. doFilter(), line 125
62. $VirtualFilterChain.doFilter(), line 323
63. doFilter(), line 173
64. org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(), line 237
65. org.springframework.web.filter.DelegatingFilterProxy.doFilter(), line 167
66. org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(), line 1212
67. org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(), line 88
68. org.springframework.web.filter.OncePerRequestFilter.doFilter(), line 76
69. org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(), line 1212
70. org.mortbay.jetty.servlet.ServletHandler.handle(), line 399
71. handle(), line 216
72. org.mortbay.jetty.servlet.SessionHandler.handle(), line 182
73. handle(), line 216
74. org.mortbay.jetty.handler.ContextHandler.handle(), line 766
75. org.mortbay.jetty.webapp.WebAppContext.handle(), line 450
76. org.mortbay.jetty.handler.HandlerWrapper.handle(), line 152
77. org.mortbay.jetty.handler.StatisticsHandler.handle(), line 53
78. org.mortbay.jetty.handler.HandlerWrapper.handle(), line 152
79. org.mortbay.jetty.Server.handle(), line 326
80. org.mortbay.jetty.HttpConnection.handleRequest(), line 542
81. org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(), line 928
82. org.mortbay.jetty.HttpParser.parseNext(), line 549
83. org.mortbay.jetty.HttpParser.parseAvailable(), line 212
84. org.mortbay.jetty.HttpConnection.handle(), line 404
85. run(), line 410
86. org.mortbay.thread.QueuedThreadPool$PoolThread.run(), line 582



Can someone tell me what error code 544 is, how to resolve it, and how to retry? Is there a manual for error codes?


What is command 544? Is there a list of commands and of the prerequisites for their successful completion?

Since you re-created a cluster on the same hosts, HDFS was already formatted from your previous run. Your overall workflow should have succeeded and you can ignore this error. If all your services started, then you're good to go.

544 is the command ID in the CM database, not an error code. It's basically for debugging.


That's true, except that Hive never got to recreate its database; that's a later step that was not completed because of this error. If we could skip the error and move through the rest of the processes (18 more beyond the HDFS format step), that would be great...


With the initial setup left unfinished, the Hive user ID and PostgreSQL setup were incomplete, and now PostgreSQL is giving me an invalid user ID/password error, which makes sense since the script never got to that point due to the above error...


Inspect hosts for correctness





  Inspector ran on all 4 hosts. 
  Individual hosts resolved their own hostnames correctly. 
  No errors were found while looking for conflicting init scripts. 
  No errors were found while checking /etc/hosts. 
  All hosts resolved localhost to 
  All hosts checked resolved each other's hostnames correctly and in a timely manner. 
  Host clocks are approximately in sync (within ten minutes). 
  Host time zones are consistent across the cluster. 
  No users or groups are missing. 
  No kernel versions that are known to be bad are running. 
  All hosts have /proc/sys/vm/swappiness set to 0. 
  No performance concerns with Transparent Huge Pages settings. 
  0 hosts are running CDH 4 and 4 hosts are running CDH 5. 
  All checked hosts in each cluster are running the same version of components. 
  All managed hosts have consistent versions of Java. 
  All checked Cloudera Management Daemons versions are consistent with the server. 
  All checked Cloudera Management Agents versions are consistent with the server. 

Version Summary



Cluster 1 — CDH 5 


hadoop0, hadoop1, hadoop2, hadoopmngr 



CDH Version

Bigtop-Tomcat (CDH 5 only) 0.7.0+cdh5.0.0+0 CDH5
Crunch (CDH 5 only) 0.9.0+cdh5.0.0+19 CDH5
Flume NG 1.4.0+cdh5.0.0+90 CDH5
MapReduce 1 2.2.0+cdh5.0.0+1610 CDH5
HDFS 2.2.0+cdh5.0.0+1610 CDH5
HttpFS 2.2.0+cdh5.0.0+1610 CDH5
MapReduce 2 2.2.0+cdh5.0.0+1610 CDH5
YARN 2.2.0+cdh5.0.0+1610 CDH5
Hadoop 2.2.0+cdh5.0.0+1610 CDH5
Lily HBase Indexer 1.3+cdh5.0.0+39 CDH5
HBase CDH5
HCatalog 0.12.0+cdh5.0.0+265 CDH5
Hive 0.12.0+cdh5.0.0+265 CDH5
Hue 3.5.0+cdh5.0.0+186 CDH5
Impala 1.2.3+cdh5.0.0+0 CDH5
Kite (CDH 5 only) 0.10.0+cdh5.0.0+69 CDH5
Llama (CDH 5 only) 1.0.0+cdh5.0.0+0 CDH5
Mahout 0.8+cdh5.0.0+28 CDH5
Oozie 4.0.0+cdh5.0.0+144 CDH5
Parquet 1.2.5+cdh5.0.0+29 CDH5
Pig 0.12.0+cdh5.0.0+20 CDH5
Solr 4.4.0+cdh5.0.0+163 CDH5
Spark 0.9.0 CDH5
Sqoop 1.4.4+cdh5.0.0+40 CDH5
Sqoop2 1.99.3+cdh5.0.0+19 CDH5
Whirr 0.8.2+cdh5.0.0+20 CDH5
Zookeeper 3.4.5+cdh5.0.0+27 CDH5
Cloudera Manager Management Daemons 5.0.0-beta-2 Not applicable
Java 6 java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)  Not applicable
Java 7 java version "1.7.0_25" Java(TM) SE Runtime Environment (build 1.7.0_25-b15) Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)  Not applicable
Cloudera Manager Agent 5.0.0-beta-2 Not applicable


 Skipped. Will create database in later step


Basically, this is what is happening: because the HDFS format does not complete, the Hive database setup step is not executed either; then, when we try to start Hive, we get a PostgreSQL access-denied error.






First Run: Finished. Started at Mar 31, 2014 9:35:56 PM EDT; ended at Mar 31, 2014 9:36:55 PM EDT.

Failed to perform First Run of services.

Command Progress

Completed 3 of 20 steps.


Waiting for ZooKeeper Service to initialize
Finished waiting
Starting ZooKeeper Service
Completed 1/1 steps successfully
Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty.
Command (563) has failed
Starting HDFS Service
Creating HDFS /tmp directory
Creating MR2 job history directory
Creating NodeManager remote application log directory
Starting YARN (MR2 Included) Service
Creating Hive Metastore Database
Creating Hive Metastore Database Tables
Creating Hive user directory
Creating Hive warehouse directory
Starting Hive Service
Creating Oozie database
Installing Oozie ShareLib in HDFS
Starting Oozie Service
Creating Sqoop user directory
Starting Sqoop Service
Starting Hue Service
Deploying Client Configuration

Assuming there's nothing valuable in your NameNode, try deleting your NameNode data directories and retrying your first run. There should be a retry button on the page that says First Run. You can find the namenode data directories on the configuration page in the wizard, or by clicking on HDFS and viewing the configuration.

It may also help to see the stdout and stderr log of the NameNode format command, which you can find by clicking on HDFS, then commands.
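For what it's worth, the failing step only formats HDFS when the NameNode name directory is empty, which is why leftover metadata from the previous cluster blocks it. A rough mock of that emptiness check (a temp dir stands in for the usual /dfs/nn):

```shell
# Mock of the "format only if empty" check the first-run wizard performs;
# the real directory is the NameNode data dir (e.g. /dfs/nn), mocked with mktemp.
namedir=$(mktemp -d)

check_namedir() {
  if [ -z "$(ls -A "$namedir")" ]; then
    echo "empty: safe to format"
  else
    echo "non-empty: format is skipped and the command fails"
  fi
}

check_namedir              # fresh directory: format would proceed
touch "$namedir/VERSION"   # simulate metadata left over from the previous cluster
check_namedir              # now the format step would refuse to run
```

Deleting the name directory's contents (as suggested above) puts you back in the first case, so the retry can format cleanly.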


I'll try that and reply.  Thanks!


Okay... progress... I've now gotten past the HDFS format by removing /dfs/nn on the manager, but now I have 0 DataNodes started because of the following error on each node:


10:03:06.781 PM FATAL org.apache.hadoop.hdfs.server.datanode.DataNode Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to hadoopmngr/ Incompatible clusterIDs in /dfs/dn: namenode clusterID = cluster111; datanode clusterID = cluster6
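In case it helps the next reader: that mismatch can be confirmed by comparing the clusterID line in the NameNode's and a DataNode's VERSION files (normally /dfs/nn/current/VERSION and /dfs/dn/current/VERSION). The sketch below mocks both files with the IDs from the log above; on a throwaway cluster, the usual fix is to clear each DataNode's data directory and restart the DataNode role so it re-registers with the new clusterID.

```shell
# Mock VERSION files using the clusterIDs from the log above; the real files
# live at /dfs/nn/current/VERSION (NameNode) and /dfs/dn/current/VERSION (DataNode).
tmp=$(mktemp -d)
mkdir -p "$tmp/nn/current" "$tmp/dn/current"
printf 'clusterID=cluster111\n' > "$tmp/nn/current/VERSION"
printf 'clusterID=cluster6\n'   > "$tmp/dn/current/VERSION"

# Extract the clusterID from each file.
nn_id=$(sed -n 's/^clusterID=//p' "$tmp/nn/current/VERSION")
dn_id=$(sed -n 's/^clusterID=//p' "$tmp/dn/current/VERSION")

if [ "$nn_id" != "$dn_id" ]; then
  echo "Mismatch: namenode=$nn_id datanode=$dn_id"
  # Fix on a cluster with no data worth keeping: on each DataNode,
  #   rm -rf /dfs/dn/*
  # then restart the DataNode role in Cloudera Manager.
fi
```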