Support Questions

Error during service installation


First, in order to test the database login connectivity during installation path A, the server name was missing (i.e. blank:7432 instead of servername:7432) and needed to be added. Once it passed the connection / login ID and password verification test, I clicked Continue and received the following error:
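Before retrying the wizard, a quick sanity check is to verify that the database host/port from the connection string is actually reachable. A minimal sketch, assuming bash with its /dev/tcp redirection feature; "servername" is a placeholder, not a real host:

```shell
# Hedged sketch: check TCP reachability of the database host/port from the
# wizard's connection string (the form showed "blank:7432", i.e. the server
# name was missing). "servername" below is a placeholder for your own host.
check_port() {
  # succeeds only if a TCP connection to $1:$2 can be opened within 2 seconds
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

host=servername   # replace with your Cloudera Manager / database host
port=7432         # the embedded database port shown by the wizard

if check_port "$host" "$port"; then
  echo "database port reachable: $host:$port"
else
  echo "cannot reach $host:$port - check the server name in the wizard"
fi
```

If this prints "cannot reach", fix the host name in the wizard's Database Host Name field before testing the login credentials again.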



A server error has occurred. Send the following information to Cloudera.


Version: Cloudera Enterprise Data Hub Trial 5.0.0-beta-2 (#119 built by jenkins on 20140209-0301 git: 8acd3c5559ccf82bf374d49bbf00bce58dee286e) 

java.lang.NullPointerException: Service must not be null to generate role name
 at line 208 in checkNotNull()

Stack Trace:
1. line 208 in checkNotNull()
2. line 85 in com.cloudera.server.cmf.DbRoleNameGenerator generate()
3. line 406 in com.cloudera.server.cmf.cluster.RulesCluster assignRole()
4. line 587 in buildCustomCluster()
5. line 521 in buildCluster()
6. line 492 in buildAccs()
7. line 433 in handleReview()
8. line 405 in renderReviewStep()
9. <generated> line -1 in $$FastClassByCGLIB$$71adc282 invoke()
10. line 191 in net.sf.cglib.proxy.MethodProxy invoke()
11. line 688 in org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation invokeJoinpoint()
12. line 150 in org.springframework.aop.framework.ReflectiveMethodInvocation proceed()
13. line 61 in invoke()
14. line 172 in org.springframework.aop.framework.ReflectiveMethodInvocation proceed()
15. line 621 in org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor intercept()
16. <generated> line -1 in $$EnhancerByCGLIB$$5048f4b renderReviewStep()
17. line -2 in sun.reflect.NativeMethodAccessorImpl invoke0()
18. line 57 in sun.reflect.NativeMethodAccessorImpl invoke()
19. line 43 in sun.reflect.DelegatingMethodAccessorImpl invoke()
20. line 606 in java.lang.reflect.Method invoke()
21. line 176 in invokeHandlerMethod()
22. line 436 in org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter invokeHandlerMethod()
23. line 424 in org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter handle()
24. line 790 in org.springframework.web.servlet.DispatcherServlet doDispatch()
25. line 719 in org.springframework.web.servlet.DispatcherServlet doService()
26. line 669 in org.springframework.web.servlet.FrameworkServlet processRequest()
27. line 574 in org.springframework.web.servlet.FrameworkServlet doGet()
28. line 707 in javax.servlet.http.HttpServlet service()
29. line 820 in javax.servlet.http.HttpServlet service()
30. line 511 in org.mortbay.jetty.servlet.ServletHolder handle()
31. line 1221 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter()
32. line 78 in org.mortbay.servlet.UserAgentFilter doFilter()
33. line 131 in org.mortbay.servlet.GzipFilter doFilter()
34. line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter()
35. line 48 in com.jamonapi.http.JAMonServletFilter doFilter()
36. line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter()
37. line 109 in com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter doFilter()
38. line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter()
39. line 311 in $VirtualFilterChain doFilter()
40. line 116 in invoke()
41. line 83 in doFilter()
42. line 323 in $VirtualFilterChain doFilter()
43. line 113 in doFilter()
44. line 323 in $VirtualFilterChain doFilter()
45. line 101 in doFilter()
46. line 323 in $VirtualFilterChain doFilter()
47. line 113 in doFilter()
48. line 323 in $VirtualFilterChain doFilter()
49. line 146 in doFilter()
50. line 323 in $VirtualFilterChain doFilter()
51. line 54 in doFilter()
52. line 323 in $VirtualFilterChain doFilter()
53. line 45 in doFilter()
54. line 323 in $VirtualFilterChain doFilter()
55. line 182 in doFilter()
56. line 323 in $VirtualFilterChain doFilter()
57. line 105 in doFilter()
58. line 323 in $VirtualFilterChain doFilter()
59. line 87 in doFilter()
60. line 323 in $VirtualFilterChain doFilter()
61. line 125 in doFilter()
62. line 323 in $VirtualFilterChain doFilter()
63. line 173 in doFilter()
64. line 237 in org.springframework.web.filter.DelegatingFilterProxy invokeDelegate()
65. line 167 in org.springframework.web.filter.DelegatingFilterProxy doFilter()
66. line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter()
67. line 88 in org.springframework.web.filter.CharacterEncodingFilter doFilterInternal()
68. line 76 in org.springframework.web.filter.OncePerRequestFilter doFilter()
69. line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter()
70. line 399 in org.mortbay.jetty.servlet.ServletHandler handle()
71. line 216 in handle()
72. line 182 in org.mortbay.jetty.servlet.SessionHandler handle()
73. line 216 in handle()
74. line 766 in org.mortbay.jetty.handler.ContextHandler handle()
75. line 450 in org.mortbay.jetty.webapp.WebAppContext handle()
76. line 152 in org.mortbay.jetty.handler.HandlerWrapper handle()
77. line 53 in org.mortbay.jetty.handler.StatisticsHandler handle()
78. line 152 in org.mortbay.jetty.handler.HandlerWrapper handle()
79. line 326 in org.mortbay.jetty.Server handle()
80. line 542 in org.mortbay.jetty.HttpConnection handleRequest()
81. line 928 in org.mortbay.jetty.HttpConnection$RequestHandler headerComplete()
82. line 549 in org.mortbay.jetty.HttpParser parseNext()
83. line 212 in org.mortbay.jetty.HttpParser parseAvailable()
84. line 404 in org.mortbay.jetty.HttpConnection handle()
85. line 410 in run()
86. line 582 in org.mortbay.thread.QueuedThreadPool$PoolThread run()



Retrying after deleting everything in /dfs by running:


cd /dfs

rm -r -f *


on each host, including the CDH Manager server...


This actually worked!!! Thanks Darren, thanks Dio. Here are the steps for when you need to reinstall after a botched delete of Hive or ZooKeeper and the service add gives you a NullPointerException (see my prior posts).


1) Stop the cluster

2) Delete the cluster

3) cd /dfs on each host

4) rm -r -f *

5) Using the manager, add the cluster

6) Select the Hosts tab and select all hosts

7) Choose all the defaults and click Continue to start the First Run command
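Steps 3 and 4 above can be sketched as one loop over the hosts. This is only a hedged illustration: the host names are hypothetical placeholders, and it defaults to a dry run that prints the commands rather than actually running the destructive rm over ssh.

```shell
# Hypothetical sketch of steps 3-4: wipe /dfs on every host before
# re-adding the cluster. HOSTS is a placeholder list; DRY_RUN defaults to 1
# so the script only prints what it would do. Set DRY_RUN=0 to run for real.
HOSTS="node1 node2 node3"   # hypothetical host names; include the CM server
DRY_RUN="${DRY_RUN:-1}"

wipe_dfs() {
  for h in $HOSTS; do
    if [ "$DRY_RUN" = "1" ]; then
      echo "[dry-run] ssh root@$h 'rm -rf /dfs/*'"
    else
      ssh "root@$h" 'rm -rf /dfs/*'
    fi
  done
}

wipe_dfs
```

Only do this on a cluster you intend to rebuild from scratch: everything under /dfs (NameNode and DataNode data) is destroyed.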




Command Progress

Completed 20 of 20 steps.


Waiting for ZooKeeper Service to initialize

Finished waiting



Starting ZooKeeper Service

Completed 1/1 steps successfully



Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty.

Successfully formatted NameNode.



Starting HDFS Service

Successfully started HDFS service



Creating HDFS /tmp directory

HDFS directory /tmp already exists.



Creating MR2 job history directory

Successfully created MR2 job history directory.



Creating NodeManager remote application log directory

Successfully created NodeManager remote application log directory.



Starting YARN (MR2 Included) Service

Successfully started service



Creating Hive Metastore Database

Created Hive Metastore Database.



Creating Hive Metastore Database Tables

Created Hive Metastore Database Tables successfully.



Creating Hive user directory

Successfully created Hive user directory.



Creating Hive warehouse directory

Successfully created Hive warehouse directory.



Starting Hive Service

Service started successfully.



Creating Oozie database

Oozie database created successfully.



Installing Oozie ShareLib in HDFS

Successfully installed Oozie ShareLib



Starting Oozie Service

Service started successfully.



Creating Sqoop user directory

Successfully created Sqoop user directory.



Starting Sqoop Service

Service started successfully.



Starting Hue Service

Service started successfully.








Glad you got it working!


Thanks Dio. The only minor issue was on HiveServer2: it kept trying to restart because of the following error: Permission denied: user=hive, access=WRITE, inode="/tmp":hdfs:supergroup:drwxr-xr-x


This is clearly a permission issue, quickly solved with a chmod on the Hadoop file system:


[root@hadoopmngr tmp]# sudo -u hive hadoop fs -ls /tmp
Found 2 items
drwxr-xr-x   - hdfs   supergroup          0 2014-04-01 15:50 /tmp/.cloudera_health_monitoring_canary_files
drwxrwxrwt   - mapred hadoop              0 2014-03-31 22:20 /tmp/logs


[root@hadoopmngr tmp]# sudo -u hdfs hadoop fs -chmod -R 777 /
[root@hadoopmngr tmp]# sudo -u hive hadoop fs -ls /tmp
Found 2 items
drwxrwxrwx   - hdfs   supergroup          0 2014-04-01 15:55 /tmp/.cloudera_health_monitoring_canary_files
drwxrwxrwt   - mapred hadoop              0 2014-03-31 22:20 /tmp/logs


And then HiveServer2 started just fine.


Thanks again for your help.



Setting your entire HDFS to 777 perms is not recommended. You just needed to make sure there was a 777 /tmp directory. CM normally creates this for you as part of HDFS setup, and there's a command in HDFS to create the /tmp directory if you need to run it manually.


It's not really convenient to undo this, so if you're fine with having effectively no HDFS permissions, you can keep things as is. Otherwise you can manually re-restrict permissions on each folder appropriately. Anybody else hitting this issue should just create /tmp instead of doing the recursive chmod.
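The narrower fix suggested above could be sketched as the two commands below. Since `hadoop` is only present on a cluster node, this sketch just assembles and prints the commands; on a real node, remove the leading `echo`. The 1777 (sticky-bit) mode is the conventional mode for a shared HDFS /tmp, matching the drwxrwxrwt shown in the listings earlier in the thread.

```shell
# Hedged sketch of the narrower fix: create HDFS /tmp with the standard
# world-writable sticky-bit mode (1777) instead of chmod -R 777 on "/".
# On a real cluster node, drop the leading "echo" to execute the commands.
for cmd in \
  "sudo -u hdfs hadoop fs -mkdir -p /tmp" \
  "sudo -u hdfs hadoop fs -chmod 1777 /tmp"
do
  echo "$cmd"
done
```

The sticky bit (the leading 1) lets every user write to /tmp while preventing users from deleting each other's files, which is why the recursive 777 is unnecessary here.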


You are correct. I only noticed this after running the command. I should have restricted it to /tmp.


I'm not concerned as this is a research environment with no restricted data.


Thanks dlo,




The solution worked for me. My issue was due to not completing the Cloudera Manager install in the one window: I thought that I could restart from where I left off, but had nothing but issues.

I had tried to install Hive again and also Hue, but still had issues, so I suspect that there may have been issues prior to these, and there were dependencies.


The solution appears to be a good way to clean out and rebuild a cluster for probably multiple types of issues, and it does not really take that long to complete.


Express Cluster Installation - Add Hosts


Express Cluster Installation - Add Services


Choose the CDH 5 services that you want to install on your cluster.


Choose a combination of services to install.
 Core Hadoop

HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, and Sqoop
 Core with Real-Time Delivery

HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and HBase

 Core with Real-Time Query

HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and Impala

 Core with Real-Time Search

HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and Solr

 Core with Spark

HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, and Spark

 All Services

HDFS, YARN (Includes MapReduce 2), ZooKeeper, Oozie, Hive, Hue, Sqoop, HBase, Impala, Solr, Spark, and Keystore Indexer

 Custom Services

Choose your own services. Services required by chosen services will automatically be included. Note that Flume can be added after your initial cluster has been set up.

















Database Setup

On this page you configure and test database connections. If using custom databases, create the databases first according to the Installing and Configuring an External Database section of the Installation Guide.


Use Custom Databases / Use Embedded Database

When using the Embedded Database, passwords are auto-generated. Please copy them down.



Database Host Name   Database Type                 Database Name   Username   Password
hadoopmngr:7432      MySQL / PostgreSQL / Oracle   hive1           hive1      dAPyUZE0mt


 Skipped. Will create database in a later step.