Created on 03-18-2014 10:24 PM - edited 09-16-2022 01:55 AM
First, in order to test the database login connectivity during installation Path A, the server name was missing (i.e., blank:7432 instead of servername:7432) and needed to be added. Once it passed the connection / login ID and password verification test, I clicked Continue and received the following error:
Error
A server error has occurred. Send the following information to Cloudera.
Path: http://192.168.0.102:7180/cmf/clusters/1/express-add-services/review
Version: Cloudera Enterprise Data Hub Trial 5.0.0-beta-2 (#119 built by jenkins on 20140209-0301 git: 8acd3c5559ccf82bf374d49bbf00bce58dee286e)
java.lang.NullPointerException: Service must not be null to generate role name
Stack trace:
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:208)
    at com.cloudera.server.cmf.DbRoleNameGenerator.generate(DbRoleNameGenerator.java:85)
    at com.cloudera.server.cmf.cluster.RulesCluster.assignRole(RulesCluster.java:406)
    at com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController.buildCustomCluster(ExpressAddServicesWizardController.java:587)
    at com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController.buildCluster(ExpressAddServicesWizardController.java:521)
    at com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController.buildAccs(ExpressAddServicesWizardController.java:492)
    at com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController.handleReview(ExpressAddServicesWizardController.java:433)
    at com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController.renderReviewStep(ExpressAddServicesWizardController.java:405)
    at com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController$$FastClassByCGLIB$$71adc282.invoke(<generated>)
    at net.sf.cglib.proxy.MethodProxy.invoke(MethodProxy.java:191)
    at org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation.invokeJoinpoint(Cglib2AopProxy.java:688)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
    at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:61)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor.intercept(Cglib2AopProxy.java:621)
    at com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController$$EnhancerByCGLIB$$5048f4b.renderReviewStep(<generated>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:176)
    at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:436)
    at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:424)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:790)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:669)
    at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:574)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
    at org.mortbay.servlet.GzipFilter.doFilter(GzipFilter.java:131)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at com.jamonapi.http.JAMonServletFilter.doFilter(JAMonServletFilter.java:48)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter.doFilter(JavaMelodyFacade.java:109)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:311)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:116)
    at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:101)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter.doFilter(RememberMeAuthenticationFilter.java:146)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:182)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.session.ConcurrentSessionFilter.doFilter(ConcurrentSessionFilter.java:125)
    at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
    at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:173)
    at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
    at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
    at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.handler.StatisticsHandler.handle(StatisticsHandler.java:53)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Created 03-31-2014 07:33 PM
Retrying after deleting everything in /dfs by running:
cd /dfs
rm -r -f *
on each host, including the CDH Manager server.
This actually worked! Thanks Darren, thanks DIO. Here are the steps to follow when you need to reinstall after a botched delete of Hive or ZooKeeper and the service add gives you a NullPointerException (see my prior posts).
1) Stop the cluster.
2) Delete the cluster.
3) cd /dfs on each host.
4) rm -r -f *
5) Using the manager, add the cluster back.
6) Select the Hosts tab and select all hosts.
7) Choose all the defaults and click Continue to start the First Run command (a scripted version of steps 3 and 4 is sketched below).
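In case it helps anyone else, here is a rough sketch of scripting steps 3 and 4, assuming passwordless SSH as root; host1 through host4 are placeholders for your actual cluster hosts (including the CM server):

# wipe the /dfs data directories on every host before re-running the wizard
for h in host1 host2 host3 host4; do
    ssh root@"$h" 'rm -rf /dfs/*'
done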
results:
Command Progress
Completed 20 of 20 steps.
- Waiting for ZooKeeper Service to initialize: Finished waiting
- Starting ZooKeeper Service: Completed 1/1 steps successfully
- Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty: Successfully formatted NameNode.
- Starting HDFS Service: Successfully started HDFS service
- Creating HDFS /tmp directory: HDFS directory /tmp already exists.
- Creating MR2 job history directory: Successfully created MR2 job history directory.
- Creating NodeManager remote application log directory: Successfully created NodeManager remote application log directory.
- Starting YARN (MR2 Included) Service: Successfully started service
- Creating Hive Metastore Database: Created Hive Metastore Database.
- Creating Hive Metastore Database Tables: Created Hive Metastore Database Tables successfully.
- Creating Hive user directory: Successfully created Hive user directory.
- Creating Hive warehouse directory: Successfully created Hive warehouse directory.
- Starting Hive Service: Service started successfully.
- Creating Oozie database: Oozie database created successfully.
- Installing Oozie ShareLib in HDFS: Successfully installed Oozie ShareLib
- Starting Oozie Service: Service started successfully.
- Creating Sqoop user directory: Successfully created Sqoop user directory.
- Starting Sqoop Service: Service started successfully.
- Starting Hue Service: Service started successfully.
Created 03-19-2014 09:06 AM
Hi,
Usually, when the database test is missing the server name, it's because there was a problem detecting the server name when your embedded PostgreSQL database was first started (i.e., hostname returned an error or an empty string). We've seen this happen especially when using a VPN on AWS machines.
The fix is to modify /etc/cloudera-scm-server/db.mgmt.properties to add your fully qualified domain name before :7432 wherever it appears in that file. Then, restart the CM server (sudo service cloudera-scm-server restart).
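For example, a minimal sketch of that edit, assuming the host portion is currently blank (entries ending in "=:7432") and using myhost.example.com as a placeholder for your real FQDN:

# back up the properties file, then fill in the FQDN wherever the host before :7432 is blank
sudo cp /etc/cloudera-scm-server/db.mgmt.properties /etc/cloudera-scm-server/db.mgmt.properties.bak
sudo sed -i 's/=:7432/=myhost.example.com:7432/g' /etc/cloudera-scm-server/db.mgmt.properties
sudo service cloudera-scm-server restart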
Thanks,
Darren
Created 03-19-2014 10:53 AM
Thanks Darren, this makes sense for two reasons. First, in the prior screen's test-database section, the host name before :7432 was missing, and I had to add it by temporarily selecting the custom database radio button, inserting the host name in front of the :7432, and then selecting the embedded database radio button again to complete the test connection before continuing. I should have figured that if the test screen didn't have the host, the installation screen would not either... 🙂
The only place I put the hostname is in the /etc/hosts file, so the names resolve on all of the servers in the cluster, but the host names are not in any domain name server. And of course the host name in /proc/sys/kernel/hostname is the same name I put into the /etc/hosts file.
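For reference, here is roughly what I'm checking on each host, plus the sort of /etc/hosts line I'm using (hadoopmngr.example.com is just an illustrative placeholder; my hosts aren't in DNS):

hostname                  # short name, same value as /proc/sys/kernel/hostname
hostname -f               # fully qualified name that CM tries to detect
getent hosts hadoopmngr   # confirm the name resolves via /etc/hosts on every node

# example /etc/hosts line, repeated on every host in the cluster
192.168.0.102   hadoopmngr.example.com   hadoopmngr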
I'll go ahead and add it to the db management properties file.
Thanks!
Carl
Created 03-30-2014 10:34 AM
Once I installed the manager on a single server, and then HDFS on all four servers as well as the YARN service, I then tried to install ZooKeeper, and now I receive the following error:
A server error has occurred. Send the following information to Cloudera.
Path: http://192.168.0.102:7180/cmf/clusters/1/add-service/review
Version: Cloudera Enterprise Data Hub Trial 5.0.0-beta-2 (#119 built by jenkins on 20140209-0301 git: 8acd3c5559ccf82bf374d49bbf00bce58dee286e)
This is annoying. It looks like ZooKeeper is trying to install the Hive manager and the Hive gateway services again...
Any help would be greatly appreciated.
Thanks!
Carl
Created 03-30-2014 12:00 PM
Okay, new question... I manually installed Hive and ZooKeeper. Now how do I see these services in the CDH manager?
[root@hadoopmngr carl]# ls
cloudera-cdh-5-0.x86_64.rpm
[root@hadoopmngr carl]# yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
Loaded plugins: fastestmirror, security
Setting up Local Package Process
Examining cloudera-cdh-5-0.x86_64.rpm: cloudera-cdh-5-0.x86_64
Marking cloudera-cdh-5-0.x86_64.rpm to be installed
Loading mirror speeds from cached hostfile
* base: ftp.linux.ncsu.edu
* extras: mirror.symnds.com
* updates: mirror.linux.duke.edu
Resolving Dependencies
--> Running transaction check
---> Package cloudera-cdh.x86_64 0:5-0 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===================================================================================================================================================================
Package Arch Version Repository Size
===================================================================================================================================================================
Installing:
cloudera-cdh x86_64 5-0 /cloudera-cdh-5-0.x86_64 13 k
Transaction Summary
===================================================================================================================================================================
Install 1 Package(s)
Total size: 13 k
Installed size: 13 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : cloudera-cdh-5-0.x86_64 1/1
Verifying : cloudera-cdh-5-0.x86_64 1/1
Installed:
cloudera-cdh.x86_64 0:5-0
Complete!
[root@hadoopmngr carl]# yum install hive
Loaded plugins: fastestmirror, security
Loading mirror speeds from cached hostfile
* base: ftp.linux.ncsu.edu
* extras: mirror.symnds.com
* updates: mirror.linux.duke.edu
cloudera-cdh5 | 951 B 00:00
cloudera-cdh5/primary | 40 kB 00:00
cloudera-cdh5 140/140
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package hive.noarch 0:0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6 will be installed
--> Processing Dependency: hive-jdbc = 0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6 for package: hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch
--> Processing Dependency: bigtop-utils >= 0.7 for package: hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch
--> Processing Dependency: avro-libs for package: hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch
--> Processing Dependency: zookeeper for package: hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch
--> Processing Dependency: hadoop-client for package: hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch
--> Processing Dependency: parquet for package: hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch
--> Running transaction check
---> Package avro-libs.noarch 0:1.7.5+cdh5.0.0+8-0.cdh5b2.p0.30.el6 will be installed
---> Package bigtop-utils.noarch 0:0.7.0+cdh5.0.0+0-0.cdh5b2.p0.30.el6 will be installed
---> Package hadoop-client.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 will be installed
--> Processing Dependency: hadoop-mapreduce = 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 for package: hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64
--> Processing Dependency: hadoop-yarn = 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 for package: hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64
--> Processing Dependency: hadoop-0.20-mapreduce = 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 for package: hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64
--> Processing Dependency: hadoop = 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 for package: hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64
--> Processing Dependency: hadoop-hdfs = 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 for package: hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64
---> Package hive-jdbc.noarch 0:0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6 will be installed
---> Package parquet.noarch 0:1.2.5+cdh5.0.0+29-0.cdh5b2.p0.20.el6 will be installed
--> Processing Dependency: parquet-format for package: parquet-1.2.5+cdh5.0.0+29-0.cdh5b2.p0.20.el6.noarch
---> Package zookeeper.x86_64 0:3.4.5+cdh5.0.0+27-0.cdh5b2.p0.29.el6 will be installed
--> Running transaction check
---> Package hadoop.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 will be installed
--> Processing Dependency: nc for package: hadoop-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64
---> Package hadoop-0.20-mapreduce.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 will be installed
---> Package hadoop-hdfs.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 will be installed
--> Processing Dependency: bigtop-jsvc for package: hadoop-hdfs-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64
---> Package hadoop-mapreduce.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 will be installed
---> Package hadoop-yarn.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 will be installed
---> Package parquet-format.noarch 0:1.0.0-1.cdh5b2.p0.31.el6 will be installed
--> Running transaction check
---> Package bigtop-jsvc.x86_64 0:0.6.0+cdh5.0.0+389-0.cdh5b2.p0.25.el6 will be installed
---> Package nc.x86_64 0:1.84-22.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================
Installing:
hive noarch 0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6 cloudera-cdh5 22 M
Installing for dependencies:
avro-libs noarch 1.7.5+cdh5.0.0+8-0.cdh5b2.p0.30.el6 cloudera-cdh5 12 M
bigtop-jsvc x86_64 0.6.0+cdh5.0.0+389-0.cdh5b2.p0.25.el6 cloudera-cdh5 27 k
bigtop-utils noarch 0.7.0+cdh5.0.0+0-0.cdh5b2.p0.30.el6 cloudera-cdh5 8.8 k
hadoop x86_64 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 cloudera-cdh5 19 M
hadoop-0.20-mapreduce x86_64 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 cloudera-cdh5 28 M
hadoop-client x86_64 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 cloudera-cdh5 25 k
hadoop-hdfs x86_64 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 cloudera-cdh5 14 M
hadoop-mapreduce x86_64 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 cloudera-cdh5 25 M
hadoop-yarn x86_64 2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 cloudera-cdh5 13 M
hive-jdbc noarch 0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6 cloudera-cdh5 15 M
nc x86_64 1.84-22.el6 base 57 k
parquet noarch 1.2.5+cdh5.0.0+29-0.cdh5b2.p0.20.el6 cloudera-cdh5 8.7 M
parquet-format noarch 1.0.0-1.cdh5b2.p0.31.el6 cloudera-cdh5 395 k
zookeeper x86_64 3.4.5+cdh5.0.0+27-0.cdh5b2.p0.29.el6 cloudera-cdh5 3.7 M
Transaction Summary
=================================================================================================================================================================================
Install 15 Package(s)
Total download size: 160 M
Installed size: 200 M
Is this ok [y/N]: y
Downloading Packages:
(1/15): avro-libs-1.7.5+cdh5.0.0+8-0.cdh5b2.p0.30.el6.noarch.rpm | 12 MB 00:05
(2/15): bigtop-jsvc-0.6.0+cdh5.0.0+389-0.cdh5b2.p0.25.el6.x86_64.rpm | 27 kB 00:00
(3/15): bigtop-utils-0.7.0+cdh5.0.0+0-0.cdh5b2.p0.30.el6.noarch.rpm | 8.8 kB 00:00
(4/15): hadoop-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64.rpm | 19 MB 00:13
(5/15): hadoop-0.20-mapreduce-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64.rpm | 28 MB 00:21
(6/15): hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64.rpm | 25 kB 00:00
(7/15): hadoop-hdfs-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64.rpm | 14 MB 00:08
(8/15): hadoop-mapreduce-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64.rpm | 25 MB 00:16
(9/15): hadoop-yarn-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64.rpm | 13 MB 00:07
(10/15): hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch.rpm | 22 MB 00:12
(11/15): hive-jdbc-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch.rpm | 15 MB 00:06
(12/15): nc-1.84-22.el6.x86_64.rpm | 57 kB 00:00
(13/15): parquet-1.2.5+cdh5.0.0+29-0.cdh5b2.p0.20.el6.noarch.rpm | 8.7 MB 00:06
(14/15): parquet-format-1.0.0-1.cdh5b2.p0.31.el6.noarch.rpm | 395 kB 00:00
(15/15): zookeeper-3.4.5+cdh5.0.0+27-0.cdh5b2.p0.29.el6.x86_64.rpm | 3.7 MB 00:03
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 1.5 MB/s | 160 MB 01:43
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : avro-libs-1.7.5+cdh5.0.0+8-0.cdh5b2.p0.30.el6.noarch 1/15
Installing : bigtop-utils-0.7.0+cdh5.0.0+0-0.cdh5b2.p0.30.el6.noarch 2/15
Installing : zookeeper-3.4.5+cdh5.0.0+27-0.cdh5b2.p0.29.el6.x86_64 3/15
Installing : bigtop-jsvc-0.6.0+cdh5.0.0+389-0.cdh5b2.p0.25.el6.x86_64 4/15
Installing : nc-1.84-22.el6.x86_64 5/15
Installing : parquet-1.2.5+cdh5.0.0+29-0.cdh5b2.p0.20.el6.noarch 6/15
Installing : hadoop-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 7/15
Installing : parquet-format-1.0.0-1.cdh5b2.p0.31.el6.noarch 8/15
Installing : hadoop-hdfs-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 9/15
Installing : hadoop-yarn-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 10/15
Installing : hadoop-mapreduce-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 11/15
Installing : hadoop-0.20-mapreduce-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 12/15
Installing : hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 13/15
Installing : hive-jdbc-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch 14/15
Installing : hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch 15/15
Verifying : bigtop-jsvc-0.6.0+cdh5.0.0+389-0.cdh5b2.p0.25.el6.x86_64 1/15
Verifying : hadoop-hdfs-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 2/15
Verifying : avro-libs-1.7.5+cdh5.0.0+8-0.cdh5b2.p0.30.el6.noarch 3/15
Verifying : parquet-1.2.5+cdh5.0.0+29-0.cdh5b2.p0.20.el6.noarch 4/15
Verifying : hive-jdbc-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch 5/15
Verifying : hadoop-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 6/15
Verifying : hadoop-0.20-mapreduce-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 7/15
Verifying : zookeeper-3.4.5+cdh5.0.0+27-0.cdh5b2.p0.29.el6.x86_64 8/15
Verifying : hadoop-mapreduce-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 9/15
Verifying : hadoop-yarn-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 10/15
Verifying : hadoop-client-2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6.x86_64 11/15
Verifying : hive-0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6.noarch 12/15
Verifying : parquet-format-1.0.0-1.cdh5b2.p0.31.el6.noarch 13/15
Verifying : nc-1.84-22.el6.x86_64 14/15
Verifying : bigtop-utils-0.7.0+cdh5.0.0+0-0.cdh5b2.p0.30.el6.noarch 15/15
Installed:
hive.noarch 0:0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6
Dependency Installed:
avro-libs.noarch 0:1.7.5+cdh5.0.0+8-0.cdh5b2.p0.30.el6 bigtop-jsvc.x86_64 0:0.6.0+cdh5.0.0+389-0.cdh5b2.p0.25.el6
bigtop-utils.noarch 0:0.7.0+cdh5.0.0+0-0.cdh5b2.p0.30.el6 hadoop.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6
hadoop-0.20-mapreduce.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 hadoop-client.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6
hadoop-hdfs.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 hadoop-mapreduce.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6
hadoop-yarn.x86_64 0:2.2.0+cdh5.0.0+1610-0.cdh5b2.p0.51.el6 hive-jdbc.noarch 0:0.12.0+cdh5.0.0+265-0.cdh5b2.p0.33.el6
nc.x86_64 0:1.84-22.el6 parquet.noarch 0:1.2.5+cdh5.0.0+29-0.cdh5b2.p0.20.el6
parquet-format.noarch 0:1.0.0-1.cdh5b2.p0.31.el6 zookeeper.x86_64 0:3.4.5+cdh5.0.0+27-0.cdh5b2.p0.29.el6
Complete!
[root@hadoopmngr carl]# service cloudera-scm-server restart
Stopping cloudera-scm-server: [ OK ]
Starting cloudera-scm-server: [ OK ]
Created 03-30-2014 09:18 PM
Hi,
It's not clear to me exactly what you were doing when you hit an error. Here's what I think you did, with questions where I'm not sure:
1) Installed CM and CDH binaries on all hosts
- Did you use packages or parcels to install CDH? I'm guessing parcels, otherwise your later CDH install by packages wouldn't have done anything.
2) Installed HDFS and YARN on the cluster (at the same time)
3) Did NOT complete the wizard. HDFS, YARN, and the Management service were NOT started.
4) Hit the back button and tried to add ZooKeeper as well, then hit the problem in AddServiceWizardController2.
5) Manually installed CDH binaries. You should not have done this, since it will conflict with whatever you did in step 1 for CDH binaries. If my assumptions are correct, you now have CDH installed as both parcels and packages, which will cause problems. You'll need to yum erase all of the packages you installed (see the sketch just below).
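For example, a rough sketch of checking for and removing the conflicting package install; the package names are taken from the yum transcript you posted, so adjust the list to whatever you actually installed:

# see whether CDH is present as packages, parcels, or both
rpm -qa | grep -i -e cdh -e hadoop -e hive -e zookeeper
ls /opt/cloudera/parcels   # parcel-based CDH typically lives here

# remove the package copy so only the parcel-managed CDH remains
sudo yum remove cloudera-cdh hive hive-jdbc hadoop hadoop-client hadoop-hdfs \
    hadoop-yarn hadoop-mapreduce hadoop-0.20-mapreduce zookeeper avro-libs \
    parquet parquet-format bigtop-jsvc bigtop-utils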
Keep in mind that installing binaries is completely separate from configuring services.
Assuming I'm correct, steps 3 and 4 probably left your cluster in a partially completed state. You can click the Cloudera Manager logo to get to the home page, delete your cluster from the dropdown menu to the right of the cluster name, then click the dropdown menu near the word Status and select the Add Cluster option to start over. If you have both parcels and packages installed on that cluster, you'll need to manually uninstall your packages.
To configure a service to run on an existing cluster, click the dropdown menu to the right of the cluster name and select Add Service. This will let you configure things like ZooKeeper or Hive to run on an existing cluster.
Thanks,
Darren
Created 03-31-2014 04:31 AM
Here's what I did.
1) Installed the .bin file (Path A) onto the manager server.
2) Used the web interface on http://192.168.0.102:7143 to add only HDFS to the manager and 3 other hosts.
3) Added YARN to the manager and 3 other hosts.
4) Added Hive to the manager and 3 hosts. All of the above steps were done through the Add Service dropdown.
5) TRIED to add ZooKeeper; after the PostgreSQL connection test passed, I clicked Continue and received the stack exception.
Created 03-31-2014 01:04 PM
Hi Darren, stopping the cluster, deleting it, and re-adding the cluster seemed to work, letting me add the parcels back in and start most of the services. However, Hive failed to start due to a user ID / password issue as follows:
Failed initialising database.
Unable to open a test connection to the given database. JDBC url = jdbc:postgresql://hadoopmngr:7432/hive1, username = hive1. Terminating connection pool. Original Exception: ------
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "hive1"
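For what it's worth, a direct psql test against the same host, port, database, and user should show whether the hive1 credentials themselves are bad (this is only a sketch; the password is whatever the Add Service wizard generated for hive1):

# test the same target that the Hive service's JDBC URL points at
psql -h hadoopmngr -p 7432 -U hive1 -d hive1
# if this also fails, reset the role's password from a PostgreSQL superuser session
# (ALTER USER hive1 WITH PASSWORD '...';) and update the Hive database settings in CM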
Created 03-31-2014 02:05 PM
Created 03-31-2014 04:41 PM
Thanks DIO, that did not work. However, Darren's suggestion of stopping the cluster, deleting the cluster, and adding the cluster back allowed me to select the base Hadoop services.
Now, however, during the format of HDFS I receive the following:
- Waiting for ZooKeeper Service to initialize: Finished waiting
- Starting ZooKeeper Service: Completed 1/1 steps successfully
- Checking if the name directories of the NameNode are empty. Formatting HDFS only if empty: Command (544) has failed
So I need to format HDFS, but I suppose the name directory is not empty? How do I empty it?
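I'm guessing something like the following on the NameNode host, assuming the default /dfs layout used earlier in this thread (checking the NameNode data directory setting in CM first before deleting anything):

# check what's in the default data root used earlier in this thread
ls /dfs                 # with the default layout this typically holds nn, dn, and snn subdirectories
# empty the NameNode name directory so the format step can proceed
sudo rm -rf /dfs/nn/*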