Member since: 09-07-2014
Posts: 25
Kudos Received: 2
Solutions: 0
01-08-2017
08:04 PM
Killing the process(es) helped. Thanks!
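(For anyone who hits the same hang, a minimal sketch of the kill-and-restart approach described here. The use of pkill and the gpadmin user are assumptions, so list the PIDs with ps and confirm they belong to the stuck segment before killing anything.)
ps -ef | grep postgres | grep -v grep   # identify the segment's lingering postgres processes
pkill -u gpadmin postgres               # SIGTERM first (assumes the segment runs as gpadmin)
# pkill -9 -u gpadmin postgres          # SIGKILL only as a last resort
hawq start segment                      # then bring the segment back up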
01-08-2017
07:51 PM
waiting for server to shut down............ failed
20170108:19:26:51:311785 hawq_stop:datanode01-dev:gpadmin-[INFO]:-pg_ctl: server does not shut down
20170108:19:26:51:311785 hawq_stop:datanode01-dev:gpadmin-[ERROR]:-Segment stop failed, exit
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Prepare to do 'hawq stop'
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-You can find log in:
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_stop_20170108.log
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-GPHOME is set to:
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-/usr/local/hawq/.
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[DEBUG]:-Current user is 'gpadmin'
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[DEBUG]:-Parsing config file:
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[DEBUG]:-/usr/local/hawq/./etc/hawq-site.xml
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Stop hawq with args: ['stop', 'segment']
20170108:19:36:55:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Stop hawq segment
waiting for server to shut down............ failed
20170108:19:46:56:313713 hawq_stop:datanode01-dev:gpadmin-[INFO]:-pg_ctl: server does not shut down
20170108:19:46:56:313713 hawq_stop:datanode01-dev:gpadmin-[ERROR]:-Segment stop failed, exit
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Prepare to do 'hawq stop'
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[INFO]:-You can find log in:
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_stop_20170108.log
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[INFO]:-GPHOME is set to:
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[INFO]:-/usr/local/hawq/.
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[DEBUG]:-Current user is 'gpadmin'
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[DEBUG]:-Parsing config file:
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[DEBUG]:-/usr/local/hawq/./etc/hawq-site.xml
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Stop hawq with args: ['stop', 'segment']
20170108:19:48:12:314792 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Stop hawq segment
waiting for server to shut down..........................................20170108:19:48:51:314926 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Prepare to do 'hawq stop'
20170108:19:48:51:314926 hawq_stop:datanode01-dev:gpadmin-[INFO]:-You can find log in:
20170108:19:48:51:314926 hawq_stop:datanode01-dev:gpadmin-[INFO]:-/home/gpadmin/hawqAdminLogs/hawq_stop_20170108.log
20170108:19:48:51:314926 hawq_stop:datanode01-dev:gpadmin-[INFO]:-GPHOME is set to:
20170108:19:48:51:314926 hawq_stop:datanode01-dev:gpadmin-[INFO]:-/usr/local/hawq/.
20170108:19:48:51:314926 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Stop hawq with args: ['stop', 'segment']
.......20170108:19:48:58:314926 hawq_stop:datanode01-dev:gpadmin-[INFO]:-Stop hawq segment
waiting for server to shut down............[root@datanode01-dev ~]#
01-08-2017
07:46 PM
I am trying to restart the HAWQ service using Ambari. However, for some reason the segments are not stopping, and the operation times out after 600 seconds. Is there a way to stop the segments and/or restart them from the terminal? I am using HDP 2.4.2 and HAWQ 2.0.1.
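(For reference, a hedged sketch of driving the segment stop/start from the terminal instead of Ambari. The hawq stop segment invocation is the same one Ambari runs, as the logs above show; the -M shutdown modes are an assumption carried over from HAWQ's Greenplum-style tooling, so confirm with hawq stop --help first.)
# as gpadmin on the segment host
hawq stop segment               # default smart shutdown (the one that times out here)
hawq stop segment -M fast       # assumption: aborts in-flight transactions, then stops
hawq stop segment -M immediate  # assumption: hard stop, last resort
hawq start segment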
12-12-2016
07:41 AM
1 Kudo
I intend to run a NiFi cluster separate from the HDF cluster. If I want to use the embedded ZooKeeper option for state management, do I need to manually install and start ZooKeeper on the nodes?
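(A hedged sketch of the relevant settings, assuming a stock NiFi layout; the property keys and file names are NiFi's standard ones, and the hostnames are placeholders. The embedded ZooKeeper ships inside NiFi and starts with it when enabled, so no separate install should be needed, but verify against your HDF version's docs.)
# nifi.properties — have NiFi start its embedded ZooKeeper
nifi.state.management.embedded.zookeeper.start=true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# conf/zookeeper.properties — one server.N entry per NiFi node, plus a matching
# myid file on each node in the ZooKeeper state directory
server.1=nifi-node1:2888:3888
server.2=nifi-node2:2888:3888
server.3=nifi-node3:2888:3888
# conf/state-management.xml — point the cluster state provider at the ensemble:
#   <property name="Connect String">nifi-node1:2181,nifi-node2:2181,nifi-node3:2181</property>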
12-10-2016
06:36 PM
Ambari 2.2.2. While installing through the UI, I see only 'Default' as the option.
12-10-2016
04:29 PM
Manually as in? I have tried modifying the repo files on the new node. However, Ambari overwrites them when installing the DataNode etc.
12-10-2016
01:32 AM
For some reason, the HDP.repo and HDP_UTILS.repo files have baseurl=http://ambariserverFQDN/yum/HDP... These are downloaded automatically when using Ambari to add a node. How do I change that to the regular Hortonworks repo? Even ambari.repo had the same issue, but I changed that on the Ambari server node, and now the corrected ambari.repo file is downloaded to the new node being added.
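(Editing the .repo files on the host gets overwritten because Ambari regenerates them from the base URLs stored in its stack definitions. A hedged sketch of changing them at the source instead, either in the UI under Admin > Stack and Versions > Versions > edit repositories, or via the REST API; the stack version, OS family, repo id, and public base URL below are illustrative and must match your cluster.)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"Repositories":{"base_url":"http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.4.2.0","verify_base_url":true}}' \
  http://<ambari-server>:8080/api/v1/stacks/HDP/versions/2.4/operating_systems/redhat6/repositories/HDP-2.4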
11-12-2016
12:06 AM
For me this solved the problem. I was initially making the cloud.cfg change on all the HAWQ hosts but skipping one of the master nodes, since it was not supposed to have any HAWQ components. However, I had configured HAWQ to use the YARN scheduler, and that node was the YARN master. This dependency was causing the problem described above.
04-07-2016
08:20 PM
Yes, I have tried that and it didn't help.
04-07-2016
07:55 PM
[Moving original question from the Hadoop Core track to Cloud & Operations] I created an image using the .ova file from HDP Sandbox 2.4 for VirtualBox. I am able to access the publicip:8888 link but not 8080. Upon checking the ambari-server logs, I get the following error:
05 Apr 2016 19:41:04,011 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:04,841 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:05,659 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:06,488 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:07,295 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:08,082 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:08,912 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:09,729 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:27,110 INFO [main] AmbariServer:173 - Found org/apache/ambari/server/controller/AmbariServer.class class in file:/usr/lib/ambari-server/ambari-server-2.2.1.0.161.jar!/org/apache/ambari/server/controller/AmbariServer.class
05 Apr 2016 19:41:27,163 INFO [main] Configuration:746 - Reading password from existing file
05 Apr 2016 19:41:27,171 INFO [main] Configuration:1076 - Hosts Mapping File null
05 Apr 2016 19:41:27,171 INFO [main] HostsMap:60 - Using hostsmap file null
05 Apr 2016 19:41:27,587 INFO [main] ControllerModule:193 - Detected POSTGRES as the database type from the JDBC URL
05 Apr 2016 19:41:29,042 INFO [main] ControllerModule:578 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.EmailDispatcher
05 Apr 2016 19:41:29,092 INFO [main] ControllerModule:578 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.SNMPDispatcher
05 Apr 2016 19:41:29,094 INFO [main] ControllerModule:578 - Binding and registering notification dispatcher class org.apache.ambari.server.notifications.dispatchers.AlertScriptDispatcher
05 Apr 2016 19:42:34,009 ERROR [main] DBAccessorImpl:102 - Error while creating database accessor
org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:207)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:138)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:21)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:31)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:410)
at org.postgresql.Driver.connect(Driver.java:280)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.ambari.server.orm.DBAccessorImpl.<init>(DBAccessorImpl.java:83)
at org.apache.ambari.server.orm.DBAccessorImpl$$FastClassByGuice$$86dbc63e.newInstance(&lt;generated&gt;)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:60)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54)
at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.Scopes$1$1.get(Scopes.java:65)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
at com.google.inject.internal.InjectionRequestProcessor$StaticInjection$1.call(InjectionRequestProcessor.java:116)
at com.google.inject.internal.InjectionRequestProcessor$StaticInjection$1.call(InjectionRequestProcessor.java:110)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
at com.google.inject.internal.InjectionRequestProcessor$StaticInjection.injectMembers(InjectionRequestProcessor.java:110)
at com.google.inject.internal.InjectionRequestProcessor.injectMembers(InjectionRequestProcessor.java:78)
at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:170)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:109)
at com.google.inject.Guice.createInjector(Guice.java:95)
at com.google.inject.Guice.createInjector(Guice.java:72)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.ambari.server.controller.AmbariServer.main(AmbariServer.java:803)
Caused by: java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at org.postgresql.core.PGStream.<init>(PGStream.java:60)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:101)
... 47 more
I notice that the postgres processes related to Ambari are not started:
ps -f -u postgres
UID PID PPID C STIME TTY TIME CMD
postgres 5414 1 0 19:36 ? 00:00:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres 5416 5414 0 19:36 ? 00:00:00 postgres: logger process
postgres 5418 5414 0 19:36 ? 00:00:00 postgres: writer process
postgres 5419 5414 0 19:36 ? 00:00:00 postgres: wal writer process
lsof -n -u postgres |grep LISTEN
postmaste 1532 postgres 3u IPv4 20222 0t0 TCP *:postgres (LISTEN)
postmaste 1532 postgres 4u IPv6 20223 0t0 TCP *:postgres (LISTEN)
How do I resolve this error?
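(A hedged diagnostic sketch: the lsof output shows postgres listening on *:postgres, yet the JDBC connect times out, which often means Ambari is resolving its database host to an address that is no longer reachable after the image was moved to EC2. The property file and commands below are stock Ambari/Linux defaults; adjust to your install.)
grep jdbc /etc/ambari-server/conf/ambari.properties   # where does Ambari think its DB lives?
netstat -tlnp | grep 5432                              # is postgres reachable on that host/port?
# if the JDBC host no longer resolves to this machine (hostname/IP changed on EC2),
# fix /etc/hosts or the JDBC URL, then:
ambari-server restart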
04-07-2016
03:33 AM
I am using the .ova file available from the Hortonworks website and trying to create an AMI on AWS EC2.
04-06-2016
07:12 PM
Everything is default. I just took the .ova file to create an image (AMI) and created an EC2 instance from it.
(edited to clarify AMI)
04-05-2016
09:48 PM
1 Kudo
Yes, in fact I also tried:
ambari-server stop
ambari-server setup
ambari-server start
04-05-2016
08:41 PM
I created an image using the .ova file from HDP Sandbox 2.4 for VirtualBox. I am able to access the publicip:8888 link but not 8080. Upon checking the ambari-server logs, I get the same DBAccessorImpl "Connection refused" PSQLException and postgres process listing quoted verbatim in the repost above. How do I resolve this error?
09-16-2014
06:13 PM
I have installed the CDH 5.1.2 Cloudera VM on VirtualBox and am aiming to integrate and use MongoDB with it. I read on the MongoDB page that they support only up to CDH4. Does that mean I have to use CDH4 only? Are there any resources available that can help me with that integration?
09-09-2014
10:57 AM
@Manikumar - Thanks for replying. This is my terminal:
[Piyush@redhatcdh4_1 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:BE:52:B7
TYPE=Ethernet
UUID=8bd6e426-9bad-4f0c-a098-9345e1d51bfa
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.31.217
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
[Piyush@redhatcdh4_1 ~]$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.31.2
nameserver 8.8.8.8
# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com
[Piyush@redhatcdh4_1 ~]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=redhatcdh4_1
GATEWAY=192.168.31.2
NTPSERVERARGS=iburst
[Piyush@redhatcdh4_1 ~]$ /etc/init.d/network/restart
bash: /etc/init.d/network/restart: Not a directory
[Piyush@redhatcdh4_1 ~]$ /etc/init.d/network restart
[Piyush@redhatcdh4_1 ~]$ ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:444 errors:0 dropped:0 overruns:0 frame:0
TX packets:444 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:35256 (34.4 KiB) TX bytes:35256 (34.4 KiB)
I do not have any network interface (like eth0) up on my RedHat VM, so I am unable to connect. Can you point out what is wrong?
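(Two hedged observations from the pasted output, not a definitive diagnosis. First, BROADCAST=192.168.1.255 does not match the 192.168.31.x/24 address; for IPADDR=192.168.31.217 with NETMASK=255.255.255.0 it should be 192.168.31.255. Second, on a copied or cloned RHEL 6 VM the NIC's MAC address often changes, and if HWADDR in ifcfg-eth0 no longer matches the real MAC, eth0 will not come up; udev may also have pinned the old MAC to the eth0 name.)
ifconfig -a                                    # list all NICs, even down ones, with their MACs
cat /etc/udev/rules.d/70-persistent-net.rules  # check for a stale MAC-to-eth0 mapping
# fix HWADDR (and BROADCAST) in ifcfg-eth0 to match, then:
ifup eth0
service network restart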
09-09-2014
10:36 AM
Should the broadcast and nameserver values be the same? You have the same value for RedHat but not for Ubuntu, hence my asking.
09-08-2014
09:39 AM
I am facing problems because the dynamic IP keeps changing, and I am unable to work with Cloudera Manager any more. Some people have suggested using a static IP. I tried that the other day but was unable to set it up properly. Can someone please list the steps to follow to set up a static IP for use by Cloudera Manager? I am using RedHat 6.5 64-bit on VMWare. It would also be helpful to know whether anything needs to be done if I already have CDH4 installed for a certain hostname and IP address.
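(For later readers, a rough outline of a static-IP setup on RHEL 6 under VMware, using this thread's addresses as illustrative values; adapt the IP, gateway, and DNS to your VMware network. Cloudera Manager resolves hosts by name, so if the address changes, /etc/hosts must be updated to keep the CDH hostname pointing at the new static IP.)
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.31.217
NETMASK=255.255.255.0
GATEWAY=192.168.31.2
DNS1=192.168.31.2
DNS2=8.8.8.8
# /etc/hosts — keep the CDH hostname resolving to the static address
192.168.31.217   redhatcdh4_1
# apply the change
service network restart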
09-08-2014
09:32 AM
@Manikumar: Can you please list the steps needed to set up a static IP? I tried doing that on another system, but somehow ended up not being able to use the internet any more. It would be helpful if someone using Cloudera could list the necessary steps to set up a static IP.
09-08-2014
12:10 AM
The problem seems to be:
Unable to issue query: request to the Service Monitor timed out
Internal error while querying the Service Monitor
That is, CM is unable to determine the health. Apart from some warnings (like a minimum of 3 nodes suggested, and 1 GB for each node), I don't see any major health issues. [Screenshot attached to the original post.]
09-07-2014
11:32 PM
I ran jps and the following is my output. Should I kill some of the processes?
[root@rhcdh ~]# /usr/java/jdk1.6.0_31/bin/jps
4783 JobTracker
4978 Bootstrap
5251 Bootstrap
5114 AlertPublisher
4511 HMaster
5135 Main
5085 Main
5154 EventCatcherService
4137 QuorumPeerMain
3667 Main
4174 NameNode
5190 NavigatorMain
5666
4200 SecondaryNameNode
5416 RunJar
5653
5059 Main
5989 RunJar
4184 DataNode
5175 HeadlampServer
4459 HRegionServer
7370 Jps
4755 TaskTracker
09-07-2014
11:28 PM
Yes, I already did that. After doing that, it showed that all the services were being started. However, only some of the services, viz. Hive, Hue, Oozie, and Sqoop, are now shown with green health status. The status of the other services is still unknown. Is there any other method to check whether the services are running? Let me try jps in the terminal.
09-07-2014
11:03 PM
All the services started successfully during the 'First Run'. However, since my RedHat 6.5 VM in VMWare got too slow, I had to shut it down forcefully and restart. After that I restarted my Cloudera Manager, but I am unable to start any services. It tries to start ZooKeeper first, and the start command fails with a timeout error after 150 seconds. What might be my possible mess-ups?
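(A few hedged things to check after a forced shutdown, since stale state is the usual culprit; the paths below are CDH/CM defaults and may differ on your install.)
echo ruok | nc localhost 2181            # a healthy ZooKeeper answers "imok"
tail -n 100 /var/log/zookeeper/*.log     # the real failure reason is usually in the ZK server log
ls /var/run/cloudera-scm-agent/process/  # stale supervisor process dirs left by the hard stop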