Member since: 07-11-2017
Posts: 42
Kudos Received: 1
Solutions: 0
02-25-2019
01:31 PM
Hi, We are trying to upgrade our cluster from CDH 5.9 to CDH 6.1. We use Sqoop2 on 5.9, but since Sqoop2 is deprecated in CDH 6.1, how can we migrate our Sqoop2 metastore data to the upgraded CDH 6.1 cluster? Did anyone try this out? It would be of great help if anyone could shed some light on this. Thanks
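In case it helps frame an answer: one route we are considering is dumping the Sqoop2 repository to a file before the upgrade, while the Sqoop2 server packages are still installed, and then recreating the jobs by hand as Sqoop1 commands on CDH 6.1. A sketch only, assuming the bundled Sqoop2 (1.99.x) ships the repositorydump tool:

# dump Sqoop2 links/jobs/submissions to JSON before Sqoop2 is removed
sqoop2-tool repositorydump -o /tmp/sqoop2-repository.json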
02-19-2019
02:44 PM
Hi, Our queries that run for more than 1 minute through a JDBC connection are failing with the query state "session closed", and the application reports "ThriftTransportException" or "SQLException Cant retrieve next row". It would be great if someone could shed some light on this. We are using CDH 5.9. Thanks
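One more data point for anyone answering: the cutoff is suspiciously close to 60 seconds, so I am checking both the Impala idle timeouts and any load balancer sitting between the application and Impala. A sketch of what I am looking at (flag names are from the impalad documentation; the values are examples only):

# impalad startup flags (set via CM > Impala > Impala Daemon command-line
# argument advanced configuration snippet); 0 disables the cleanup
--idle_session_timeout=0
--idle_query_timeout=0
# if a proxy (e.g. haproxy) fronts Impala, its client/server timeouts often
# default to about a minute and would also close long-running sessions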
02-19-2019
02:32 PM
Hi, We are facing the same issue: our queries that run for more than 1 minute through a JDBC connection are failing with the query state "session closed", and the application reports "ThriftTransportException" or "SQLException Cant retrieve next row". Did you figure out the fix for this? It would be great if you could shed some light on it. Thanks
08-06-2018
08:25 PM
Hi, Did anyone try to connect to Cloudera HBase for data inserts/updates from a local Windows RStudio? Could anyone provide the steps or documentation to follow for this? Thanks, Renuka
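One route I am exploring in the meantime, in case it helps frame an answer: going through the HBase REST gateway instead of the native client, since plain HTTP calls work from any Windows tool (RStudio included). A sketch; the host, port, and table are placeholders, and it assumes a REST server role running on a node reachable from Windows:

# start the HBase REST server on a gateway node (if not already managed by CM)
hbase-daemon.sh start rest -p 20550

# insert/update one cell: row "row1", column "cf:col1", value "value1"
# (the row key, column, and value in the JSON body are base64-encoded)
curl -X PUT -H "Content-Type: application/json" \
  -d '{"Row":[{"key":"cm93MQ==","Cell":[{"column":"Y2Y6Y29sMQ==","$":"dmFsdWUx"}]}]}' \
  "http://edge-node.example.com:20550/mytable/row1"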
06-15-2018
08:15 AM
Hi @manuroman, We get this error sometimes, and other times it does not show up for the same query. What might be the issue? Any idea? Thanks, Renuka
06-13-2018
11:35 AM
Hi, Our cluster is complaining about DataNode directory space. It shows the error below in Cloudera Manager:

Bad : The following DataNode Data Directory are on filesystems with less than 5.0 GiB of their space free. /mnt/hdfs/7/dfs/dn (free: 3.9 GiB (98.27%), capacity: 4.0 GiB)

When we checked the space on the disk, only 1% of it is used:

/dev/s 3.7T 17G 3.7T 1% /mnt/hdfs/7

We are not able to figure out the root cause, as free space is not the issue here. Can anyone shed some light on this? Thanks, Renuka
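What puzzles me is that the alert reports a capacity of only 4.0 GiB while the filesystem is 3.7T, so the DataNode seems to be looking at a different filesystem than we expect. This is what I plan to compare (a sketch; assumes shell access on the DataNode host, and findmnt being available):

# OS view of the filesystem that actually backs the data directory
df -h /mnt/hdfs/7/dfs/dn
findmnt -T /mnt/hdfs/7/dfs/dn   # did the mount silently fall back to another volume?
# space the DataNode subtracts from capacity for non-HDFS use
hdfs getconf -confKey dfs.datanode.du.reserved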
06-07-2018
11:37 AM
Hi @manuroman These are the settings used in R for connecting to Impala, but I am still getting the same error:

library("rJava")
# note: these heap parameters only take effect if no JVM has been initialized yet
.jinit(parameters = c("-Xms6g", "-Xmx20g"))
library("RJDBC")

Do I need to change the settings somewhere else, or within R? Thanks, Renuka K
06-07-2018
07:36 AM
Hi, When we query Impala from a dashboard through a JDBC connection, the query fails on the first run (which we can see on the Cloudera Manager Impala queries page), and Impala automatically retries the query and returns data to the dashboard on the second run. Because of this the dashboard is becoming slow, as each query takes twice its normal time. Could anyone shed some light on the root cause of this and a solution for it? Thanks, Renuka
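To narrow this down I am planning to reproduce it outside the dashboard (a sketch; the host and table names are placeholders):

# run the same statement from impala-shell on an edge node and see if the
# first attempt fails there too
impala-shell -i impalad-host.example.com -q 'select count(*) from mytable'
# in an interactive impala-shell session, running "profile;" right after a
# query prints per-host details of where that attempt failed
impala-shell -i impalad-host.example.com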
06-07-2018
07:30 AM
Hi, When we query Impala over a JDBC connection from R or from a dashboard, it throws this error quite often. Error message:

org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:489)
at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:476)
at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:225)
at info.urbanek.Rpackage.RJDBC.JDBCResultPull.fetch(JDBCResultPull.java:77)
Error in .jcall(rp, "I", "fetch", stride, block) :
java.sql.SQLException: Error retrieving next row

Sometimes it works fine; other times it throws the TTransportException. Did anyone face this issue? Could anyone help me with the root cause and what needs to be done in this case? Thanks, Renuka
06-07-2018
07:23 AM
Hi, No, I did not proceed. Did it work for you? Thanks
01-25-2018
01:03 PM
Hi, We are using simple authentication on our Cloudera cluster. We want to secure the Hive tables and data, and for that we want to try Sentry. Is it advisable to use Sentry at this point? If yes, what are the effects of using Sentry without Kerberos? When we enable Sentry, what happens to the already existing tables and databases? Any suggestions? Did anyone try doing this before? Thanks, Renuka K
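For context, what I have found so far: Sentry normally expects Kerberos, and on a cluster with simple authentication it appears to be usable only in a testing mode that is not meant for production. A sketch of the property I believe is involved (this is an assumption on my part, not something we have validated):

# HiveServer2 hive-site.xml safety valve (sketch; testing mode only,
# explicitly not recommended for production clusters)
sentry.hive.testing.mode=true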
10-16-2017
07:24 AM
Hi, My destination cluster has edge nodes, so the export is trying to connect to the edge node on port 8020 instead of the NameNode (8020 is the NameNode HDFS port). Does this export work when I connect to the edge node rather than the NameNode? The NameNode can only be reached from the edge node. Thanks, Renuka
10-11-2017
02:33 PM
Hi, I have created a snapshot of an HBase table and cloned it, but when I try to export it to another cluster I get "connection refused". I am not running the command below as the hbase user. How can I change to the hbase user, and will that help with this command execution?

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot tbl_snapshot -copy-to hdfs://<cluster2>:8020/hbase -mappers 16

Error:

Exception in thread "main" java.net.ConnectException: Call From cluster1 to cluster2:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1470)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:757)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1214)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1210)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1210)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1409)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:895)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:1024)
at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1028)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:708)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
at org.apache.hadoop.ipc.Client.call(Client.java:1442)
... 21 more

Could anyone help me with this? Thanks
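Two things I am going to try, in case others hit the same error (a sketch; the bracketed host names are placeholders, as above):

# run the export as the hbase user, so HDFS permissions on /hbase are ruled out
sudo -u hbase hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot tbl_snapshot -copy-to hdfs://<cluster2>:8020/hbase -mappers 16

# confirm that the host in -copy-to really is the destination's active
# NameNode RPC address, not an edge node
ssh <cluster2-edge-node> 'hdfs getconf -confKey fs.defaultFS'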
10-05-2017
09:11 AM
Hi, I am getting the error below in Hive when I try to update and delete. I am using Hive 1.1.0 on CDH 5.12.

FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

I created the table stored in ORC format, with no bucketing, and with transactional set to true. Please let me know which properties need to be added or modified to make this work. Thanks, Renuka
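For reference, these are the prerequisites I have pieced together from the upstream Hive 1.x documentation; a sketch only, since I have not confirmed that the Hive build in CDH supports ACID at all (the table and column names are placeholders):

# upstream Hive 1.x ACID prerequisites (unverified on CDH 5.12)
hive -e "
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
CREATE TABLE t (id INT, val STRING)
CLUSTERED BY (id) INTO 4 BUCKETS   -- ACID in Hive 1.x also requires bucketing
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
"
# the metastore side also needs compaction enabled
# (hive.compactor.initiator.on=true, hive.compactor.worker.threads > 0)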
09-06-2017
12:44 PM
For all jobs, the log looks the same whether the job fails or succeeds.

stdout:
Log Type: stdout
Log Upload Time: Wed Sep 06 20:40:25 +0100 2017
Log Length: 0

stderr:
Log Type: stderr
Log Upload Time: Wed Sep 06 20:40:25 +0100 2017
Log Length: 2767
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.12.0-1.cdh5.12.0.p0.29/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mnt/hdfs/7/yarn/nm/filecache/251/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Sep 06, 2017 8:39:40 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
Sep 06, 2017 8:39:40 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Sep 06, 2017 8:39:40 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Sep 06, 2017 8:39:40 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
Sep 06, 2017 8:39:40 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Sep 06, 2017 8:39:40 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Sep 06, 2017 8:39:40 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Sep 06, 2017 8:39:41 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
09-06-2017
12:27 PM
Hi, We tried to upgrade from 5.9 to 5.12 and Oozie is not working. Log files are not generated in /var/log/oozie, and in the Oozie workflow stdout and stderr do not show any information. The Sqoop job in the workflow runs, but the Hive and shell script actions are not working. The scripts work when run from the Hive shell, but not from the Oozie workflow. Could anyone point me to what the issue might be? Thanks, Renuka
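Basic checks I have already run, for anyone answering (a sketch; the Oozie URL and job ID are placeholders):

# is the Oozie server itself healthy?
oozie admin -oozie http://oozie-host.example.com:11000/oozie -status
# pull the launcher log for one of the stuck workflow actions
oozie job -oozie http://oozie-host.example.com:11000/oozie -log 0000001-170906000000000-oozie-oozi-W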
09-01-2017
05:44 AM
Hi, Setting those values did not help. CDH/Hive: hive-common-1.1.0-cdh5.12.0.jar
08-31-2017
03:28 PM
Hi, When I try to run a Hive query I get the error below. Could anyone tell me what the issue might be?

Total jobs = 11
Launching Job 1 out of 11
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
java.lang.RuntimeException: Error caching map.xml: java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:757)
at org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:692)
at org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:684)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:370)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79)
Caused by: java.io.InterruptedIOException: Call interrupted
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1439)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:260)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3113)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3080)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1001)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:997)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:997)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:989)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1970)
at org.apache.hadoop.hive.ql.exec.Utilities.setPlanPath(Utilities.java:775)
at org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:701)
... 7 more
Job Submission failed with exception 'java.lang.RuntimeException(Error caching map.xml: java.io.InterruptedIOException: Call interrupted)'
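From the trace, the failure happens while Hive caches the query plan (map.xml) into the HDFS scratch directory: the mkdirs RPC to the NameNode is interrupted. These are the basic checks I am running (a sketch; it assumes the default scratch directory /tmp/hive and shell access on the gateway host):

# can the querying user create and remove a directory under the scratch dir?
sudo -u hive hdfs dfs -mkdir -p /tmp/hive/plan-test
sudo -u hive hdfs dfs -rm -r /tmp/hive/plan-test
# is the NameNode responsive around the time the query is submitted?
hdfs dfsadmin -report | head -20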
08-29-2017
09:01 AM
Hi, I have the same issue. Can you tell me how you were able to solve it? Thanks, Renuka
08-15-2017
03:23 PM
Hi, We did not stop the sqoop-metastore server on the node before a reboot, and now we are not able to connect to the Sqoop metastore. We get the error below when trying to look at the list of Sqoop jobs:

17/08/15 17:01:48 ERROR tool.JobTool: I/O error performing job operation: java.io.IOException: Exception creating SQL connection
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.init(HsqldbJobStorage.java:216)
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.open(HsqldbJobStorage.java:161)
at org.apache.sqoop.tool.JobTool.run(JobTool.java:259)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: java.sql.SQLException: socket creation error
at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
at org.hsqldb.jdbc.jdbcConnection.<init>(Unknown Source)
at org.hsqldb.jdbcDriver.getConnection(Unknown Source)
at org.hsqldb.jdbcDriver.connect(Unknown Source)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.init(HsqldbJobStorage.java:174)
... 8 more

I tried to start the metastore server with:

/usr/bin/sqoop-metastore start

But when I run this and then try to look at the jobs, I get the same error. In the .sqoop/ folder the two files below are created:

cat shared-metastore.db.log
CREATE USER SA PASSWORD "" ADMIN

cat shared-metastore.db.properties
#HSQL Database Engine 1.8.0.10
#Tue Aug 15 17:00:28 CDT 2017
hsqldb.script_format=0
runtime.gc_interval=0
sql.enforce_strict_size=false
hsqldb.cache_size_scale=8
readonly=false
hsqldb.nio_data_file=true
hsqldb.cache_scale=14
version=1.8.0
hsqldb.default_table_type=memory
hsqldb.cache_file_scale=1
hsqldb.log_size=200
modified=yes
hsqldb.cache_version=1.7.0
hsqldb.original_version=1.8.0
hsqldb.compatible_version=1.8.0

I also see that in /tmp/ I cannot find the /sqoop-metastore/shared.db folders (the default path for the metastore files). Does anyone know the steps to follow to reconnect to the Sqoop metastore? Thanks
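A detail that may help: the client connects to the metastore over TCP, so I am checking whether anything is actually listening after the "start" (a sketch; 16000 is the default sqoop.metastore.server.port, and the paths assume a standard CDH layout):

# is the metastore actually listening after sqoop-metastore start?
netstat -tlnp | grep 16000
# where the server keeps its HSQLDB files (sqoop.metastore.server.location)
grep -A2 'sqoop.metastore.server' /etc/sqoop/conf/sqoop-site.xml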
08-15-2017
03:12 PM
Solved by uninstalling the OpenJDK packages on the nodes where the activation was frozen.
08-11-2017
06:56 AM
1 Kudo
Hi, I removed 2 nodes from the cluster and tried to add them back from managed hosts; the download and distribute steps succeed, but it gets stuck at activation. The 2 nodes were still added to the cluster and show up under Hosts, but the CDH version is NONE (host in bad health).

Host Agent status:
This host is in contact with the Cloudera Manager Server.
This host is not in contact with the Host Monitor.

We are not able to run any services on these nodes and get connection refused on all ports except 7180. Error in the cloudera-scm-agent log:

Caught unexpected exception in main loop.
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 758, in start
self._init_after_first_heartbeat_response(resp_data)
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 938, in _init_after_first_heartbeat_response
self.client_configs.load()
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 686, in load
new_deployed.update(self._lookup_alternatives(fname))
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 432, in _lookup_alternatives
return self._parse_alternatives(alt_name, out)
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 444, in _parse_alternatives
path, _, _, priority_str = line.rstrip().split(" ")
ValueError: too many values to unpack

Could you please help me with this?
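A detail that may help whoever answers: the traceback shows the agent parsing the system alternatives database and failing on any line that does not split into exactly four space-separated fields (for example, a path containing a space). This is how I am scanning for a broken entry (a sketch, assuming RHEL-style alternatives under /var/lib/alternatives):

# flag alternatives whose "path - priority N" lines do not have exactly
# 4 fields -- such lines break the agent's line.rstrip().split(" ")
for name in $(ls /var/lib/alternatives); do
  /usr/sbin/alternatives --display "$name" 2>/dev/null \
    | awk -v n="$name" '/ - priority / && NF != 4 {print n ": " $0}'
done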
08-10-2017
01:57 PM
Hi, I removed 2 nodes from the cluster and tried to add them back from managed hosts; the download and distribute steps succeed, but it gets stuck at activation. The 2 nodes were still added to the cluster and show up under Hosts, but the CDH version is NONE (host in bad health).

Host Agent status:
This host is in contact with the Cloudera Manager Server.
This host is not in contact with the Host Monitor.

We are not able to run any services on these nodes and get connection refused on all ports except 7180. Error in the cloudera-scm-agent log:

Caught unexpected exception in main loop.
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 758, in start
self._init_after_first_heartbeat_response(resp_data)
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 938, in _init_after_first_heartbeat_response
self.client_configs.load()
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 686, in load
new_deployed.update(self._lookup_alternatives(fname))
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 432, in _lookup_alternatives
return self._parse_alternatives(alt_name, out)
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 444, in _parse_alternatives
path, _, _, priority_str = line.rstrip().split(" ")
ValueError: too many values to unpack

Could anyone help me with the steps that need to be followed to solve this issue? Thanks
08-10-2017
07:56 AM
We are facing the same issue. What steps did you follow to install the host again?
08-09-2017
09:24 PM
The error below is thrown on the node, in the agent logs:

Caught unexpected exception in main loop.
Traceback (most recent call last):
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 758, in start
    self._init_after_first_heartbeat_response(resp_data)
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 938, in _init_after_first_heartbeat_response
    self.client_configs.load()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 686, in load
    new_deployed.update(self._lookup_alternatives(fname))
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 432, in _lookup_alternatives
    return self._parse_alternatives(alt_name, out)
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 444, in _parse_alternatives
    path, _, _, priority_str = line.rstrip().split(" ")
ValueError: too many values to unpack

How can this be fixed?
08-09-2017
09:10 PM
We have 2 edge nodes whose health was bad and whose services were not running, so we removed them from the cluster and tried to add them back as new hosts. While doing that, the process gets stuck at the activation step, but the nodes do get added to the cluster. Now we see the CDH version as NONE, we are not able to start any service on these 2 edge nodes, and they show bad health even though we are seeing a heartbeat; the hosts are not in contact with the Host Monitor. Could you please help me get the two nodes working?
08-09-2017
09:07 PM
We have 2 edge nodes whose health was bad and whose services were not running, so we removed them from the cluster and tried to add them back as new hosts. While doing that, the process gets stuck at the activation step, but the nodes do get added to the cluster. Now we see the CDH version as NONE, we are not able to start any service on these 2 edge nodes, and they show bad health even though we are seeing a heartbeat. Could you please help me get the two nodes working?
08-04-2017
08:23 AM
Hi, I am trying to connect to an HBase table from a Java application running on a Windows machine. I can establish a connection to the table through ZooKeeper running on the edge node and get the region server IP address from the hbase:meta table. The client application then tries to connect directly to the region server (a data node) from the Windows machine and throws the exceptions below. Because of the edge node setup we cannot connect from Windows to the data nodes directly, and the edge node and the other nodes (master and data nodes) are on different networks. How can this be resolved?

17/08/04 09:18:34 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:67)
at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
at com.seagate.dars.imagedefectanalysis.dao.HBaseDao.saveFiles2FS(HBaseDao.java:138)
at com.seagate.dars.imagedefectanalysis.ImageDefectAnalysis.main(ImageDefectAnalysis.java:40)
17/08/04 09:18:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/08/04 09:18:34 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0xba8d91c connecting to ZooKeeper ensemble=en01.com:2181
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:host.name=NRM-UabcL003.ad.seagate.com
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_112
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:java.home=C:\Users\abc\AppData\Local\MyEclipse 2017 CI\binary\com.sun.java.jdk8.win32.x86_64_1.8.0.v112\jre
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:java.class.path=C:\Users\abc\renuka\Workspaces\MyEclipse 2017 CI\ImageDefectAnalysis_original\target\classes;C:\Users\abc\.m2\repository\org\apache\hbase\hbase-client\1.0.0\hbase-client-1.0.0.jar;C:\Users\abc\.m2\repository\org\apache\hbase\hbase-annotations\1.0.0\hbase-annotations-1.0.0.jar;C:\Users\abc\AppData\Local\MyEclipse 2017
CI\binary\com.sun.java.jdk8.win32.x86_64_1.8.0.v112\lib\tools.jar;C:\Users\abc\.m2\repository\org\apache\hbase\hbase-common\1.0.0\hbase-common-1.0.0.jar;C:\Users\abc\.m2\repository\org\apache\hbase\hbase-protocol\1.0.0\hbase-protocol-1.0.0.jar;C:\Users\abc\.m2\repository\commons-codec\commons-codec\1.9\commons-codec-1.9.jar;C:\Users\abc\.m2\repository\commons-io\commons-io\2.4\commons-io-2.4.jar;C:\Users\abc\.m2\repository\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;C:\Users\abc\.m2\repository\commons-logging\commons-logging\1.2\commons-logging-1.2.jar;C:\Users\abc\.m2\repository\com\google\guava\guava\12.0.1\guava-12.0.1.jar;C:\Users\abc\.m2\repository\com\google\protobuf\protobuf-java\2.5.0\protobuf-java-2.5.0.jar;C:\Users\abc\.m2\repository\io\netty\netty-all\4.0.23.Final\netty-all-4.0.23.Final.jar;C:\Users\abc\.m2\repository\org\apache\zookeeper\zookeeper\3.4.6\zookeeper-3.4.6.jar;C:\Users\abc\.m2\repository\org\apache\htrace\htrace-core\3.1.0-incubating\htrace-core-3.1.0-incubating.jar;C:\Users\abc\.m2\repository\org\codehaus\jackson\jackson-mapper-asl\1.8.8\jackson-mapper-asl-1.8.8.jar;C:\Users\abc\.m2\repository\org\jruby\jcodings\jcodings\1.0.8\jcodings-1.0.8.jar;C:\Users\abc\.m2\repository\org\jruby\joni\joni\2.1.2\joni-2.1.2.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-auth\2.5.1\hadoop-auth-2.5.1.jar;C:\Users\abc\.m2\repository\org\apache\httpcomponents\httpclient\4.2.5\httpclient-4.2.5.jar;C:\Users\abc\.m2\repository\org\apache\directory\server\apacheds-kerberos-codec\2.0.0-M15\apacheds-kerberos-codec-2.0.0-M15.jar;C:\Users\abc\.m2\repository\org\apache\directory\server\apacheds-i18n\2.0.0-M15\apacheds-i18n-2.0.0-M15.jar;C:\Users\abc\.m2\repository\org\apache\directory\api\api-asn1-api\1.0.0-M20\api-asn1-api-1.0.0-M20.jar;C:\Users\abc\.m2\repository\org\apache\directory\api\api-util\1.0.0-M20\api-util-1.0.0-M20.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-mapreduce-client-core\2.5.1\hadoop-mapreduce-client-core-2.5.1.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-yarn-common\2.5.1\hadoop-yarn-common-2.5.1.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-yarn-api\2.5.1\hadoop-yarn-api-2.5.1.jar;C:\Users\abc\.m2\repository\javax\xml\bind\jaxb-api\2.2.2\jaxb-api-2.2.2.jar;C:\Users\abc\.m2\repository\javax\xml\stream\stax-api\1.0-2\stax-api-1.0-2.jar;C:\Users\abc\.m2\repository\io\netty\netty\3.6.2.Final\netty-3.6.2.Final.jar;C:\Users\abc\.m2\repository\com\github\stephenc\findbugs\findbugs-annotations\1.3.9-1\findbugs-annotations-1.3.9-1.jar;C:\Users\abc\.m2\repository\junit\junit\4.11\junit-4.11.jar;C:\Users\abc\.m2\repository\org\hamcrest\hamcrest-core\1.3\hamcrest-core-1.3.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-hdfs\2.7.1\hadoop-hdfs-2.7.1.jar;C:\Users\abc\.m2\repository\org\mortbay\jetty\jetty\6.1.26\jetty-6.1.26.jar;C:\Users\abc\.m2\repository\org\mortbay\jetty\jetty-util\6.1.26\jetty-util-6.1.26.jar;C:\Users\abc\.m2\repository\com\sun\jersey\jersey-core\1.9\jersey-core-1.9.jar;C:\Users\abc\.m2\repository\com\sun\jersey\jersey-server\1.9\jersey-server-1.9.jar;C:\Users\abc\.m2\repository\asm\asm\3.1\asm-3.1.jar;C:\Users\abc\.m2\repository\commons-cli\commons-cli\1.2\commons-cli-1.2.jar;C:\Users\abc\.m2\repository\commons-daemon\commons-daemon\1.0.13\commons-daemon-1.0.13.jar;C:\Users\abc\.m2\repository\log4j\log4j\1.2.17\log4j-1.2.17.jar;C:\Users\abc\.m2\repository\javax\servlet\servlet-api\2.5\servlet-api-2.5.jar;C:\Users\abc\.m2\repository\org\codehaus\jackson\jackson-core-asl\1.9.13\jackson-core-asl-1.9.13.ja
r;C:\Users\abc\.m2\repository\xmlenc\xmlenc\0.52\xmlenc-0.52.jar;C:\Users\abc\.m2\repository\xerces\xercesImpl\2.9.1\xercesImpl-2.9.1.jar;C:\Users\abc\.m2\repository\xml-apis\xml-apis\1.3.04\xml-apis-1.3.04.jar;C:\Users\abc\.m2\repository\org\fusesource\leveldbjni\leveldbjni-all\1.8\leveldbjni-all-1.8.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-common\2.7.1\hadoop-common-2.7.1.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-annotations\2.7.1\hadoop-annotations-2.7.1.jar;C:\Users\abc\.m2\repository\org\apache\commons\commons-math3\3.1.1\commons-math3-3.1.1.jar;C:\Users\abc\.m2\repository\commons-httpclient\commons-httpclient\3.1\commons-httpclient-3.1.jar;C:\Users\abc\.m2\repository\commons-net\commons-net\3.1\commons-net-3.1.jar;C:\Users\abc\.m2\repository\commons-collections\commons-collections\3.2.1\commons-collections-3.2.1.jar;C:\Users\abc\.m2\repository\javax\servlet\jsp\jsp-api\2.1\jsp-api-2.1.jar;C:\Users\abc\.m2\repository\com\sun\jersey\jersey-json\1.9\jersey-json-1.9.jar;C:\Users\abc\.m2\repository\org\codehaus\jettison\jettison\1.1\jettison-1.1.jar;C:\Users\abc\.m2\repository\com\sun\xml\bind\jaxb-impl\2.2.3-1\jaxb-impl-2.2.3-1.jar;C:\Users\abc\.m2\repository\org\codehaus\jackson\jackson-jaxrs\1.8.3\jackson-jaxrs-1.8.3.jar;C:\Users\abc\.m2\repository\org\codehaus\jackson\jackson-xc\1.8.3\jackson-xc-1.8.3.jar;C:\Users\abc\.m2\repository\net\java\dev\jets3t\jets3t\0.9.0\jets3t-0.9.0.jar;C:\Users\abc\.m2\repository\org\apache\httpcomponents\httpcore\4.1.2\httpcore-4.1.2.jar;C:\Users\abc\.m2\repository\com\jamesmurty\utils\java-xmlbuilder\0.4\java-xmlbuilder-0.4.jar;C:\Users\abc\.m2\repository\commons-configuration\commons-configuration\1.6\commons-configuration-1.6.jar;C:\Users\abc\.m2\repository\commons-digester\commons-digester\1.8\commons-digester-1.8.jar;C:\Users\abc\.m2\repository\commons-beanutils\commons-beanutils\1.7.0\commons-beanutils-1.7.0.jar;C:\Users\abc\.m2\repository\commons-beanutils\commons-beanutils-core\1.8.0\commons-beanutils-core-1.8.0.jar;C:\Users\abc\.m2\repository\org\slf4j\slf4j-api\1.7.10\slf4j-api-1.7.10.jar;C:\Users\abc\.m2\repository\org\slf4j\slf4j-log4j12\1.7.10\slf4j-log4j12-1.7.10.jar;C:\Users\abc\.m2\repository\org\apache\avro\avro\1.7.4\avro-1.7.4.jar;C:\Users\abc\.m2\repository\com\thoughtworks\paranamer\paranamer\2.3\paranamer-2.3.jar;C:\Users\abc\.m2\repository\org\xerial\snappy\snappy-java\1.0.4.1\snappy-java-1.0.4.1.jar;C:\Users\abc\.m2\repository\com\google\code\gson\gson\2.2.4\gson-2.2.4.jar;C:\Users\abc\.m2\repository\com\jcraft\jsch\0.1.42\jsch-0.1.42.jar;C:\Users\abc\.m2\repository\org\apache\curator\curator-client\2.7.1\curator-client-2.7.1.jar;C:\Users\abc\.m2\repository\org\apache\curator\curator-recipes\2.7.1\curator-recipes-2.7.1.jar;C:\Users\abc\.m2\repository\org\apache\curator\curator-framework\2.7.1\curator-framework-2.7.1.jar;C:\Users\abc\.m2\repository\com\google\code\findbugs\jsr305\3.0.0\jsr305-3.0.0.jar;C:\Users\abc\.m2\repository\org\apache\commons\commons-compress\1.4.1\commons-compress-1.4.1.jar;C:\Users\abc\.m2\repository\org\tukaani\xz\1.0\xz-1.0.jar;C:\Users\abc\.m2\repository\org\apache\hadoop\hadoop-core\1.2.1\hadoop-core-1.2.1.jar;C:\Users\abc\.m2\repository\org\apache\commons\commons-math\2.1\commons-math-2.1.jar;C:\Users\abc\.m2\repository\tomcat\jasper-runtime\5.5.12\jasper-runtime-5.5.12.jar;C:\Users\abc\.m2\repository\tomcat\jasper-compiler\5.5.12\jasper-compiler-5.5.12.jar;C:\Users\abc\.m2\repository\org\mortbay\jetty\jsp-api-2.1\6.1.14\jsp-api-2.1-6.1.14.jar;C:\Users\abc\.m2\repository\o
rg\mortbay\jetty\servlet-api-2.5\6.1.14\servlet-api-2.5-6.1.14.jar;C:\Users\abc\.m2\repository\org\mortbay\jetty\jsp-2.1\6.1.14\jsp-2.1-6.1.14.jar;C:\Users\abc\.m2\repository\ant\ant\1.6.5\ant-1.6.5.jar;C:\Users\abc\.m2\repository\commons-el\commons-el\1.0\commons-el-1.0.jar;C:\Users\abc\.m2\repository\hsqldb\hsqldb\1.8.0.10\hsqldb-1.8.0.10.jar;C:\Users\abc\.m2\repository\oro\oro\2.0.8\oro-2.0.8.jar;C:\Users\abc\.m2\repository\org\eclipse\jdt\core\3.1.1\core-3.1.1.jar;C:\Users\abc\.m2\repository\com\sun\mail\javax.mail\1.5.1\javax.mail-1.5.1.jar;C:\Users\abc\.m2\repository\javax\activation\activation\1.1\activation-1.1.jar
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:java.library.path=C:\Users\abc\AppData\Local\MyEclipse 2017 CI\binary\com.sun.java.jdk8.win32.x86_64_1.8.0.v112\bin;C:\windows\Sun\Java\bin;C:\windows\system32;C:\windows;C:\tibco\tibrv\8.2\bin;c:\Program Files (x86)\RSA SecurID Token Common;C:\ProgramData\Oracle\Java\javapath;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files\PuTTY\;C:\Users\abc\AppData\Local\Microsoft\WindowsApps;;.
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=C:\Users\abc\AppData\Local\Temp\1\
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:os.name=Windows 10
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:os.version=10.0
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:user.name=abc
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:user.home=C:\Users\abc
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Client environment:user.dir=C:\Users\abc\Workspaces\MyEclipse 2017 CI\javacode
17/08/04 09:18:34 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=en01.com:2181 sessionTimeout=90000 watcher=hconnection-0xba8d91c0x0, quorum=en01.com:2181, baseZNode=/hbase
17/08/04 09:18:34 INFO zookeeper.ClientCnxn: Opening socket connection to server en01.com/10.1.20.30:2181. Will not attempt to authenticate using SASL (unknown error)
17/08/04 09:18:34 INFO zookeeper.ClientCnxn: Socket connection established to en01.com/10.1.20.30:2181, initiating session
17/08/04 09:18:34 INFO zookeeper.ClientCnxn: Session establishment complete on server en01.com/10.1.20.30:2181, sessionid = 0x65da444b0180e25, negotiated timeout = 60000
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri Aug 04 10:01:04 CDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68079: row 'tbl_abc,,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=dn06.com,60020,1501699598837, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:264)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:199)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientSmallReversedScanner.next(ClientSmallReversedScanner.java:145)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1200)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1109)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:293)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:131)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:287)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:267)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:139)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:823)
at com.seagate.dars.imagedefectanalysis.dao.HBaseDao.saveFiles2FS(HBaseDao.java:175)
at com.seagate.dars.imagedefectanalysis.ImageDefectAnalysis.main(ImageDefectAnalysis.java:40)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68079: row 'tbl_defect_analysis_images,,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=dn06.com,60020,1501699598837, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:275)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=dn06.com/ip:60020]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:403)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:709)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:880)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:849)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1173)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:31751)
at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:176)
at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:155)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 6 more
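Since the native HBase client has to reach ZooKeeper and every region server directly, and our Windows machines can only reach the edge node, the route I am evaluating is a gateway process on the edge node instead of the native client. A sketch (role placement and the port are assumptions on my side):

# run the HBase Thrift gateway on the edge node, which can reach the
# region servers; the Windows application then talks only to the edge node
hbase-daemon.sh start thrift -p 9090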