Member since
08-07-2017
144
Posts
3
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2292 | 03-05-2019 12:48 AM
 | 9443 | 11-06-2017 07:28 PM
10-16-2017
10:47 PM
Thanks weichiu for the help.
10-16-2017
03:55 AM
I don't have the source file though; do I need to restore it again?

Thanks,
Priya
10-16-2017
01:28 AM
Hi weichiu,

Thanks for the reply. Does that mean I need to back the files up into a har file again, or not? I found that there are files with .tsb in the two folders that I am archiving. Please suggest.

Thanks,
Priya
10-15-2017
09:25 PM
Hello All,

I have a .har file on HDFS for which I am trying to list the files it archived, but I am getting the error below on a CDH 5.9.2 cluster.

[user1@usnbka700p ~]$ hdfs dfs -ls har:///user/user1/HDFSArchival/Output1/Archive-13-10-2017-03-10.har
-ls: Fatal internal error
java.lang.ArrayIndexOutOfBoundsException: 1
        at org.apache.hadoop.fs.HarFileSystem$HarStatus.<init>(HarFileSystem.java:597)
        at org.apache.hadoop.fs.HarFileSystem$HarMetaData.parseMetaData(HarFileSystem.java:1201)
        at org.apache.hadoop.fs.HarFileSystem$HarMetaData.access$000(HarFileSystem.java:1098)
        at org.apache.hadoop.fs.HarFileSystem.initialize(HarFileSystem.java:166)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2711)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
        at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235)
        at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218)
        at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:102)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)

However, I can see the size of the .har file:

hdfs dfs -du -s -h /user/user1/HDFSArchival/Output1/Archive-13-10-2017-03-10.har
16.5 G  49.5 G  /user/user1/HDFSArchival/Output1/Archive-13-10-2017-03-10.har

Also, hdfs dfs -ls works on other archives:

hdfs dfs -ls har:///user/user1/HDFSArchival/Output1/Archive-12-10-2017-07-10.har
Found 1 items
drwxr-xr-x   - user1 user1          0 2017-10-12 07:12 har:///user/user1/HDFSArchival/Output1/Archive-12-10-2017-07-10.har/ArchivalTemp

Can you please suggest on this?

Thanks,
Priya
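For readers who hit this later: the stack trace points at HarFileSystem parsing the archive's _index metadata, which reads fields out of each index line by position, so a malformed line surfaces as ArrayIndexOutOfBoundsException during listing. A minimal Python sketch of that failure mode (the index-line layout here is simplified for illustration and is not Hadoop's exact on-disk format):

```python
# Hedged sketch (not Hadoop's actual parser): HarFileSystem reads the har's
# _index file and picks fields out of each line by position, which is why a
# malformed line shows up as ArrayIndexOutOfBoundsException. The real format
# is more involved; this only illustrates the failure mode.
from urllib.parse import unquote

def parse_index_line(line):
    """Split a simplified '_index'-style line: '<urlencoded-name> <type> ...'"""
    fields = line.split(" ")
    # Positional access: a line with fewer fields than expected raises
    # IndexError (Python's analogue of Java's ArrayIndexOutOfBoundsException).
    return unquote(fields[0]), fields[1], fields[2:]

path, kind, rest = parse_index_line("/user/user1/data/file1.tsb file part-0")
print(path, kind)  # /user/user1/data/file1.tsb file

try:
    parse_index_line("%2Fbroken-entry")  # hypothetical entry missing fields
except IndexError as e:
    print("parse failed:", e)
```

On a real cluster, the archive's metadata lives in _index and _masterindex under the .har directory, so inspecting those files (e.g. with hdfs dfs -cat) may show which entry is malformed.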
Labels:
- Cloudera Manager
09-11-2017
04:49 AM
I forgot to add the namespace along with the table name; now it's working fine.

Thanks,
Priya
09-11-2017
01:07 AM
Hi All,

We are getting the exceptions below for an HBase load (the start of the captured log is truncated).

As(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1912)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-09-11 01:27:18,399 WARN [main] org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch hbase:meta table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in hbase:meta for table: asset, row=asset,367abb38-dc1c-43d8-bcfa-5ca9cd560f7a,99999999999999
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:146)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1140)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1204)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1092)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1049)
        at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:365)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:310)
        at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:965)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1281)
        at org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat$MultiTableRecordWriter.close(MultiTableOutputFormat.java:115)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:670)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1912)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-09-11 01:27:18,405 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ipdevusr (auth:SIMPLE) cause:org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 14 actions: asset: 14 times,
2017-09-11 01:27:18,405 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 14 actions: asset: 14 times,
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:192)
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:176)
        at org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:913)
        at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:985)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1281)
        at org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat$MultiTableRecordWriter.close(MultiTableOutputFormat.java:115)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:670)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1912)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-09-11 01:27:18,411 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
2017-09-11 01:27:18,515 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2017-09-11 01:27:18,515 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2017-09-11 01:27:18,516 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.

I ran the list and scan commands through the hbase shell, and both gave the expected output with no errors. Please suggest.

Thanks,
Priya
Labels:
- Cloudera Manager
08-30-2017
03:40 AM
The /etc/hosts file looks fine. Below is the output.

[root@LinuxUL etc]# cat hosts
127.0.0.1 localhost
10.68.200.34 LinuxUL.ad.infosys.com LinuxUL
10.68.200.152 linux152.ad.infosys.com linux152
10.68.200.170 linux170.ad.infosys.com linux170
172.21.5.224 nfrsat01.ad.infosys.com nfrsat01
10.67.200.77 blrsat06.ad.infosys.com blrsat06
[root@LinuxUL etc]# netstat -ltnp | grep 7182
[root@LinuxUL etc]# netstat -ltnp | grep 9000
[root@LinuxUL etc]# netstat -ltnp | grep 9001
tcp        0      0 127.0.0.1:19001    0.0.0.0:*    LISTEN    8086/python
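The empty grep results above are the telling part: nothing is listening on ports 7182 or 9000 on this host. As a quick cross-check without netstat, a small TCP probe can confirm the same thing (a generic sketch; the port numbers are simply the ones discussed in this thread):

```python
# Hedged sketch: a stand-in for `netstat -ltnp | grep <port>` -- attempt a
# TCP connection to the port on localhost and report whether anything accepts.
import socket

def is_listening(port, host="127.0.0.1", timeout=1.0):
    """Return True if a TCP connect to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 7182 and 9000 are the ports from this thread; adjust for your deployment.
for port in (7182, 9000, 9001):
    print(port, "listening" if is_listening(port) else "no listener")
```

If the Cloudera Manager server and agent are healthy, their ports should show a listener here just as they would in netstat; "no listener" on an expected port points at the corresponding daemon being down.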
08-30-2017
03:14 AM
Hi,

There are also other errors that I came across in the log, as below (several lines are truncated in the capture).

{"name":"HeapDumpOnOutOfMemoryError","origin":"VM_CREATION","value":"true","writeable":true},
Error accessing http://LinuxUL.ad.infosys.com:9000/process/2718-hdfs-DATANODE/files/logs/stdout.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200
2017-08-29 05:23:53,995 ERROR DataArchiver-22:com.cloudera.cmf.command.datacollection.AgentLogArchiver: Failed to collect agent log from host LinuxUL.ad.infosys.com
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
ERROR DataArchiver-19:com.cloudera.cmf.command.datacollection.ProcessStdoutStderrArchiver: Error accessing http://LinuxUL.ad.infosys.com:9000/process/2720-hdfs-JOURNALNODE/files/logs/stdout.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSo
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2017-08-29 05:23:53,995 ERROR DataArchiver-12:com.cloudera.cmf.command.datacollection.ProcessStdoutStderrArchiver: Error accessing http://LinuxUL.ad.infosys.com:9000/process/2828-zookeeper-server/files/logs/stdout.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
ERROR DataArchiver-17:com.cloudera.cmf.command.datacollection.FullTextLogArchiver: Error collecting log /home/log/hadoop-0.20-mapreduce/hadoop-cmf-mapreduce-TASKTRACKER-LinuxUL.ad.infosys.com.log.out
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
ERROR DataArchiver-24:com.cloudera.cmf.command.datacollection.ProcessStdoutStderrArchiver: Error accessing http://LinuxUL.ad.infosys.com:9000/process/2718-hdfs-DATANODE/files/logs/stderr.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native
ERROR DataArchiver-21:com.cloudera.cmf.command.datacollection.ProcessStdoutStderrArchiver: Error accessing http://LinuxUL.ad.infosys.com:9000/process/2756-mapreduce-TASKTRACKER/files/logs/stderr.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
ERROR DataArchiver-2:com.cloudera.cmf.command.datacollection.ProcessStdoutStderrArchiver: Error accessing http://LinuxUL.ad.infosys.com:9000/process/2867-hbase-REGIONSERVER/files/logs/stdout.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.Abstr
ERROR DataArchiver-16:com.cloudera.cmf.command.datacollection.ProcessStdoutStderrArchiver: Error accessing http://LinuxUL.ad.infosys.com:9000/process/2905-yarn-NODEMANAGER/files/logs/stderr.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
ERROR DataArchiver-15:com.cloudera.cmf.command.datacollection.ProcessStdoutStderrArchiver: Error accessing http://LinuxUL.ad.infosys.com:9000/process/2778-kafka-KAFKA_BROKER/files/logs/stdout.log
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)

Please help.

Thanks,
Priya
08-30-2017
02:53 AM
Hello All,

The agent on one of the nodes in the cluster is not running. The status of the agent is below.

sudo service cloudera-scm-agent status
cloudera-scm-agent dead but pid file exists

When I checked the logs for that host, I came across the error below.

(5 skipped) Unable to retrieve remote parcel repository manifest
java.util.concurrent.ExecutionException: java.net.ConnectException: http://archive.cloudera.com/spark/parcels/latest/manifest.json
at com.ning.http.client.providers.netty.NettyResponseFuture.abort(NettyResponseFuture.java:297)
at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:104)
at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:399)
at org.jboss.netty.channel.DefaultChannelFuture.addListener(DefaultChannelFuture.java:145)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.doConnect(NettyAsyncHttpProvider.java:1041)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.execute(NettyAsyncHttpProvider.java:858)
at com.ning.http.client.AsyncHttpClient.executeRequest(AsyncHttpClient.java:512)
at com.ning.http.client.AsyncHttpClient$BoundRequestBuilder.execute(AsyncHttpClient.java:234)
at com.cloudera.parcel.components.ParcelDownloaderImpl.getRepositoryInfoFuture(ParcelDownloaderImpl.java:530)
at com.cloudera.parcel.components.ParcelDownloaderImpl.getRepositoryInfo(ParcelDownloaderImpl.java:488)
at com.cloudera.parcel.components.ParcelDownloaderImpl.syncRemoteRepos(ParcelDownloaderImpl.java:342)
at com.cloudera.parcel.components.ParcelDownloaderImpl$1.run(ParcelDownloaderImpl.java:412)
at com.cloudera.parcel.components.ParcelDownloaderImpl$1.run(ParcelDownloaderImpl.java:407)
at com.cloudera.cmf.persist.ReadWriteDatabaseTaskCallable.call(ReadWriteDatabaseTaskCallable.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: http://archive.cloudera.com/spark/parcels/latest/manifest.json
at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:100)
... 16 more
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:139)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:102)
at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:55)
at org.jboss.netty.handler.codec.http.HttpClientCodec.handleDownstream(HttpClientCodec.java:97)
at org.jboss.netty.handler.stream.ChunkedWriteHandler.handleDownstream(ChunkedWriteHandler.java:108)
at org.jboss.netty.channel.Channels.connect(Channels.java:642)
at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:204)
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:230)
at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:183)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.doConnect(NettyAsyncHttpProvider.java:999)
... 13 more

Also, under the host's Parcels section I see the error "Error when distributing to LinuxUL.ad.infosys.com : Host is in bad health." for two parcels, CDH 5 and KAFKA.

We used http://www.cloudera.com/documentation/enterprise/5-4-x/topics/cm_ig_install_path_c.html to install the Cloudera cluster. We are using a 3-node CDH 5.4 cluster.

Can you please help me with this issue?

Thanks,
Priya
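One detail worth noting in the trace above: the root cause is java.nio.channels.UnresolvedAddressException, meaning the repo host name never resolved to an address on that node. A quick generic check of name resolution (a sketch, not a Cloudera tool) might look like:

```python
# Hedged sketch: confirm DNS resolution from the affected node, since the
# stack trace ends in UnresolvedAddressException for archive.cloudera.com.
import socket

def resolves(hostname):
    """Return True if hostname resolves to at least one address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# archive.cloudera.com is the host from the stack trace above.
for host in ("localhost", "archive.cloudera.com"):
    print(host, "resolves" if resolves(host) else "does NOT resolve")
```

If the repo host does not resolve, the fix is on the DNS/proxy side (or pointing Cloudera Manager at a reachable local parcel repository) rather than in the agent itself.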
Labels:
- Cloudera Manager
08-28-2017
03:40 AM
Hi marccasajus,

Today I saw the same error for the agent again:

sudo service cloudera-scm-agent status
cloudera-scm-agent dead but pid file exists

In agent.out I can see:

2937-collect-host-statistics: stopped
2937-collect-host-statistics: removed process group
2940-host-inspector: stopped
2940-host-inspector: removed process group

And in agent.log:

[25/Jun/2017 13:31:29 +0000] 8060 Monitor-SolrServerMonitor throttling_logger ERROR (52 skipped) Error fetching Solr core status at 'http://LinuxUL.ad.infosys.com:8983/solr//admin/cores?wt=json&action=STATUS'

Please help.

Thanks,
Priya