Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2135 | 07-09-2019 12:53 AM |
| | 12463 | 06-23-2019 08:37 PM |
| | 9575 | 06-18-2019 11:28 PM |
| | 10540 | 05-23-2019 08:46 PM |
| | 4911 | 05-20-2019 01:14 AM |
06-14-2016
06:53 AM
Hi. Thank you for your support on this issue! We will proceed to upgrade our production cluster to 5.7.1 now that we know the details for this log entry.
06-13-2016
12:01 PM
This is my test result on the Hadoop master node, which is used for the NameNode and HiveServer2. When I ran beeline to load a local file into a table, I hit the same error. It was a permission issue on the file for the hive user, which owns the HiveServer2 process. It was solved once I granted read permission on the file, including every directory in its path. Please check that the file is readable by the hive user like this: sudo -u hive cat ~/test.txt
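In case it helps, a minimal shell sketch of that check; the /home/someuser/test.txt path is only a hypothetical example, adjust it to wherever your file actually lives:

```
# Verify the hive user (HiveServer2 process owner) can read the file.
sudo -u hive cat /home/someuser/test.txt     # should print the file, not "Permission denied"

# If it fails, grant read on the file and execute (traverse) on each parent directory:
chmod o+r /home/someuser/test.txt
chmod o+x /home /home/someuser
```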
06-10-2016
07:50 AM
Hi, can you please suggest how to add a Python script wrapped in a bash script that is located in another HDFS /apps directory? Regards, Murari
06-10-2016
05:44 AM
Actually this has already been resolved; we changed the CREATE TABLE statement and added #b (hash b, meaning binary) to the column mapping:

create external table md_extract_file_status (
  table_key string,
  fl_counter bigint
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,colfam:FL_Counter#b')
TBLPROPERTIES ('hbase.table.name' = 'HBTABLE');
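A quick hedged way to confirm the #b mapping now decodes the binary-encoded counter correctly; the table and column names come from the DDL above, but the HiveServer2 JDBC URL is only a placeholder:

```
# Hypothetical HiveServer2 host/port; adjust for your cluster.
beeline -u jdbc:hive2://hiveserver2-host:10000 \
  -e "SELECT table_key, fl_counter FROM md_extract_file_status LIMIT 5;"
```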
06-10-2016
12:00 AM
You are using a very old HBase library jar in your code/build. Please use the same version as the cluster, and I'd also recommend using Maven to fetch those dependencies.
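A hedged sketch of how to check that with Maven; the version string mentioned in the comment is only illustrative, match whatever your cluster actually runs:

```
# See which HBase client version your build really pulls onto the classpath.
mvn dependency:tree -Dincludes=org.apache.hbase

# Then pin the hbase-client artifact in your pom.xml to the cluster's version
# (e.g. a CDH-specific version string), instead of bundling an old jar by hand.
```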
06-07-2016
09:27 PM
Glad to hear @AxelJ! Feel free to mark the thread resolved so it's easier for others with similar problems to locate your solution.
06-06-2016
04:16 AM
1 Kudo
Thanks very much, Harsh! Appreciated!
06-01-2016
11:01 PM
Hi Harsh, I configured Flume on AWS to write to an S3 bucket, and while running the Flume agent it throws an error.

My flume sink config:

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.channel = channel1
tier1.sinks.sink1.hdfs.path = s3n://ACCESS_KEY_ID:SECRET_ACCESS_KEY@bucketname/
tier1.sinks.sink1.hdfs.filePrefix = Flumedata
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.hdfs.writeFormat = Text
tier1.sinks.sink1.hdfs.batchSize = 100
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.rollSize = 73060835
tier1.sinks.sink1.hdfs.rollInterval = 0
#tier1.sinks.sink1.hdfs.idleTimeout = 180
#tier1.sinks.sink1.hdfs.closeTries = 0

and the error:

2016-06-01 18:17:53,737 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
java.lang.NoSuchMethodError: org.apache.http.impl.client.DefaultHttpClient.execute(Lorg/apache/http/client/methods/HttpUriRequest;)Lorg/apache/http/client/methods/CloseableHttpResponse;
    at amazon.emr.metrics.ClientUtil.getInstanceId(ClientUtil.java:115)
    at amazon.emr.metrics.MetricsConfig.getInstanceId(MetricsConfig.java:294)
    at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:195)
    at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:182)
    at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:177)
    at amazon.emr.metrics.MetricsSaver.ensureSingleton(MetricsSaver.java:652)
    at amazon.emr.metrics.MetricsSaver.addInternal(MetricsSaver.java:332)
    at amazon.emr.metrics.MetricsSaver.addValue(MetricsSaver.java:178)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1667)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1692)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1627)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:791)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:780)
    at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)
    at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:246)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchMethodError: org.apache.http.impl.client.DefaultHttpClient.execute(Lorg/apache/http/client/methods/HttpUriRequest;)Lorg/apache/http/client/methods/CloseableHttpResponse;
    at amazon.emr.metrics.ClientUtil.getInstanceId(ClientUtil.java:115)
    at amazon.emr.metrics.MetricsConfig.getInstanceId(MetricsConfig.java:294)
    at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:195)
    at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:182)
    at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:177)
    at amazon.emr.metrics.MetricsSaver.ensureSingleton(MetricsSaver.java:652)
    at amazon.emr.metrics.MetricsSaver.addInternal(MetricsSaver.java:332)
    at amazon.emr.metrics.MetricsSaver.addValue(MetricsSaver.java:178)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1667)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1692)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1627)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:791)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:780)
    at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)
    at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:246)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Please help if you are familiar with this. Thanks in advance.
06-01-2016
04:04 PM
Most Hadoop and HBase roles have a configuration servlet that you can visit at their web port's /conf page, for example http://NNHOST:50070/conf, which shows you their loaded configuration.
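A hedged example of pulling a property out of that servlet from the command line; NNHOST and the property name are placeholders, and the exact grep pattern depends on how the XML is laid out:

```
# Fetch the NameNode's live configuration and look for one key in the XML output.
curl -s http://NNHOST:50070/conf | grep -A1 'dfs.namenode.heartbeat.recheck-interval'
```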
06-01-2016
03:12 PM
The NameNode by default would wait up to 10.5 minutes before declaring a non-heartbeating DataNode dead and processing its block list as under-replicated. P.S. It's better to open a new topic per question; it helps others searching for a specific Q&A.
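For reference, a hedged sketch of where the 10.5-minute figure comes from, assuming stock HDFS defaults (dfs.namenode.heartbeat.recheck-interval = 300000 ms, dfs.heartbeat.interval = 3 s):

```
# dead-node interval = 2 * heartbeat recheck interval + 10 * heartbeat interval
echo $(( 2 * 300 + 10 * 3 ))   # 630 seconds, i.e. 10.5 minutes
```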