Member since: 02-11-2014
Posts: 162
Kudos Received: 2
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 4210 | 12-04-2015 12:46 PM |
 | 5793 | 02-12-2015 01:06 PM |
 | 4604 | 03-20-2014 12:41 PM |
 | 9817 | 03-19-2014 08:54 AM |
02-01-2019
02:39 PM
@Harsh J How would I do this for just one job? I tried the settings below, but they are not working. The issue is that I want to use a version of Jersey that I bundled into my fat JAR; however, the gateway node has an older version of that JAR, and a class gets loaded from there, resulting in a NoSuchMethodException. My application is not a MapReduce job; I run it using hadoop jar on CDH 5.14.4.
export HADOOP_USER_CLASSPATH_FIRST=true
export HADOOP_CLASSPATH=/projects/poc/test/config:$HADOOP_CLASSPATH
09-04-2018
04:08 PM
LOL... just checked the dates. It has been quite a while since Neta Je posted this (09-03-2015!). I just saw your response, @Koti_Karri, and followed along ;-)
01-16-2018
01:33 AM
Hi, it's been a while! If I remember correctly, we did not find any solution back then (with CDH 5.3.0), other than recreating the collection and re-indexing the data. But after upgrading to a CDH version whose Solr supports the ADDREPLICA and DELETEREPLICA actions in the Collections API, you can add another replica and then delete the one that is down. Regards, mathieu
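For anyone hitting the same situation, those two actions are plain HTTP calls against the Collections API. Below is a minimal Java sketch; the host, collection, shard, and core_node names are hypothetical, and you would first look up the down replica's name in the CLUSTERSTATUS output:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReplaceDownReplica {
    // Hypothetical Solr base URL; point this at any live Solr node.
    static final String SOLR = "http://solr-host:8983/solr";

    static String call(String query) throws Exception {
        URL url = new URL(SOLR + "/admin/collections?" + query);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) body.append(line).append('\n');
            return body.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        // 1) Add a fresh replica of the affected shard.
        System.out.println(call("action=ADDREPLICA&collection=myCollection&shard=shard1"));
        // 2) Once the new replica is active, drop the dead one by its core-node name.
        System.out.println(call("action=DELETEREPLICA&collection=myCollection&shard=shard1&replica=core_node3"));
    }
}
```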
12-29-2017
07:35 PM
Why not compact the historical data? For example, compact the daily files into one file per day for everything older than now minus 14 days: a compaction job that runs daily and compacts the data that is more than two weeks old. This way you can make sure you are not impacting data freshness.
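As a rough illustration of that idea (a sketch, not tested code): a daily driver could merge each day directory older than the freshness window into a single file with the Hadoop FileSystem API. The /data layout and the FileUtil.copyMerge choice here are assumptions, and copyMerge only suits formats that can be concatenated, such as plain text:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class DailyCompactor {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical layout: small files land under /data/dt=YYYY-MM-DD.
        // The daily driver passes in one day older than the 14-day window.
        String day = args[0]; // e.g. "2017-12-15"
        Path src = new Path("/data/dt=" + day);
        Path dst = new Path("/data-compacted/dt=" + day + "/part-00000");

        // Merge every file in the day directory into one file, deleting
        // the small source files once the merge succeeds.
        boolean ok = FileUtil.copyMerge(fs, src, fs, dst, true, conf, null);
        System.out.println(day + (ok ? " compacted" : " skipped"));
    }
}
```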
09-15-2016
05:50 AM
Hi, I am running the Cloudera sandbox on VMware. I want to connect from Windows using a Java program and create a table in HBase.
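A minimal sketch of such a client with the HBase 1.x Java API (the CDH 5 generation). The quorum host quickstart.cloudera is an assumption based on the Cloudera QuickStart VM defaults; the VM's hostname must also resolve from the Windows machine, since ZooKeeper returns hostnames rather than IPs:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateHBaseTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Hypothetical sandbox address; map this hostname to the VM's IP in
        // C:\Windows\System32\drivers\etc\hosts so it resolves from Windows.
        conf.set("hbase.zookeeper.quorum", "quickstart.cloudera");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            // Create a table with a single column family "cf".
            HTableDescriptor table = new HTableDescriptor(TableName.valueOf("test_table"));
            table.addFamily(new HColumnDescriptor("cf"));
            if (!admin.tableExists(table.getTableName())) {
                admin.createTable(table);
            }
        }
    }
}
```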
07-07-2016
07:31 PM
LazyOutputFormat is available for both APIs. Here's the one for the older API: http://archive.cloudera.com/cdh5/cdh/5/hadoop/api/org/apache/hadoop/mapred/lib/LazyOutputFormat.html
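For the older API the wiring is a one-liner on the JobConf; a small sketch, with TextOutputFormat standing in for whatever format the job actually wraps:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.LazyOutputFormat;

public class LazyOutputSetup {
    public static void configure(JobConf conf) {
        // Wrap the real output format so empty part files are not
        // created for tasks that never write a record.
        LazyOutputFormat.setOutputFormatClass(conf, TextOutputFormat.class);
    }
}
```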
03-07-2016
04:33 PM
Thanks Harsh. Yes, my key was a void, so I changed the Avro output to use AvroKeyOutputFormat (the earlier value is now the key) instead of AvroKeyValueOutputFormat, and it worked. Thanks, Nishanth
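For later readers, the change Nishanth describes amounts to roughly this job setup (a sketch; the Schema argument stands in for whatever your generated Avro record class exposes):

```java
import org.apache.avro.Schema;
import org.apache.avro.mapreduce.AvroJob;
import org.apache.avro.mapreduce.AvroKeyOutputFormat;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;

public class AvroKeySetup {
    public static void configure(Job job, Schema recordSchema) {
        // The record now travels as the key; the value slot stays empty.
        AvroJob.setOutputKeySchema(job, recordSchema);
        job.setOutputFormatClass(AvroKeyOutputFormat.class);
        job.setOutputValueClass(NullWritable.class);
    }
}
```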
12-30-2015
05:27 AM
Hi Darren, We had a similar problem while installing Cloudera Manager 5.5: the cloudera-scm-server process was failing after a few seconds, and the scm-server.out file indicated that the log4j file was not available. We uninstalled CM and re-installed it; however, the problem persisted. As per your suggestion, we manually created a log4j.properties file and added the contents. We then:
1) stopped cloudera-scm-server-db
2) restarted postgresql
3) started cloudera-scm-server-db
4) started cloudera-scm-server
...and it worked! We were immediately able to access CM via the browser. Thanks a lot for your help. Regards, Yogesh
12-04-2015
12:46 PM
I tried using the AvroParquetOutputFormat and MultipleOutputs classes and was able to generate Parquet files for one schema type. For the other schema type, I am running into the error below. Any help is appreciated.
java.lang.ArrayIndexOutOfBoundsException: 2820
    at org.apache.parquet.io.api.Binary.hashCode(Binary.java:489)
    at org.apache.parquet.io.api.Binary.access$100(Binary.java:34)
    at org.apache.parquet.io.api.Binary$ByteBufferBackedBinary.hashCode(Binary.java:382)
    at org.apache.parquet.it.unimi.dsi.fastutil.objects.Object2IntLinkedOpenHashMap.getInt(Object2IntLinkedOpenHashMap.java:587)
    at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainBinaryDictionaryValuesWriter.writeBytes(DictionaryValuesWriter.java:235)
    at org.apache.parquet.column.values.fallback.FallbackValuesWriter.writeBytes(FallbackValuesWriter.java:162)
    at org.apache.parquet.column.impl.ColumnWriterV1.write(ColumnWriterV1.java:203)
    at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.addBinary(MessageColumnIO.java:347)
    at org.apache.parquet.avro.AvroWriteSupport.writeValue(AvroWriteSupport.java:257)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:167)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecord(AvroWriteSupport.java:149)
    at org.apache.parquet.avro.AvroWriteSupport.writeValue(AvroWriteSupport.java:262)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:167)
    at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:142)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
    at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
    at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
    at org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat$LazyRecordWriter.write(LazyOutputFormat.java:115)
    at org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.write(MultipleOutputs.java:457)
    at com.visa.dps.mapreduce.logger.LoggerMapper.map(LoggerMapper.java:271)
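For context, the stack trace shows MultipleOutputs writing through LazyOutputFormat into Parquet, so the job setup presumably looks roughly like the sketch below; the named-output names and key/value classes are assumptions. One thing to rule out: AvroParquetOutputFormat.setSchema stores a single schema in the job configuration, so a second record type needs its own schema plumbing per named output:

```java
import org.apache.avro.Schema;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.parquet.avro.AvroParquetOutputFormat;

public class TwoSchemaParquetSetup {
    public static void configure(Job job, Schema schemaA, Schema schemaB) {
        // Defer part-file creation until something is actually written.
        LazyOutputFormat.setOutputFormatClass(job, AvroParquetOutputFormat.class);
        // One named output per record type ("typeA"/"typeB" are illustrative).
        MultipleOutputs.addNamedOutput(job, "typeA",
                AvroParquetOutputFormat.class, Void.class, Object.class);
        MultipleOutputs.addNamedOutput(job, "typeB",
                AvroParquetOutputFormat.class, Void.class, Object.class);
        // Caveat: this sets ONE schema job-wide (schemaB has nowhere to go
        // here), so records of the other schema written through the same
        // configuration are a likely trouble spot.
        AvroParquetOutputFormat.setSchema(job, schemaA);
    }
}
```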
02-18-2015
05:55 AM
Thank you so much for your advice, Stefano