Member since: 12-14-2015
Posts: 27
Kudos Received: 22
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 17676 | 03-17-2016 08:39 AM |
02-24-2016
08:59 AM
1 Kudo
@Neeraj Sabharwal Thanks for the reply. In my case that is not the solution, because when I run hadoop checknative -a I see that the snappy lib is reported as true, located at /usr/hdp/2.3.4.0-3485/hadoop/lib/native/libsnappy.so.1.
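For reference, hadoop checknative -a reports on the Hadoop install it runs against; a minimal sketch for checking the same thing from the JVM that actually runs the upload code, using Hadoop's public NativeCodeLoader API (illustrative only, not a guaranteed fix):

```java
import org.apache.hadoop.util.NativeCodeLoader;

// Illustrative check: run this with the same classpath and java.library.path
// as the program that creates the Snappy output stream.
public class NativeSnappyCheck {
    public static void main(String[] args) {
        boolean libhadoopLoaded = NativeCodeLoader.isNativeCodeLoaded();
        System.out.println("libhadoop loaded:  " + libhadoopLoaded);
        if (libhadoopLoaded) {
            // Only safe to call once libhadoop itself has been loaded.
            System.out.println("snappy supported:  " + NativeCodeLoader.buildSupportsSnappy());
        }
        System.out.println("java.library.path: " + System.getProperty("java.library.path"));
    }
}
```

If libhadoop loads but snappy support is reported false, the client JVM is most likely picking up a different libhadoop than the one under /usr/hdp/2.3.4.0-3485/hadoop/lib/native.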
02-24-2016
08:52 AM
2 Kudos
@Artem Ervits I just ran the test with the example from the Definitive Guide and I still get exactly the same error:
    Exception in thread "main" java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
Any idea?
02-23-2016
03:22 PM
1 Kudo
Hi Artem, thanks for the fast reply. I don't really understand how it will work without these lines:
    compressedOutput.write(myLine.getBytes());
    compressedOutput.write('\n');
    compressedOutput.flush();
    compressedOutput.close();
How will it write to HDFS? Also, if I remove the first part, when will the configuration be used? Can you give me an example? I don't see how it works without the part you mention :s Thanks in advance
02-23-2016
02:44 PM
2 Kudos
here's the piece of code:
    Path outFile = new Path(destPathFolder.toString() + "/" + listFolder[i].getName() + "_" + listFiles[b].getName() + ".txt");
    FSDataOutputStream fin = dfs.create(outFile);
    Configuration conf = new Configuration();
    conf.setBoolean("mapreduce.map.output.compress", true);
    conf.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.SnappyCodec");
    CompressionCodecFactory codecFactory = new CompressionCodecFactory(conf);
    CompressionCodec codec = codecFactory.getCodecByName("SnappyCodec");
    CompressionOutputStream compressedOutput = codec.createOutputStream(fin);
    FileReader input = new FileReader(listFiles[b]);
    BufferedReader bufRead = new BufferedReader(input);
    String myLine = null;
    while ((myLine = bufRead.readLine()) != null) {
        if (!myLine.isEmpty()) {
            compressedOutput.write(myLine.getBytes());
            compressedOutput.write('\n');
        }
    }
    compressedOutput.flush();
    compressedOutput.close();
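For comparison, a minimal sketch of the same write path rearranged with try-with-resources and a stream copy (IOUtils.copyBytes) instead of the line-by-line loop; the class name, local file, and HDFS target path are placeholders, and it still assumes the client can load a snappy-enabled libhadoop:

```java
// Hypothetical sketch, not the original code: copy a local file into HDFS
// through the Snappy codec. "myfile.txt" and "/tmp/myfile.txt.snappy" are
// placeholders; a snappy-enabled libhadoop is still required on the client.
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;

public class SnappyPut {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodecByName("SnappyCodec");

        Path target = new Path("/tmp/myfile.txt.snappy");
        try (InputStream in = new FileInputStream("myfile.txt");
             CompressionOutputStream out = codec.createOutputStream(fs.create(target))) {
            // Compresses while copying; closing "out" also finishes the codec
            // and closes the underlying HDFS stream.
            IOUtils.copyBytes(in, out, 4096, false);
        }
    }
}
```

Whether the loop or the stream copy is used, the codec side is identical, so this arrangement does not by itself avoid the "native snappy library not available" error.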
02-23-2016
02:37 PM
3 Kudos
Hi, I hope this is the right place to ask the following question 🙂 I am trying to put a file into HDFS with Snappy compression. I wrote some Java code for that, and when I run it on my cluster I get the following exception:
    Exception in thread "main" java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
        at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
        at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
        at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
        at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:99)
Apparently the snappy library is not available... I checked on the OS with "rpm -qa | less | grep snappy" and both snappy and snappy-devel are present. In the HDFS configuration (core-site.xml), org.apache.hadoop.io.compress.SnappyCodec is present in the io.compression.codecs field. Does anyone have an idea why it's not working? Thanks in advance
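As a side note, the io.compression.codecs entry can also be checked from the client side; a small, illustrative snippet using the public CompressionCodecFactory API (codec registration in core-site.xml is a separate question from whether the native libhadoop/libsnappy actually loads):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

// Illustrative: print the codecs the client-side Configuration registers.
public class ListCodecs {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        System.out.println("io.compression.codecs = " + conf.get("io.compression.codecs"));
        for (Class<? extends CompressionCodec> codecClass :
                CompressionCodecFactory.getCodecClasses(conf)) {
            System.out.println("registered codec: " + codecClass.getName());
        }
    }
}
```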
Labels:
- Apache Hadoop
- HDFS
02-16-2016
09:49 AM
1 Kudo
Thanks for the fast reply guys! 🙂
02-12-2016
02:05 PM
3 Kudos
Hi, is there a way to specify how much resource a user is allowed to use for his HBase queries? The objective is to be able to define a group A that can use 40% of the resources and a group B that can use 60%, with both groups querying the same HBase cluster. Something like what the YARN queue manager does, but applied to every query launched by every user? Thanks for the info, Michel
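One related mechanism worth mentioning is HBase request throttling (quotas), which caps request rates per user, table, or namespace rather than granting percentage shares; a minimal sketch, assuming HBase 1.1+ with hbase.quota.enabled=true, and with user names and limits that are purely illustrative:

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

// Illustrative sketch only: throttle two hypothetical users to different
// request rates on the same cluster. This is a hard cap per user, not a
// 40%/60% share of total capacity.
public class HBaseThrottleExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            admin.setQuota(QuotaSettingsFactory.throttleUser(
                    "userA", ThrottleType.REQUEST_NUMBER, 400, TimeUnit.SECONDS));
            admin.setQuota(QuotaSettingsFactory.throttleUser(
                    "userB", ThrottleType.REQUEST_NUMBER, 600, TimeUnit.SECONDS));
        }
    }
}
```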
Labels:
- Apache HBase
- Cloudera Manager
02-01-2016
01:22 PM
Okok. Stupid question: how do I show the list of tables in Phoenix from Zeppelin? I tried show table, show tables, list table, and !table; none of them works. Is that normal? Thanks 🙂
02-01-2016
01:03 PM
2 Kudos
Hi, I set up Zeppelin; %spark and %hive are working perfectly, but I have something odd with %sql. For context, I have an HBase table and Hive is able to query that HBase table. When I do %sql show tables I can see all the table names, but when I run the following query: %sql Select * from table1 I get the following error:
    MetaException(message:java.lang.ClassNotFoundException Class org.apache.hadoop.hive.hbase.HBaseSerDe not found)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:346)
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:288)
        at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:281)
        at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:631)
        at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:189)
        at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:1017)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getTableOption$1.apply(ClientWrapper.scala:202)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$getTableOption$1.apply(ClientWrapper.scala:198)
        at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:156)
        at org.apache.spark.sql.hive.client.ClientWrapper.getTableOption(ClientWrapper.scala:198)
        at org.apache.spark.sql.hive.client.ClientInterface$class.getTable(ClientInterface.scala:112)
        at org.apache.spark.sql.hive.client.ClientWrapper.getTable(ClientWrapper.scala:61)
        at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:227)
        at ....................
Any idea? Thanks in advance, Michel
Labels:
- Apache HBase
- Apache Hive
- Apache Zeppelin
01-27-2016
03:29 PM
Hi, I'm new to NiFi. I would like to know how to download files from multiple servers in parallel over SFTP. The number of servers can change over time, and the list of servers (hostnames) is stored in Hive. So my second question is: how can I use the result of that Hive query, which may contain several hostnames, as the input of GetSFTP? I don't clearly see how to do that in the documentation; can anyone help me? Thanks in advance, Michel
Labels:
- Apache NiFi