Member since: 10-27-2014
Posts: 38
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2945 | 04-07-2015 01:13 AM |
 | 16718 | 12-18-2014 02:21 AM |
07-18-2018
09:09 AM
Thank you for your reply.
09-14-2017
02:54 PM
When you run OfflineMetaRepair, you will most likely run it as your own user or as root. Afterwards you may hit opaque errors like "java.lang.AbstractMethodError: org.apache.hadoop.hbase.ipc.RpcScheduler.getWriteQueueLength()". If you check in HDFS, you may see that the meta directory is no longer owned by hbase:
$ hdfs dfs -ls /hbase/data/hbase/
Found 2 items
drwxr-xr-x - root hbase 0 2017-09-12 13:58 /hbase/data/hbase/meta
drwxr-xr-x - hbase hbase 0 2016-06-15 15:02 /hbase/data/hbase/namespace
Manually running chown -R on the meta directory and restarting HBase fixed it for me.
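For anyone hitting the same thing, a minimal sketch of that fix, assuming the standard hbase:hbase ownership shown for the namespace directory above (run the chown as the HDFS superuser, then restart HBase):
$ sudo -u hdfs hdfs dfs -chown -R hbase:hbase /hbase/data/hbase/meta
$ sudo -u hdfs hdfs dfs -ls /hbase/data/hbase/   # verify both directories are owned by hbase again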
04-15-2015
09:58 PM
Oh, I finally got it working. Here is my HQL:
SELECT id, part.lock, part.key FROM mytable LATERAL VIEW explode(parts) parttable AS part;
Many thanks, chrisf!
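For later readers: LATERAL VIEW explode(parts) emits one row per element of the parts array, and the AS part alias lets the SELECT reach into each struct. A sketch of running the same query from the shell, assuming parts is an ARRAY of STRUCTs with lock and key fields (the schema is an assumption, not from the original post):
# assumed schema: mytable(id INT, parts ARRAY<STRUCT<lock:STRING, key:STRING>>)
$ hive -e "SELECT id, part.lock, part.key FROM mytable LATERAL VIEW explode(parts) parttable AS part;"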
04-07-2015
01:13 AM
Found my solution. I needed to add 2 files, db.hsqldb.properties and db.hsqldb.script, to the Oozie job; after that the job works fine. I still don't understand why, because I don't need these 2 files when running the import directly.
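For anyone with the same problem, a sketch of one way to ship those two files with the job, assuming a hypothetical workflow application directory of /user/me/apps/my-sqoop-wf: upload them next to workflow.xml and reference each one with a <file> element inside the action so Oozie places it in the task's working directory.
$ hdfs dfs -put db.hsqldb.properties db.hsqldb.script /user/me/apps/my-sqoop-wf/
# then add <file>db.hsqldb.properties</file> and <file>db.hsqldb.script</file> to the action in workflow.xml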
02-03-2015
07:58 AM
1 Kudo
Each file uses a minimum of one block entry (though that block will only be the size of the actual data). So if you are adding 2736 folders each with 200 files that's 2736 * 200 = 547,200 blocks. Do the folders represent some particular partitioning strategy? Can the files within a particular folder be combined into a single larger file? Depending on your source data format, you may be better off looking at something like Kite to handle the dataset management for you.
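A quick way to gauge the current scale, with /data/incoming standing in for your ingest directory (the path is illustrative):
$ hdfs dfs -count /data/incoming   # prints DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME
$ hdfs fsck /data/incoming -files -blocks | grep 'Total blocks'   # total block count under that path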
01-07-2015
07:18 PM
2 Kudos
> GenericJDBCException: Could not open connection
This implies Cloudera Manager is unable to connect to its database. If you're using the embedded PostgreSQL database, ensure it is running. Then look at /etc/cloudera-scm-server/db.properties, take the connection settings and credentials from it, and try to connect to the database directly with something like "psql -h host -p port -U user database" (psql prompts for the password; note that -p is the port, not the password). Does it work?
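A sketch of that check, assuming the embedded database; the service name, port, user and database below are the usual embedded defaults, so substitute the real values from db.properties:
$ sudo service cloudera-scm-server-db status      # embedded PostgreSQL service
$ sudo cat /etc/cloudera-scm-server/db.properties # host, port, database name, user, password
$ psql -h localhost -p 7432 -U scm scm            # prompts for the password from db.properties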
12-18-2014
09:00 PM
1 Kudo
This behavior is the same as in any system that uses floating-point representation: the binary float format cannot represent most decimal values exactly. There is a lot of literature on the subject, e.g. http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html. If you want exact values, it's better to go with decimal, where you can specify precision and scale. Thanks, Szehon
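A quick way to see the difference in Hive (a sketch; the exact display formatting varies by version):
$ hive -e "SELECT CAST(0.1 AS DOUBLE) + CAST(0.2 AS DOUBLE), CAST(0.1 AS DECIMAL(10,2)) + CAST(0.2 AS DECIMAL(10,2));"
# the DOUBLE sum typically comes back as 0.30000000000000004, the DECIMAL sum as exactly 0.3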
12-18-2014
02:21 AM
Hello masfworld, I've found my solution here, hope this'll help you too: http://community.cloudera.com/t5/Cloudera-Search-Apache-SolrCloud/Solr-Server-not-starting/m-p/4839#M97
11-19-2014
09:01 PM
Hi romain, I'm using CDH 5.2. I've added "[desktop] app_blacklist=" to the Service-Wide Configuration tab of Hue in Cloudera Manager, but Spark still does not show up in Hue. I also tried to edit hue.ini in /etc/hue/conf, but nothing happened either; the desktop configuration page in Hue still shows app_blacklist = spark, with the description "Comma separated list of apps to not load at server startup. Default:". Can you provide any help? Thanks!
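A note for readers debugging the same thing: on a CM-managed cluster the Hue server does not read /etc/hue/conf; Cloudera Manager generates a per-process configuration under the agent's process directory. A sketch of checking what the running server actually sees (the glob is an assumption about the directory naming):
$ sudo ls -d /var/run/cloudera-scm-agent/process/*HUE*
$ sudo grep -R app_blacklist /var/run/cloudera-scm-agent/process/*HUE*/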
11-19-2014
01:53 AM
It looks like you asked for more resources than you configured YARN to offer, so check how much you can allocate in YARN and how much Spark asked for. I don't know about the ERROR; it may be a red herring. Please have a look at http://spark.apache.org/docs/latest/ for pretty good Spark docs.
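A sketch of that comparison with example values: check the relevant YARN limits, then keep the spark-submit requests under them (the application file name is hypothetical):
# what a single container may request (see yarn-site.xml or the YARN configuration in CM)
$ grep -A1 -e 'yarn.scheduler.maximum-allocation-mb' -e 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml
# then ask Spark for executors that fit inside those limits (leave headroom for per-executor memory overhead)
$ spark-submit --master yarn-client --num-executors 2 --executor-cores 1 --executor-memory 1g my_app.py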