Member since: 04-22-2016
Posts: 931
Kudos Received: 46
Solutions: 26

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1385 | 10-11-2018 01:38 AM |
| | 1787 | 09-26-2018 02:24 AM |
| | 1712 | 06-29-2018 02:35 PM |
| | 2264 | 06-29-2018 02:34 PM |
| | 5127 | 06-20-2018 04:30 PM |
11-04-2020
11:36 PM
Thank you @Seaport for sharing the solution with the wider audience. In short, you followed Link [1] to resolve the protocol header issue, as was also done by the user in Link [2]. - Smarak
[1] https://github.com/python-happybase/happybase/issues/161
[2] https://community.cloudera.com/t5/Support-Questions/Sharing-how-to-solve-HUE-and-HBase-connect-problem-on-CDH-6/td-p/82030
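For anyone else hitting the same protocol header error, here is a minimal sketch of the HappyBase connection settings discussed in Link [1]. The host name and the particular transport/protocol values are illustrative assumptions; they have to match whatever your HBase Thrift server is actually configured with.

```python
import happybase

# The "protocol header" errors from Link [1] typically come from a mismatch
# between the client and the HBase Thrift server. HappyBase lets you set the
# transport and protocol explicitly; adjust these to match your server
# (hbase.regionserver.thrift.framed / hbase.regionserver.thrift.compact).
connection = happybase.Connection(
    host='thrift-server.example.com',  # hypothetical host
    port=9090,
    transport='framed',    # 'buffered' or 'framed'
    protocol='compact',    # 'binary' or 'compact'
)
connection.open()
print(connection.tables())
```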
07-22-2020
05:45 AM
Unfortunately, the given solution is not correct. In HDFS, a block that is open for write does reserve a full 128 MB, but as soon as the file is closed, the last block of the file is accounted for only by the actual length of the file. So a 1 KB file consumes 3 KB of disk space with replication factor 3, and a 129 MB file consumes 387 MB of disk space, again with replication factor 3. The behaviour seen in the output was most likely caused by other, non-DFS disk usage that reduced the space available to HDFS, and had nothing to do with the file sizes.

To demonstrate this with a 1 KB test file:

# hdfs dfs -df -h
Filesystem        Size    Used   Available  Use%
hdfs://<nn>:8020  27.1 T  120 K  27.1 T     0%

# fallocate -l 1024 test.txt
# hdfs dfs -put test.txt /tmp

# hdfs dfs -df -h
Filesystem        Size    Used     Available  Use%
hdfs://<nn>:8020  27.1 T  123.0 K  27.1 T     0%

I hope this helps to clarify and correct the answer.
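As a small worked example of the accounting described above (a sketch of the arithmetic only, not HDFS code; it assumes the default replication factor of 3 used in the post):

```python
# Once a file is closed, HDFS charges the last block only for the bytes
# actually written, so raw disk usage is simply file size x replication.
REPLICATION = 3

def closed_file_disk_usage(file_size_bytes, replication=REPLICATION):
    return file_size_bytes * replication

print(closed_file_disk_usage(1024))               # 1 KB file  -> 3,072 bytes (3 KB)
print(closed_file_disk_usage(129 * 1024 * 1024))  # 129 MB file -> 405,798,912 bytes (387 MB)
```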
06-02-2020
05:14 AM
Hi, would you please elaborate on why the Hive configuration is needed? Thanks
01-08-2020
01:46 AM
You have to do this on the node where the ResourceManager runs. On the other nodes, this directory will simply be empty, so this should be fine.
01-02-2020
07:08 AM
Hi,
I need to uninstall Ranger/Ranger KMS 1.2.0. How can I do it?
Thanks,
Neelagandan K
12-05-2019
11:06 PM
Do I need to restart the services? I added the property below, but no luck: dfs.namenode.acls.enabled=true
10-21-2019
03:25 PM
I could run the Map phase on the OrcFile, but Reduce fails with a ‘.Can’t input data OCR[]’ error. Do you have any official documentation that confirms that ORC files do not work with incremental lastmodified import?
05-22-2019
08:41 PM
@Sami Ahmad Once you have set up your ambari.repo correctly on Linux, you need to do the following:

# yum repolist
# yum install -y ambari-server
# yum install -y mysql-connector-java
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

That should pick up the correct version of the MySQL driver for your Ambari, if you do indeed intend to run on MySQL or MariaDB:

# yum install -y mariadb-server

To get the mysql-connector version, here are the steps:

# zipgrep 'Bundle-Version' mysql-connector-java.jar

Output:

META-INF/MANIFEST.MF:Bundle-Version: 5.1.25

HTH
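If zipgrep happens not to be installed, here is a small Python sketch that pulls the same Bundle-Version field out of the jar's manifest. The jar path is an assumption; pass e.g. /usr/share/java/mysql-connector-java.jar on your node.

```python
import sys
import zipfile

# Read the Bundle-Version entry from the jar's MANIFEST.MF, the same field
# the zipgrep command above prints.
def bundle_version(jar_path):
    with zipfile.ZipFile(jar_path) as jar:
        manifest = jar.read('META-INF/MANIFEST.MF').decode('utf-8', errors='replace')
    for line in manifest.splitlines():
        if line.startswith('Bundle-Version:'):
            return line.split(':', 1)[1].strip()
    return None

if __name__ == '__main__':
    print(bundle_version(sys.argv[1]))
```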
03-06-2019
11:15 PM
Welcome to Phoenix... where the cardinal rule is: if you are going to use Phoenix, then for that table, don't look at it or use it directly from the HBase API. What you are seeing is pretty normal.

I don't see your DDL, but I'll give you an example to compare against. Check out the DDL at https://github.com/apache/phoenix/blob/master/examples/WEB_STAT.sql and focus on the CORE column, which is a BIGINT, and the ACTIVE_VISITOR column, which is an INTEGER. Here's the data that gets loaded into it: https://github.com/apache/phoenix/blob/master/examples/WEB_STAT.csv.

Here's what it looks like via Phoenix... Here's what it looks like through the HBase shell (using the API)... Notice the CORE and ACTIVE_VISITOR values looking a lot like your example? Yep, welcome to Phoenix.

Remember, use Phoenix only for Phoenix tables and you'll be all right. 🙂 Good luck and happy Hadooping/HBasing!
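For anyone curious why the raw cells look that way, here is a rough Python sketch (not Phoenix source code; the value 35 is just illustrative) of the kind of sortable encoding Phoenix uses for a BIGINT column, which is what the HBase shell ends up printing as raw bytes:

```python
import struct

# Rough sketch: Phoenix stores a BIGINT roughly as an 8-byte big-endian value
# with the sign bit flipped so rows sort correctly as raw bytes. The HBase
# shell just prints those bytes, which is why CORE / ACTIVE_VISITOR look
# "garbled" there while Phoenix decodes them back to normal numbers.
def encode_bigint_like_phoenix(value):
    raw = struct.pack('>q', value)             # 8-byte big-endian signed long
    return bytes([raw[0] ^ 0x80]) + raw[1:]    # flip the sign bit of the first byte

print(encode_bigint_like_phoenix(35))
# b'\x80\x00\x00\x00\x00\x00\x00#'  -- roughly what the HBase shell shows,
# while a Phoenix query displays the value 35.
```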