Member since: 05-11-2016
Posts: 42
Kudos Received: 2
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1985 | 02-07-2018 06:22 AM |
| | 1606 | 11-13-2017 08:04 AM |
| | 1794 | 07-20-2017 03:01 AM |
11-08-2017 04:51 AM
Sorry for my stupid question. Can we use Knox for legacy Hadoop clients (edge-node Hadoop CLIs) that use RPC? As the page (http://pivotalhd.docs.pivotal.io/docs/knox-gateway-administration-guide.html) explains, I think we cannot use Knox for legacy Hadoop clients. I also think that if we want to control security between clients and the Hadoop clusters (i.e., use Knox as a security "proxy" between them), we have to eliminate the Hadoop clients on the edge node, because Knox only handles HTTP-based clients such as ODBC/JDBC. Is my understanding right?
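As background for why I'm asking: my understanding is that Knox proxies the cluster's REST/HTTP interfaces (e.g., WebHDFS, and JDBC/ODBC to HiveServer2), so the closest Knox equivalent of an edge-node `hdfs dfs -ls /` is a WebHDFS call over HTTPS. A minimal sketch of what I mean (the hostname, port, topology name "default", and credentials are placeholders):

```
# Directory listing through Knox's WebHDFS proxy (HTTPS/REST, not Hadoop RPC).
# Host, port, topology name, and credentials below are placeholders.
curl -ik -u myuser:mypassword \
  "https://knox.example.com:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS"
```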
Labels:
- Apache Knox
10-20-2017 05:23 PM
We have exactly the same problem as https://issues.apache.org/jira/browse/HDFS-11797 on HDP 2.6.1 (see https://issues.apache.org/jira/browse/HDFS-11797?focusedCommentId=16039577&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16039577):

```
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent number of corrupt replicas for blk_123456789_123456 blockMap has 0 but corrupt replicas map has 1
org.apache.hadoop.ipc.Server: IPC Server handler 34 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from xxx.xxx.xxx.xxx:xxxxx Call#91 Retry#0 java.lang.ArrayIndexOutOfBoundsException
```

Because of this problem, our Hive client fails, and the hdfs fsck command also fails for the affected HDFS file. I read through a series of related JIRA tickets:

- https://issues.apache.org/jira/browse/HDFS-9958
- https://issues.apache.org/jira/browse/HDFS-10788
- https://issues.apache.org/jira/browse/HDFS-11797
- https://issues.apache.org/jira/browse/HDFS-11445
- https://issues.apache.org/jira/browse/HDFS-11755

The second-to-last comment on HDFS-11755 (https://issues.apache.org/jira/browse/HDFS-11755?focusedCommentId=16200946&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16200946) says: "As discussed in HDFS-11445, a regression caused by HDFS-11445 is fixed by HDFS-11755. I'd like to backport HDFS-11755 into branch-2.7 as a result." The next comment (https://issues.apache.org/jira/browse/HDFS-11755?focusedCommentId=16201164&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16201164) says: "Filed HDFS-12641 to initiate the discussion." However, https://issues.apache.org/jira/browse/HDFS-12641 is not resolved. I'm not sure, but perhaps HDFS-12641 only applies to CDH?

I've also checked that HDFS-11445 is not included in HDP 2.6.1, but it is included in HDP 2.6.2:

- https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_release-notes/content/patch_hadoop.html
- https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_release-notes/content/patch_hadoop.html

So, can someone confirm that our current problem ("blockMap has 0 but corrupt replicas map has 1") is safely fixed by HDFS-11445 in HDP 2.6.2? We plan to upgrade from HDP 2.6.1 to HDP 2.6.2, but I worry that the upgrade could introduce a new problem, given that HDFS-11755 says "a regression caused by HDFS-11445 is fixed by HDFS-11755". I've confirmed that HDFS-11755 is included in neither HDP 2.6.1 nor HDP 2.6.2.
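For reference, this is how we reproduce the failure (the path is a placeholder for the affected directory):

```
# On our cluster, running fsck against the affected file/directory fails
# with the ArrayIndexOutOfBoundsException shown above instead of
# reporting the block and replica status.
hdfs fsck /path/to/affected/dir -files -blocks -locations
```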
Labels:
- Apache Hadoop
09-08-2017 04:13 AM
Sorry, there was a copy-and-paste mistake in my question. We actually use "-XX:CMSInitiatingOccupancyFraction=90" for the current NameNode with CMS.
09-07-2017 12:17 PM
Now I am trying to change the GC from CMS to G1GC. The current situation of the NameNode with CMS is:

- Physical memory size: 140 GB
- Heap: -Xmx100G -Xms100G
- Current actual heap usage: 70-80 GB (so usage is around 80%)
- -XX:CMSInitiatingOccupancyFraction: 90

The default value of "-XX:InitiatingHeapOccupancyPercent" for G1GC is 45. If I set "-XX:InitiatingHeapOccupancyPercent" to 45 for this NameNode, I think the current heap usage would always exceed the threshold... Could you advise how I should tune "-XX:InitiatingHeapOccupancyPercent" for this NameNode?
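To make the question concrete, this is the kind of hadoop-env.sh change I have in mind; the G1 values are illustrative guesses to be validated against GC logs, not settings I know to be correct:

```
# Sketch of candidate NameNode JVM options for the CMS-to-G1GC switch.
# With ~70-80 GB of live data in a 100 GB heap, the default IHOP of 45
# would keep the concurrent marking cycle running almost continuously,
# so the threshold is raised above the observed steady-state occupancy.
# All values are illustrative and must be verified against GC logs.
export HADOOP_NAMENODE_OPTS="-Xms100g -Xmx100g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=400 \
  -XX:InitiatingHeapOccupancyPercent=85 \
  -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/hadoop/namenode-gc.log \
  ${HADOOP_NAMENODE_OPTS}"
```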
Labels:
- Apache Hadoop
07-20-2017 03:01 AM
I think no response means you don't recommend this use case of mine. I decided to follow the install guide. Thanks!
07-18-2017 07:29 AM
Can someone answer my question? If my question is not clear, please let me know 🙂
07-13-2017 11:31 AM
Yes, it relates to my question. I'm asking about step 3: "On each node (specified by their fully qualified domain names), create the host and headless principals, and a keytab with each:" I think this part says we need to create a keytab file for each node (every node running a NodeManager) and put it in a local OS directory (/etc/security/keytabs) on each node in order to launch the LLAP daemons. Of course, I can follow this procedure, but if possible I want to avoid putting the keytab files in a local OS directory, for administrative reasons. As you may know, when we launch HBase with Slider on YARN, we can put the keytab files required to launch the HBase components (HBase Master, RegionServers) on HDFS instead of in a local OS directory. In that case, we don't need to put the keytab files in a local directory on each node; instead, we just put a keytab file containing principals for all nodes on HDFS and configure appConfig.json so that the HBase components use the keytab file on HDFS. So, I'm asking whether we can do the same to launch the LLAP daemons.
07-13-2017 05:25 AM
I know that when we launch an HBase cluster with Slider on YARN, we can put the keytab files on HDFS instead of in the local directory /etc/security/keytabs by adding the following to appConfig.json:

```
"site.hbase-site.hbase.regionserver.kerberos.principal": "${USER_NAME}/_HOST@EXAMPLE",
"site.hbase-site.hbase.regionserver.keytab.file": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.service.keytab",
"site.hbase-site.hbase.master.kerberos.principal": "${USER_NAME}/_HOST@EXAMPLE",
"site.hbase-site.hbase.master.keytab.file": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.service.keytab",
```

Can we do the same thing when launching the LLAP daemons with Slider?
Labels:
- Apache HBase
- Apache YARN
- HDFS
- Kerberos
- Security
07-11-2017 01:46 AM
@Rajkumar Singh thank you so much for your help!
... View more
07-10-2017 09:38 AM
Thank you for the quick and clear answer. I understand we have to enable Ranger for LLAP. By the way, can we enable Ranger only for LLAP (HiveServer2) as a first step? I'm asking because it's a little hard to add Ranger (plugins) to already-existing core Hadoop components such as HDFS (NameNode/DataNodes) and YARN (ResourceManager/NodeManagers). We plan to build a new server to run LLAP (Hive 2 HiveServer2 & LLAP with Slider & a new Metastore DB), so if we can enable Ranger only for the new LLAP setup for now, it would be much easier for us than enabling Ranger for all existing Hadoop components.
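For clarity, by "enable Ranger only for LLAP (HiveServer2)" I mean wiring up just the Ranger Hive plugin, roughly like the hive-site.xml sketch below, while leaving HDFS and YARN on their current authorization (this is an illustrative assumption on my part, not a tested configuration):

```
<!-- Sketch: enable only the Ranger Hive plugin for HiveServer2/LLAP. -->
<!-- Illustrative only; not a tested configuration. -->
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.security.authorization.manager</name>
  <value>org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory</value>
</property>
```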