Member since: 08-24-2016
Posts: 27
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5825 | 06-27-2018 04:59 PM
| 2981 | 12-02-2017 02:21 PM
12-09-2018
06:12 AM
@Matt Burgess, any advice please?
12-08-2018
03:29 PM
1 Kudo
Hi there, how can I continuously tail a file in HDFS through NiFi? The expected result should be similar to the output of the command below:
hdfs dfs -tail -f filename
Can I make use of the NiFi processor GetHDFSEvents?
Labels:
- Apache Hadoop
- Apache NiFi
11-27-2018
12:21 PM
Thanks @Prabhu M ... I have configured MetricsReportingTask for our process group, but where can I see the output of the metrics reporting task? Is it possible to make a REST call to get the output of MetricsReportingTask? Thanks a lot!
11-22-2018
03:43 AM
Hi there, I have a requirement to audit changes of NiFi processor statuses (started, running, or stopped) in a database, to track what time NiFi processors or a process group were stopped or started by an individual user. How can I implement this requirement in NiFi? Is there any way to get the status of a NiFi workflow, either at the processor level or at the process group level? Thanks!
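Not from the original post, but since the question asks about getting status at the processor or process group level, here is a minimal sketch of how that could look against NiFi's REST API. It assumes an unsecured NiFi instance on localhost:8080; the processor id placeholder is hypothetical and would come from the UI or the flow API.

```bash
# Hedged sketch, assuming an unsecured NiFi on localhost:8080.
# "root" is accepted as an alias for the root process group id.
curl -s "http://localhost:8080/nifi-api/flow/process-groups/root/status"

# Status for a single processor; <processor-id> is a placeholder taken from the UI.
curl -s "http://localhost:8080/nifi-api/flow/processors/<processor-id>/status"
```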
Labels:
- Apache NiFi
08-30-2018
08:23 AM
@Sindhu Thank you!
08-28-2018
12:26 PM
Hi there, I am trying to import data from an Informix non-transactional database into HDFS, but I am getting the error below. Our database is defined as non-transactional. Is there any way to import data from a non-transactional DB through Sqoop? I know this issue has already been raised here: https://issues.apache.org/jira/browse/SQOOP-2951
sqoop list-tables --connect jdbc:informix-sqli://XXX:1530/XXX:INFORMIXSERVER=XXX --driver com.informix.jdbc.IfxDriver --username XXX --P
ERROR manager.SqlManager: Error reading database metadata: java.sql.SQLException: No Transaction Isolation on non-logging db's
java.sql.SQLException: No Transaction Isolation on non-logging db's
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:373)
at com.informix.jdbc.IfxSqliConnect.setTransactionIsolation(IfxSqliConnect.java:2438)
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:910)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.listTables(SqlManager.java:539)
at org.apache.sqoop.tool.ListTablesTool.run(ListTablesTool.java:49)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
Could not retrieve tables list from server
Labels:
- Apache Sqoop
07-26-2018
07:06 PM
@murthy kurra, I am also experiencing the same issue. Could you please let me know how you resolved it? Thanks in advance.
06-27-2018
04:59 PM
@Geoffrey Shelton Okot, today I managed to format the namenode successfully. One of the journal nodes' metadata was not in sync with the other two journal nodes; that is why the problematic journal node was getting locked every time I tried to format the namenode. Copying the data from a good journal node's directory to the problematic journal node's directory allowed me to format the namenode. I also deleted the in_use.lock files from all 3 journal nodes before executing the hdfs namenode -format command. Thank you so much for your assistance on this.
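A minimal sketch of the steps described above (not the exact commands that were run); it assumes the journal directory /hadoop/hdfs/journal/HDPDRHA from the earlier error, and "goodjn" is a placeholder for the healthy journal node host.

```bash
# Sketch only; stop the problematic JournalNode before copying.
# 1. Copy the journal metadata from a healthy JournalNode:
rsync -av goodjn:/hadoop/hdfs/journal/HDPDRHA/ /hadoop/hdfs/journal/HDPDRHA/

# 2. Delete stale lock files on all three JournalNodes:
rm -f /hadoop/hdfs/journal/HDPDRHA/in_use.lock

# 3. With both NameNodes stopped and the JournalNodes started again, re-run the format:
hdfs namenode -format
```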
06-26-2018
08:26 PM
@Geoffrey Shelton Okot, I have 3 journal nodes and 3 ZooKeepers. I removed the lock files and ran hdfs namenode -format. I noticed that the in_use.lock file is recreated by the namenode -format command.
06-26-2018
07:08 PM
Thank you so much @Geoffrey Shelton Okot for your assistance, as always. I have restarted the journal nodes but it did not help. Would it be better to delete all the journal nodes and re-add them, or to completely remove the HDFS service and install the namenode and datanodes from scratch? Please advise if you have a better way to solve this issue.
06-26-2018
03:53 PM
Hi all, we were experiencing an issue with 4 of our data nodes, which were not sending block reports to the name node. To resolve it, we formatted all the data node directories on those nodes, decommissioned and recommissioned them, and also deleted all the data from HDFS. Now, when I try to format the name node, I get the error below.
18/06/26 16:32:05 WARN namenode.NameNode: Encountered exception during format:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
10.217.99.13:8485: Cannot lock storage /hadoop/hdfs/journal/HDPDRHA. The directory is already locked
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:743)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:551)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:502)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.analyzeAndRecoverStorage(JNStorage.java:227)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.<init>(JNStorage.java:76)
at org.apache.hadoop.hdfs.qjournal.server.Journal.<init>(Journal.java:143)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:99)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.isFormatted(JournalNodeRpcServer.java:120)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.isFormatted(QJournalProtocolServerSideTranslatorPB.java:103)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:965)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:179)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1185)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1631)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
18/06/26 16:32:05 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
10.217.99.13:8485: Cannot lock storage /hadoop/hdfs/journal/HDPDRHA. The directory is already locked
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:743)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:551)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:502)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.analyzeAndRecoverStorage(JNStorage.java:227)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.<init>(JNStorage.java:76)
at org.apache.hadoop.hdfs.qjournal.server.Journal.<init>(Journal.java:143)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:99)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.isFormatted(JournalNodeRpcServer.java:120)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.isFormatted(QJournalProtocolServerSideTranslatorPB.java:103)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:965)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:179)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1185)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1631)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
18/06/26 16:32:05 INFO util.ExitUtil: Exiting with status 1
18/06/26 16:32:05 INFO namenode.NameNode: SHUTDOWN_MSG:
I stopped both name nodes; only the journal nodes were online when executing the command below:
hadoop namenode -format
Any idea why I am getting the above error? Thank you so much for your assistance on this.
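Not from the original post: one hedged way to see what is holding the journal directory lock on the node named in the error (10.217.99.13) is to inspect the in_use.lock file directly; Hadoop's storage lock file normally records the JVM name (pid@host) of the holder.

```bash
# Hedged diagnostic sketch, run on the JournalNode host reported in the error.
ls -l /hadoop/hdfs/journal/HDPDRHA/in_use.lock
cat /hadoop/hdfs/journal/HDPDRHA/in_use.lock   # usually the pid@hostname of the process holding the lock
fuser -v /hadoop/hdfs/journal/HDPDRHA/in_use.lock
```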
Labels:
- Apache Hadoop
06-09-2018
09:56 PM
Thanks a lot @Shu. I am now getting the error below:
curl: (35) SSL connect error
Are you aware of this error?
06-09-2018
09:08 PM
Hi there, I am getting the error below in the Solr web UI on our HDP 2.6.0 cluster.
org.apache.solr.common.SolrException: Exception writing document id fb076ca8-9261-4715-990f-f563da0a02ed-1857434654 to the index; possible analysis error: number of documents in the index cannot exceed 2147483519
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:173)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory
So I tried to delete all Solr indexes using the command below:
curl http://LocalHost:8886/solr/update?commit=true -H "Content-type: text/xml"--data-binary '<delete><query>*:*</query></delete>'
but I am unable to delete the indexes and am getting the error below:
curl: (6) Couldn't resolve host '<delete><query>*:*<
Is there an alternative way to delete the Solr indexes completely, or can we define a TTL to delete the Solr indexes/data on a regular basis? Please assist. Thanks!
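The curl: (6) error above is consistent with the missing space before --data-binary, which makes curl treat the XML payload as a hostname. A hedged, corrected sketch (the <collection> placeholder stands for the core/collection holding the index; it is not named in the post):

```bash
# Corrected delete-all request (sketch; <collection> is a placeholder).
curl "http://LocalHost:8886/solr/<collection>/update?commit=true" \
     -H "Content-type: text/xml" \
     --data-binary '<delete><query>*:*</query></delete>'
```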
Labels:
- Apache Ranger
- Apache Solr
06-06-2018
01:00 PM
@Geoffrey Shelton Okot, thank you so much for getting back to me. We don't have rack awareness enabled on our DR cluster, as it has only 8 data nodes; we do have rack awareness on our production cluster. We can enable rack awareness later, but my first priority is to get the blocks back on the data nodes, since the faulty data nodes are not sending any block reports to the name node. Here is the current status as of today: I am still getting the EOFException error on the problematic data nodes; the other data nodes are not giving this error. I checked with our network team and they said all the data nodes are connected to the same NIC and there is no packet loss. The hardware team found some correctable memory errors but nothing major. Is there a maximum limit on the number of blocks a particular data node can retain? I mean, is it possible that the maximum block count has been exceeded for the problematic data nodes and, because of that, they stopped sending block reports to the name node due to some capacity/resource constraint? Please guide. Do I need to report this as a bug to the Apache foundation?
java.io.EOFException: End of File Exception between local host is: "DATANODE HOST"; destination host is: "NAMENODE HOST":8020; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy15.blockReport(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:211)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:374)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:645)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:785)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1119)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1014)
06-01-2018
06:03 AM
@Geoffrey Shelton Okot Yes, I have been through the post you mentioned. We had data node failure issues in the past and increasing the heap size fixed them, but I will fine-tune the settings. Below is the heap utilization for the data nodes (max heap 30 GB); the high heap usage data nodes (marked in red) are the problematic ones. hadoop-env:
SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"
export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=800m -XX:MaxNewSize=800m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=ERROR,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC "
export HADOOP_SECONDARYNAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\" ${HADOOP_SECONDARYNAMENODE_OPTS}"
You mentioned: "A GC allocation failure means that the garbage collector could not move objects from young gen to old gen fast enough because it does not have enough memory in old gen." Which parameter holds the value for old gen? We have 8 data nodes, 2*8 CPUs, 256 GB memory, and 12*6 TB = 72 TB of disk per node; 8 hosts of 72 TB each = 576 TB.
Our cluster: block size = 128 MB, replication = 3. Cluster capacity in MB: 8 * 72,000,000 MB = 576,000,000 MB (576 TB). Disk space needed per block: 128 MB per block * 3 = 384 MB of storage per block. Cluster capacity in blocks: 576,000,000 MB / 384 MB = 1,500,000 blocks. But Ambari is reporting 156,710872 blocks; am I missing something here? Awaiting your response. Thank you so much!
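A small sketch reproducing the arithmetic above with the figures quoted in the post. Note that the 1,500,000 figure assumes every block is a full 128 MB; HDFS blocks only occupy as much disk as the data they contain, so many small files can push the reported block count well past this estimate.

```bash
# Sketch using the numbers quoted above.
nodes=8; node_capacity_mb=72000000        # 72 TB per node
block_mb=128; replication=3
cluster_mb=$((nodes * node_capacity_mb))  # 576,000,000 MB raw
per_block_mb=$((block_mb * replication))  # 384 MB of raw storage per full block
echo $((cluster_mb / per_block_mb))       # 1500000 full-sized blocks
```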
05-31-2018
05:38 AM
Thank you! I really appreciate your time and effort. 1. The data node heap size is 30 GB. My worry is why only 3 nodes are giving the issue and not the others, if something is wrong with the configuration. What should the ideal heap size for data nodes be; do you have any idea? I did not find any formula to calculate the heap size for data nodes. 2. We are using NameNode HA. I suspect that an HA switch-over might have caused this problem. I have restarted all the components. What should I check for if the issue is caused by NameNode HA? The name node heap size is 75 GB, with 70% used.
05-30-2018
06:50 PM
Thank you so much @Geoffrey Shelton Okot for your assistance on this; I really appreciate it. 1. The MTU setting is the same on all our data nodes; I have verified it. 2. I have performed a TestDFSIO test; please see the attachment for the results. 3. I enabled GC debugging; my hadoop-env template looks like this:
export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=800m -XX:MaxNewSize=800m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=ERROR,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC"
After enabling GC debugging and restarting the name nodes and data nodes, the "Unable to extract JSON from JMX response" alarm disappeared, but now I am getting the error below on the problematic data node in hadoop-hdfs-datanode-.log:
2018-05-30 19:53:32,985 WARN datanode.DataNode (BPServiceActor.java:offerService(673)) - IOException in offerService
java.io.EOFException: End of File Exception between local host is: "datanodehost/"; destination host is: "Namenodehost":8020; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy15.blockReport(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:211)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:374)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:645)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:785)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1119)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1014)
2018-05-30 19:53:33,100 INFO datanode.DataNode (DataXceiver.java:writeBlock(669)) - Receiving BP-1033621575--1507285615620:blk_1461467777_387788610 src: /:42658 dest: /:50010
2018-05-30 19:53:33,878 INFO datanode.DataNode (DataXceiver.java:writeBlock(669)) - Receiving BP-1033621575--1507285615620:blk_1461467782_387788615 src: /:43782 dest: /:50010
2018-05-30 19:53:36,197 INFO datanode.DataNode (DataXceiver.java:writeBlock(669)) - Receiving BP-1033621575--1507285615620:blk_1368137451_294431710 src: /:52176 dest: /:50010
GC.log:
9239114K(31375360K), 0.0954324 secs] [Times: user=0.75 sys=0.00, real=0.10 secs]
2018-05-30T20:37:23.000+0200: 15180.545: [GC (Allocation Failure) 2018-05-30T20:37:23.000+0200: 15180.545: [ParNew: 733378K->81919K(737280K), 0.0994234 secs] 9892898K->9739137K(31375360K), 0.0996623 secs] [Times: user=0.78 sys=0.01, real=0.10 secs]
2018-05-30T20:37:29.962+0200: 15187.508: [GC (Allocation Failure) 2018-05-30T20:37:29.963+0200: 15187.508: [ParNew: 727808K->81689K(737280K), 0.1043798 secs] 10385026K->10379938K(31375360K), 0.1046235 secs] [Times: user=0.83 sys=0.00, real=0.11 secs]
2018-05-30T20:37:33.884+0200: 15191.430: [GC (Allocation Failure) 2018-05-30T20:37:33.885+0200: 15191.430: [ParNew: 733664K->81919K(737280K), 0.1201577 secs] 11031913K->10881691K(31375360K), 0.1203890 secs] [Times: user=0.95 sys=0.00, real=0.12 secs]
2018-05-30T20:37:41.029+0200: 15198.574: [GC (Allocation Failure) 2018-05-30T20:37:41.029+0200: 15198.575: [ParNew: 727734K->78326K(737280K), 0.1015139 secs] 11527506K->11522912K(31375360K), 0.1017500 secs] [Times: user=0.81 sys=0.00, real=0.10 secs]
2018-05-30T20:37:44.780+0200: 15202.325: [GC (Allocation Failure) 2018-05-30T20:37:44.780+0200: 15202.325: [ParNew: 730789K->81920K(737280K), 0.0937630 secs] 12175374K->12020024K(31375360K), 0.0939903 secs] [Times: user=0.74 sys=0.00, real=0.09 secs]
2018-05-30T20:37:51.818+0200: 15209.363: [GC (Allocation Failure) 2018-05-30T20:37:51.818+0200: 15209.363: [ParNew: 723037K->78409K(737280K), 0.1089323 secs] 12661141K->12638859K(31375360K), 0.1091735 secs] [Times: user=0.87 sys=0.01, real=0.11 secs]
2018-05-30T20:37:55.071+0200: 15212.616: [GC (Allocation Failure) 2018-05-30T20:37:55.071+0200: 15212.616: [ParNew: 733424K->81919K(737280K), 0.0912281 secs] 13293874K->13139143K(31375360K), 0.0914462 secs] [Times: user=0.72 sys=0.00, real=0.09 secs]
2018-05-30T20:38:02.582+0200: 15220.127: [GC (Allocation Failure) 2018-05-30T20:38:02.582+0200: 15220.127: [ParNew: 731000K->80436K(737280K), 0.1039197 secs] 13788224K->13781232K(31375360K), 0.1041447 secs] [Times: user=0.82 sys=0.00, real=0.10 secs]
2018-05-30T20:38:05.811+0200: 15223.356: [GC (Allocation Failure) 2018-05-30T20:38:05.811+0200: 15223.356: [ParNew: 734976K->81919K(737280K), 0.0843448 secs] 14435772K->14285826K(31375360K), 0.0845672 secs] [Times: user=0.67 sys=0.00, real=0.09 secs]
2018-05-30T20:38:13.249+0200: 15230.794: [GC (Allocation Failure) 2018-05-30T20:38:13.249+0200: 15230.794: [ParNew: 725770K->80833K(737280K), 0.0967994 secs] 14929677K->14924119K(31375360K), 0.0970191 secs] [Times: user=0.76 sys=0.00, real=0.10 secs]
2018-05-30T20:38:16.685+0200: 15234.231: [GC (Allocation Failure) 2018-05-30T20:38:16.686+0200: 15234.231: [ParNew: 735203K->81920K(737280K), 0.0984436 secs] 15578489K->15419615K(31375360K), 0.0986753 secs] [Times: user=0.78 sys=0.00, real=0.10 secs]
2018-05-30T20:38:24.385+0200: 15241.930: [GC (Allocation Failure) 2018-05-30T20:38:24.385+0200: 15241.930: [ParNew: 735008K->79750K(737280K), 0.0981608 secs] 16072704K->16066284K(31375360K), 0.0983850 secs] [Times: user=0.78 sys=0.00, real=0.09 secs]
2018-05-30T20:38:27.513+0200: 15245.058: [GC (Allocation Failure) 2018-05-30T20:38:27.513+0200: 15245.058: [ParNew: 731825K->81920K(737280K), 0.0928862 secs] 16718359K->16566812K(31375360K), 0.0931079 secs] [Times: user=0.73 sys=0.00, real=0.10 secs]
2018-05-30T20:38:35.118+0200: 15252.664: [GC (Allocation Failure) 2018-05-30T20:38:35.119+0200: 15252.664: [ParNew: 728589K->81823K(737280K), 0.1155139 secs] 17213482K->17208899K(31375360K), 0.1157287 secs] [Times: user=0.91 sys=0.01, real=0.11 secs]
2018-05-30T20:38:39.004+0200: 15256.549: [GC (Allocation Failure) 2018-05-30T20:38:39.004+0200: 15256.549: [ParNew: 735843K->81920K(737280K), 0.0939004 secs] 17862919K->17682067K(31375360K), 0.0941023 secs] [Times: user=0.74 sys=0.00, real=0.10 secs]
2018-05-30T20:38:46.888+0200: 15264.433: [GC (Allocation Failure) 2018-05-30T20:38:46.888+0200: 15264.433: [ParNew: 730708K->78583K(737280K), 0.0952740 secs] 18330855K->18343737K(31375360K), 0.0954785 secs] [Times: user=0.75 sys=0.01, real=0.09 secs]
The issue still persists: 3 out of 8 data nodes are reporting a very low number of blocks. Please assist.
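Not part of the original post: a hedged one-liner for summarising the ParNew pause times in the gc.log excerpts above. The log path is taken from the -Xloggc setting in hadoop-env; assuming $USER resolves to hdfs for the DataNode is an assumption here.

```bash
# Sketch: count young-GC pauses and report average/max "real" pause time.
grep 'Allocation Failure' /var/log/hadoop/hdfs/gc.log-* \
  | sed -n 's/.*real=\([0-9.]*\) secs.*/\1/p' \
  | awk '{sum+=$1; n++; if ($1>max) max=$1} END {if (n) printf "pauses=%d avg=%.3fs max=%.3fs\n", n, sum/n, max}'
```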
05-29-2018
07:39 PM
I am facing a strange issue with 3 out of 8 data nodes in our HDP 2.6.0 cluster. These 3 data nodes are not reporting the correct number of blocks and are also not sending block reports to the name node at regular intervals. Ambari is reporting: [Alert][datanode_storage] Unable to extract JSON from JMX response
Any suggestion on what is wrong with our cluster? Thanks in advance for your assistance.
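Not from the original post: the datanode_storage alert is computed from the DataNode's JMX JSON endpoint, so one hedged first check is whether that endpoint returns valid JSON on the affected nodes. This assumes the default HDP 2.x DataNode HTTP port 50075; <datanode-host> is a placeholder for an affected node.

```bash
# Sketch: fetch the DataNode JMX JSON that the Ambari alert parses.
curl -s "http://<datanode-host>:50075/jmx" | head -n 20
```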
Labels:
- Apache Hadoop
- Apache YARN
01-14-2018
07:30 PM
@Dinesh Das, I am also experiencing the same issue. Could you please let me know how you resolved it? Thanks a lot!
12-02-2017
02:21 PM
1 Kudo
@Manish Gupta Thank you so much for your response. I managed to resolve this issue by entering the username in upper case, and I was then able to access all the Hive tables based on the policies defined in Ranger. It's strange that when I typed the username in lowercase, AD authentication was successful but permission to access the tables was denied. I have attached screenshots of both scenarios. Thanks. hive-ad-issue.png
11-29-2017
08:33 PM
Hi, when I try to configure HiveServer2 authentication with AD, I get the error below in Beeline.
Beeline version 1.2.1000.2.6.0.3-8 by Apache Hive
beeline> !connect jdbc:hive2://local host:10000
Connecting to jdbc:hive2://local host:10000
Enter username for jdbc:hive2://localhost:10000: XXXX
Enter password for jdbc:hive2://feabigrpd01:10000: **********
Connected to: Apache Hive (version 1.2.1000.2.6.0.3-8)
Driver: Hive JDBC (version 1.2.1000.2.6.0.3-8)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> show databases;
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user XXXX does not have [USE] privilege on [null] (state=42000,code=40000)
1. I have configured the properties below:
hive.server2.authentication=LDAP
hive.server2.authentication.ldap.url=ldap://XXX.co.XX:389
hive.server2.authentication.ldap.Domain=dc=XXX,dc=co,dc=XX
2. HiveServer2 log error (/var/log/hive):
ERROR [HiveServer2-Handler-Pool: Thread-71]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: Error validating the login [Caused by javax.security.sasl.AuthenticationException: LDAP Authentication failed for user [Caused by javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C09042F, comment: AcceptSecurityContext error, data 52e, v2580^@]]]
at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:109)
Caused by: javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C09042F, comment: AcceptSecurityContext error, data 52e, v2580^@]
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3135)
3. Ambari Hive View authentication error: service checks completed (HDFS test, HiveServer test, ATS test, User Home Directory test); issues detected: Hive authentication failed.
4. I have a Ranger policy in place which gives user XXX permission to all directories in HDFS and select access to all tables.
Please assist me in resolving this issue. Thanks in advance.
Labels:
- Apache Hive
- Apache Ranger
09-07-2017
10:55 AM
Thanks a lot @Pravin Bhagade for responding to my query. I noticed that I missed the first line of the error while pasting it on the portal: we are getting "AD Authentication Failed: org.springframework.security.authentication.BadCredentialsException: Bad credentials". We have checked the credentials for the bind user and they are absolutely correct. Apparently, the bind user is not able to reach the AD domain server through Ranger; we concluded this based on the response from the AD team, who told us there are no logs for the bind user with bad credentials. If a particular user enters bad credentials and reaches the AD server, the AD team gets logs for that user. We are using HDP-2.4.3.0. Please shed some light on this issue. Thanks a lot!
09-05-2017
01:37 PM
Hi, I am getting the user sync error below while integrating AD through Ambari. Please assist in resolving the issue. Thanks in advance.
AD Authentication Failed: org.springframework.security.authentication.BadCredentialsException: Bad credentials
at org.springframework.security.ldap.authentication.LdapAuthenticationProvider.doAuthentication(LdapAuthenticationProvider.java:185)
at org.springframework.security.ldap.authentication.AbstractLdapAuthenticationProvider.authenticate(AbstractLdapAuthenticationProvider.java:61)
at org.apache.ranger.security.handler.RangerAuthenticationProvider.getADBindAuthentication(RangerAuthenticationProvider.java:405
Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO UnixAuthenticationService [main] - Enabling Protocol: [SSLv2Hello]
INFO UnixAuthenticationService [main] - Enabling Protocol: [TLSv1]
INFO UnixAuthenticationService [main] - Enabling Protocol: [TLSv1.1]
INFO UnixAuthenticationService [main] - Enabling Protocol: [TLSv1.2]
ERROR UserGroupSync [UnixUserSyncThread] - Failed to initialize UserGroup source/sink. Will retry after 3600000 milliseconds. Error details:
com.sun.jersey.api.client.UniformInterfaceException: GET Unauthorized
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:686)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:507)
Labels:
- Apache Ranger