Member since: 08-24-2016
Posts: 27
Kudos Received: 2
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 10273 | 06-27-2018 04:59 PM |
| | 4578 | 12-02-2017 02:21 PM |
08-30-2018 08:23 AM
@Sindhu Thank you!
08-28-2018 12:26 PM
Hi there, I am trying to import data from an Informix non-transactional database into HDFS, but I am getting the error below. Our database is defined as non-transactional. Is there any way to import data from a non-transactional DB through Sqoop? I know this issue has already been raised here: https://issues.apache.org/jira/browse/SQOOP-2951
sqoop list-tables --connect jdbc:informix-sqli://XXX:1530/XXX:INFORMIXSERVER=XXX --driver com.informix.jdbc.IfxDriver --username XXX -P
ERROR manager.SqlManager: Error reading database metadata: java.sql.SQLException: No Transaction Isolation on non-logging db's
java.sql.SQLException: No Transaction Isolation on non-logging db's
at com.informix.util.IfxErrMsg.getSQLException(IfxErrMsg.java:373)
at com.informix.jdbc.IfxSqliConnect.setTransactionIsolation(IfxSqliConnect.java:2438)
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:910)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.listTables(SqlManager.java:539)
at org.apache.sqoop.tool.ListTablesTool.run(ListTablesTool.java:49)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
Could not retrieve tables list from server
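For context on the failure: the exception is thrown before any data is read, because Sqoop's metadata connection calls setTransactionIsolation(), which Informix rejects outright on non-logging databases; this is exactly what SQOOP-2951 tracks, and there is no clean Sqoop-side switch until it is resolved. One server-side workaround, sketched below on the assumption that changing the database logging mode (and taking the level-0 backup it requires) is acceptable, is to enable buffered logging so isolation levels become legal:
# Run on the Informix server as user informix; dbname is a placeholder.
# Switching the database from non-logging to buffered logging makes
# setTransactionIsolation() legal, which is what Sqoop's metadata calls need.
ondblog buf dbname      # request buffered logging for the database
ontape -s -L 0          # the logging-mode change takes effect after a level-0 archive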
Labels:
- Apache Sqoop
06-27-2018 04:59 PM
@Geoffrey Shelton Okot, today I managed to format the NameNode successfully. One journal node's metadata was not in sync with the other two; that is why the problematic journal node was getting locked every time I tried to format the NameNode. Copying the data from a good journal node's directory to the problematic journal node's directory allowed me to format the NameNode. I also deleted the in_use.lock files from all 3 journal nodes before executing hdfs namenode -format. Thank you so much for your assistance on this.
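For anyone hitting the same lock error, the recovery described above looks roughly like the sketch below; /hadoop/hdfs/journal/HDPDRHA is the journal directory from the error message earlier in this thread, and good-jn is a placeholder for the healthy JournalNode host:
# Run with the JournalNodes stopped and both NameNodes down.
# 1. On the out-of-sync JournalNode, replace its copy with a healthy one:
rsync -a good-jn:/hadoop/hdfs/journal/HDPDRHA/ /hadoop/hdfs/journal/HDPDRHA/
# 2. On all three JournalNodes, remove the stale lock file:
rm -f /hadoop/hdfs/journal/HDPDRHA/in_use.lock
# 3. Start the JournalNodes again, then format from the NameNode host:
hdfs namenode -format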
06-26-2018 08:26 PM
@Geoffrey Shelton Okot, I have 3 journal nodes and 3 ZooKeepers. I removed the lock files and ran hdfs namenode -format. I noticed that the in_use.lock file is re-created by the namenode -format command.
06-26-2018 07:08 PM
Thank you so much @Geoffrey Shelton Okot for your assistance, as always. I restarted the journal nodes, but it did not help. Would it be better to delete all the journal nodes and re-add them, or to completely remove the HDFS service and install the NameNode and DataNodes from scratch? Please advise if you have a better way to solve this issue.
06-26-2018 03:53 PM
Hi all, we were experiencing an issue where 4 of our DataNodes were not sending block reports to the NameNode. To resolve it, we formatted all the DataNode data directories on those nodes, decommissioned and recommissioned the DataNodes, and deleted all the data from HDFS. Now, when I try to format the NameNode, I get the error below.
18/06/26 16:32:05 WARN namenode.NameNode: Encountered exception during format:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
10.217.99.13:8485: Cannot lock storage /hadoop/hdfs/journal/HDPDRHA. The directory is already locked
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:743)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:551)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:502)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.analyzeAndRecoverStorage(JNStorage.java:227)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.<init>(JNStorage.java:76)
at org.apache.hadoop.hdfs.qjournal.server.Journal.<init>(Journal.java:143)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:99)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.isFormatted(JournalNodeRpcServer.java:120)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.isFormatted(QJournalProtocolServerSideTranslatorPB.java:103)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:965)
at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:179)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1185)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1631)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
18/06/26 16:32:05 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
10.217.99.13:8485: Cannot lock storage /hadoop/hdfs/journal/HDPDRHA. The directory is already locked
        ... (same stack trace as above)
18/06/26 16:32:05 INFO util.ExitUtil: Exiting with status 1
18/06/26 16:32:05 INFO namenode.NameNode: SHUTDOWN_MSG:
I stopped both NameNodes; only the journal nodes were online when I executed the command below.
hadoop namenode -format
Any idea why I am getting the above error? Thank you so much for your assistance on this.
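Since the quorum error names a single JournalNode (10.217.99.13), a useful first check is whether a live process on that host still holds the lock. A minimal sketch:
# On 10.217.99.13, the JournalNode named in the error.
# in_use.lock normally records the locking JVM as pid@host;
# fuser lists any process that still has the file open.
cat /hadoop/hdfs/journal/HDPDRHA/in_use.lock
fuser -v /hadoop/hdfs/journal/HDPDRHA/in_use.lock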
Labels:
- Apache Hadoop
06-09-2018 09:56 PM
Thanks a lot @Shu. I am now getting the error below:
curl: (35) SSL connect error
Are you aware of this error?
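curl exit code 35 is a TLS handshake failure rather than a certificate-trust failure, so the usual suspects are an https:// URL against a plain-HTTP port (or the reverse) or a TLS protocol-version mismatch. A few standard checks, with placeholder host and port:
# -v prints the handshake, showing where it fails:
curl -v https://host:8886/solr/
# Force a specific TLS version in case of a protocol mismatch:
curl --tlsv1.2 https://host:8886/solr/
# Skip certificate verification for testing only; prefer --cacert ca.pem for real use:
curl -k https://host:8886/solr/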
06-09-2018 09:08 PM
Hi there, I am getting the error below in the Solr web UI on our HDP 2.6.0 cluster.
org.apache.solr.common.SolrException: Exception writing document id fb076ca8-9261-4715-990f-f563da0a02ed-1857434654 to the index; possible analysis error: number of documents in the index cannot exceed 2147483519
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:173)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory
So I tried to delete all the Solr indexes using the command below:
curl http://LocalHost:8886/solr/update?commit=true -H "Content-type: text/xml"--data-binary '<delete><query>*:*</query></delete>'
but I am unable to delete the indexes and get the error below:
curl: (6) Couldn't resolve host '<delete><query>*:*<
Is there an alternative way to delete the Solr indexes completely, or can we define a TTL to delete Solr indexes/data on a regular schedule? Please assist. Thanks!
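The curl (6) error is a shell-quoting problem rather than a Solr problem: there is no space between "text/xml" and --data-binary, so the XML payload ends up being parsed as a URL (the "host" in the error message is the payload truncated at its first slash). A corrected sketch, where host and port are taken from the post and the collection name is a hypothetical placeholder:
# Note the space before --data-binary and the quoted URL (the ? stays shell-safe inside quotes).
curl "http://localhost:8886/solr/<collection>/update?commit=true" \
     -H "Content-type: text/xml" \
     --data-binary '<delete><query>*:*</query></delete>'
On the TTL question: Solr can expire documents automatically via DocExpirationUpdateProcessorFactory, and Ranger audit collections are typically configured with it, so a TTL is usually a cleaner long-term fix than periodic bulk deletes.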
Labels:
- Apache Ranger
- Apache Solr
06-06-2018 01:00 PM
@Geoffrey Shelton Okot, thank you so much for getting back to me. We don't have rack awareness enabled on our DR cluster, since it is only an 8-DataNode cluster; we do have rack awareness on our production cluster. We can enable it later, but my first priority is to get the blocks back, because the faulty DataNodes are not sending any block reports to the NameNode.
Current status as of today: I am still getting the EOFException below on the problematic DataNodes; the other DataNodes are not showing this error. I checked with our network team, and they said all the DataNodes are connected to the same NIC with no packet loss. The hardware team found some correctable memory errors, but nothing major.
Is there a maximum number of blocks a particular DataNode can hold? That is, could the problematic DataNodes have exceeded some maximum block count and stopped sending block reports to the NameNode because of a capacity/resource constraint? Please guide. Do I need to report this as a bug to the Apache Foundation?
java.io.EOFException: End of File Exception between local host is: "DATANODE HOST"; destination host is: "NAMENODE HOST":8020; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy15.blockReport(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:211)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:374)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:645)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:785)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1119)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1014)
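On the "maximum number of blocks" question: there is no hard retention limit, but there is a known failure mode that matches this signature. When a DataNode holds a very large number of blocks, its full block report can exceed the NameNode's maximum RPC message size; the NameNode then drops the connection and the DataNode logs exactly this EOFException. A diagnostic sketch, where the log path and port are illustrative but the property names are standard Hadoop configuration:
# 1. On the active NameNode, look for oversized-RPC warnings paired with the DN's EOFException:
grep "longer than maximum configured RPC length" /var/log/hadoop/hdfs/*.log
# 2. Per-DataNode block counts are reported in the NameNode JMX (LiveNodes includes numBlocks):
curl -s "http://namenode-host:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"
# 3. If counts are in the tens of millions, the usual mitigations are:
#    core-site.xml: raise ipc.maximum.data.length (default 67108864 bytes)
#    hdfs-site.xml: lower dfs.blockreport.split.threshold so reports are sent per volume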
06-01-2018 06:03 AM
@Geoffrey Shelton Okot Yes, I have been through the post you mentioned. We had DataNode failures in the past and increasing the heap size fixed them, but I will fine-tune the settings. Below is the heap utilization for the DataNodes (max heap 30 GB); the high-heap-usage DataNodes (marked in red) are the problematic ones.
hadoop-env:
SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"
export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=800m -XX:MaxNewSize=800m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=ERROR,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC "
export HADOOP_SECONDARYNAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\" ${HADOOP_SECONDARYNAMENODE_OPTS}"
You mentioned: "A GC allocation failure means that the garbage collector could not move objects from young gen to old gen fast enough because it does not have enough memory in old gen." Which parameter holds the value for the old gen? We have 8 DataNodes, each with 2 x 8 CPUs, 256 GB of memory, and 12 x 6 TB = 72 TB of disk; 8 hosts of 72 TB each = 576 TB.
Our cluster: block size = 128 MB, replication = 3. Cluster capacity in MB: 8 x 72,000,000 MB = 576,000,000 MB (576 TB). Disk space needed per block: 128 MB per block x 3 = 384 MB of storage per block. Cluster capacity in blocks: 576,000,000 MB / 384 MB = 1,500,000 blocks. But Ambari is reporting 156,710,872 blocks; am I missing something here? Awaiting your response. Thank you so much!
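On the old-gen question above: with CMS there is no single flag that sets the old generation; it is simply the heap left over after the young generation, so it follows from the flags already posted (dtnode_heapsize = 30 GB for -Xms/-Xmx, -XX:MaxNewSize=800m). A back-of-the-envelope sketch:
# Old gen (tenured) under CMS = total heap - young gen; values from the posted hadoop-env.
XMX_MB=$((30 * 1024))   # -Xmx30g (dtnode_heapsize)
NEW_MB=800              # -XX:MaxNewSize=800m
echo "old gen ~ $((XMX_MB - NEW_MB)) MB"   # prints: old gen ~ 29920 MB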