Member since: 01-25-2017
Posts: 119
Kudos Received: 7
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 13698 | 04-11-2017 12:36 PM
 | 4130 | 01-18-2017 10:36 AM
03-12-2018
07:49 PM
@Benjamin Leonhardi If I use SQL authentication with this method, how should I assign passwords to users, e.g. for use in a DB connection? Will Hive check OS-level user passwords? If so, should I also set a password for the 'hive' user? Does that affect other operations?
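Purely as illustration, a minimal sketch of where such a password would go in a JDBC connection; the host, port, user, and password below are placeholders, not values from this thread, and whether the password is checked against OS accounts depends on how HiveServer2 authentication is configured:

```
# Hypothetical example: connecting to HiveServer2 with beeline, passing a
# user name and password. Whether these are validated against OS-level
# accounts depends on the hive.server2.authentication setting.
beeline -u "jdbc:hive2://hiveserver-host:10000/default" \
        -n myuser \
        -p mypassword
```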
12-05-2017
02:24 PM
@Geoffrey Shelton Okot Unfortunately, I couldn't start the HDFS services this way either. Thank you very much though.
12-05-2017
08:09 AM
Hi @Geoffrey Shelton Okot, I had tried hadoop namenode -format before, but I tried again and received the same exception:
17/12/05 09:46:25 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not format one or more JournalNodes. 2 exceptions thrown:
10.0.109.11:8485: Directory /hadoop/hdfs/journal/testnamespace is in an inconsistent state: Can't format the storage directory because the current directory is not empty.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:482)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:558)
at org.apache.hadoop.hdfs.qjournal.server.JNStorage.format(JNStorage.java:185)
at org.apache.hadoop.hdfs.qjournal.server.Journal.format(Journal.java:217)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.format(JournalNodeRpcServer.java:145)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.format(QJournalProtocolServerSideTranslatorPB.java:145)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25419)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
This time I additionally deleted the contents of /hadoop/hdfs/journal/testnamespace, but nothing changed; the command ended with the same exception.
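For what it's worth, a sketch of the cleanup this error usually calls for, assuming the JournalNode storage path from the message above; the same directory has to be cleared on every JournalNode host, not just one:

```
# Hypothetical recovery sketch for the QuorumException above: at least one
# JournalNode still holds old data for the namespace. On EVERY JournalNode
# host (e.g. 10.0.109.11 and 10.0.109.12), stop the JournalNode, clear the
# directory named in the error, and start it again.
# WARNING: this discards the shared edit log for the namespace.
rm -rf /hadoop/hdfs/journal/testnamespace/*

# With all JournalNodes back up and the NameNodes stopped, reformat once
# from one NameNode host, as the hdfs user:
hdfs namenode -format
```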
12-04-2017
01:40 PM
So... what are the steps for a reinstall? Is there any way to start over from just the HDP installation, keeping the OS-level prerequisite changes and the Ambari installation? Does the command ambari-server reset work for that?
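As a sketch of what that reset path would look like, assuming ambari-server reset behaves as documented (it wipes the Ambari database but leaves OS-level prerequisites and installed packages on the hosts alone):

```
# Hypothetical reset sequence: drops the Ambari database and returns the
# server to a pre-cluster state; OS-level changes on the hosts are untouched.
ambari-server stop
ambari-server reset      # prompts for confirmation, wipes the Ambari DB
ambari-server setup      # re-run initial setup if needed
ambari-server start
# After this the cluster must be re-created through the install wizard.
```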
12-04-2017
10:55 AM
If the recovery steps will take more effort than a reinstall and/or leave me with an unstable cluster, then it's better to reinstall. From your answer, I take it that's the kind of cost you mean, right?
12-01-2017
02:11 PM
Hello, after a mass disk operation on our test environment, we lost all the data in the /data dir, which was assigned as the storage directory for ZooKeeper, Hadoop and Falcon (the list we know of so far). Since it was our test cluster the data is not important, but I don't want to reinstall all the components; I also want to learn how to recover a running cluster from this state. In the /data dir we only have folders but no files. After struggling a little with ZKFailoverController, I was able to start it with the -formatZK flag. Now, however, I am unable to start the NameNode(s), getting the exception below:
10.0.109.12:8485: Directory /hadoop/hdfs/journal/testnamespace is in an inconsistent state: Can't format the storage directory because the current directory is not empty.
I have tried:
- removing the lost+found folder on the mount root,
- changing ownership of all folders under /data/hadoop/hdfs to hdfs:hadoop,
- changing permissions of all folders under /data/hadoop/hdfs to 777.
PS: I updated the ownership of the path /hadoop/hdfs/, which contains the journal folder, and it let me move one step forward:
17/12/01 14:20:26 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot remove current directory: /data/hadoop/hdfs/namenode/current
PS: I removed the contents of /data/hadoop/hdfs/namenode/current and now it keeps checking port 8485 of all journal quorum nodes:
17/12/01 16:04:35 INFO ipc.Client: Retrying connect to server: bigdata2/10.0.109.11:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
and keeps printing the line below in the hadoop-hdfs-zkfc-bigdata2.out file:
Proceed formatting /hadoop-ha/testnamespace? (Y or N) Invalid input:
Do you have any suggestions? Or should I give up?
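Putting the pieces of this thread together, a sketch of the usual HA recovery order, assuming standard HDFS HA commands; the namespace name and host names are the ones from the posts above:

```
# Hypothetical end-to-end recovery sketch after losing /data. Run as the
# hdfs user unless noted otherwise.

# 1. Re-create the HA state in ZooKeeper (answers the "Proceed formatting
#    /hadoop-ha/testnamespace?" prompt interactively):
hdfs zkfc -formatZK

# 2. Make sure the JournalNodes are running and their storage directories
#    (/hadoop/hdfs/journal/testnamespace) are empty, then format the
#    first NameNode:
hdfs namenode -format

# 3. Start the first NameNode, then sync the second one from it instead of
#    formatting it separately:
hdfs namenode -bootstrapStandby   # run on the standby NameNode host
```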
Labels:
Apache Hadoop
11-28-2017
01:04 PM
Hello @Aditya Sirna, thank you, it worked. The cluster is not Kerberized. 1) The value was: "org.apache.zeppelin.notebook.repo.GitNotebookRepo,org.apache.zeppelin.notebook.repo.VFSNotebookRepo". I also added FileSystemNotebookRepo, and after a restart it updated the directory with the new notebook. Could this requirement be missing from the upgrade documentation?
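For reference, a sketch of how the resulting setting can be double-checked, assuming the stock config path /etc/zeppelin/conf/zeppelin-site.xml; under Ambari the same property is edited on the Zeppelin configs page instead:

```
# Hypothetical check: the zeppelin.notebook.storage property should list
# FileSystemNotebookRepo for notes to be written to HDFS, e.g.:
#   org.apache.zeppelin.notebook.repo.GitNotebookRepo,org.apache.zeppelin.notebook.repo.FileSystemNotebookRepo
grep -A 2 "zeppelin.notebook.storage" /etc/zeppelin/conf/zeppelin-site.xml
```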
11-28-2017
11:52 AM
After the HDP 2.6.3 upgrade, I expected Zeppelin to start updating its HDFS directory with newly created notes, since that is what I understood from the upgrade doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-upgrade/content/upgrading_HDP_prerequisites.html However, I don't see the modification dates of files or folders in HDFS being updated, nor anything new being created. I can't see any HDFS-related errors in the Zeppelin log, and the HDFS log doesn't contain any new lines for Zeppelin either. What I remember is that I forgot to copy the notebook folder in HDFS during the prerequisites step, but I did it after the upgrade was completed and then restarted Zeppelin. Do you have any idea how I can make it write to HDFS? Thanks in advance...
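One quick way to watch for the expected writes, assuming Zeppelin's notebook directory in HDFS is /user/zeppelin/notebook (the actual path may differ per setup):

```
# Hypothetical check: list the Zeppelin notebook directory in HDFS and watch
# whether modification times change after creating a note in the UI.
hdfs dfs -ls /user/zeppelin/notebook
```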
Labels:
Apache Zeppelin
11-21-2017
11:52 AM
1 Kudo
If it is only one DataNode failing but not all of them, it may be failing due to a volume failure. You can check the NameNode web UI to see whether you are facing any volume failures: http://<active-namenode-host>:50070/dfshealth.html#tab-datanode-volume-failures or, for plain JMX data, http://<active-namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState (it is not a physical disk error every time; it can also occur after logical write issues).
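A command-line version of the same check, assuming curl is available; VolumeFailuresTotal is the counter name I'd expect FSNamesystemState to expose, so treat it as an assumption:

```
# Hypothetical command-line version of the JMX check described above:
# query the active NameNode and filter for the volume-failure counter.
curl -s "http://<active-namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState" \
  | grep -i "VolumeFailures"
```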
11-14-2017
02:29 PM
Right, @Sedat Kestepe