- Member since: 12-27-2016
- Posts: 156
- Kudos Received: 2
- Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1052 | 07-02-2018 11:52 AM |
05-20-2018 05:37 AM
@Geoffrey, thanks a lot for your time on this. I could reproduce your steps on my test server, but unfortunately not on my pre-prod cluster: I see the same privileges on / as you do, yet I am still unable to create a znode 😞 Are there any logs or configurations you would like me to cross-check?
05-19-2018 05:02 PM
By default / has world-wide permissions, but I am unable to create a znode under /. By the way, it is a Kerberized cluster. I also tried the solution provided by @Harald Berghoff, but unfortunately in my case it did not work. My problem in one line: / in ZooKeeper has world-wide permissions, yet I am unable to create a znode under /. @Geoffrey, can you help me with this?
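A minimal sketch of how one might sanity-check the client side on a Kerberized cluster (the keytab path and principal below are placeholders, not taken from this thread): zkCli.sh only presents SASL credentials when a valid Kerberos ticket and JAAS client configuration are in place, so an unauthenticated session is evaluated only against the world ACL of the znode it touches.

```sh
# Hypothetical principal and keytab -- substitute your own.
kinit -kt /etc/security/keytabs/myuser.keytab myuser@EXAMPLE.COM
klist                          # confirm a valid ticket exists
zkCli.sh -server localhost:2181
# inside the zkCli shell:
#   create /test "data"        # should succeed if the session is authorized
#   getAcl /test               # inspect the ACL the new znode received
```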
05-16-2018 11:35 AM
Hi, / has full permissions, but I could not create a znode. I tried working through the steps mentioned in the web link and implemented them, but it did not work. Any help?
05-16-2018 02:09 AM
Hi, my Storm job is failing with the exception below:

```
:cause KeeperErrorCode = NoAuth for /credentials/rrtd-topology-1-1526434979
:via
[{:type java.lang.RuntimeException
  :message org.apache.storm.shade.org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /credentials/rrtd-topology-1-1526434979
  :at [org.apache.storm.util$wrap_in_runtime invoke util.clj 54]}
 {:type org.apache.storm.shade.org.apache.zookeeper.KeeperException$NoAuthException
  :message KeeperErrorCode = NoAuth for /credentials/rrtd-topology-1-1526434979
  :at [org.apache.storm.shade.org.apache.zookeeper.KeeperException create KeeperException.java 113]}]
:trace
[[org.apache.storm.shade.org.apache.zookeeper.KeeperException create KeeperException.java 113]
 [org.apache.storm.shade.org.apache.zookeeper.KeeperException create KeeperException.java 51]
 [org.apache.storm.shade.org.apache.zookeeper.ZooKeeper create ZooKeeper.java 783]
 [org.apache.storm.shade.org.apache.curator.framework.imps.CreateBuilderImpl$11 call CreateB
```

What did I do? I tried to create the znode /credentials/rrtd-topology-1-1526434979, but it was not created; in fact I cannot create any znode in ZooKeeper, and `create /test` fails as well. Below is the ACL for / in ZooKeeper:

```
[zk: localhost:2181(CONNECTED) 26] getAcl /
'world,'anyone
: cdrwa
```

Can someone please help me?
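One hedged diagnostic worth noting: ZooKeeper authorizes a create against the ACL of the parent znode, so a permissive ACL on / does not by itself permit creating children deeper in the tree. Assuming Storm's storm.zookeeper.root is /storm (the default; the paths below are an assumption, not from this thread), the parents to inspect would be:

```sh
# Check the parents of the failing path, not just / .
# (paths assume storm.zookeeper.root=/storm)
zkCli.sh -server localhost:2181
#   getAcl /storm
#   getAcl /storm/credentials
```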
Labels:
- Apache Storm
04-29-2018 05:41 AM
@Geoffrey, thanks a lot for your time. Yes, you are correct: I am looking for a tool other than distcp. Thanks again.
04-27-2018 10:17 AM
@Geoffrey Shelton Okot, thanks for your time. I agree with your point about changing the block size at the cluster level and restarting the services, but the new block size applies only to new files, and the command you gave likewise applies only to new files. I would like to know a method other than distcp to change the block size of existing files in a Hadoop cluster.
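A small illustration of that point (the 256 MB value and paths are arbitrary examples): the block size is fixed when a file is written, so a per-write override only affects new files, never files already in HDFS.

```sh
# Write a new file with a 256 MB block size; existing files are untouched.
hdfs dfs -Ddfs.blocksize=268435456 -put localfile.txt /user/me/localfile.txt
# Inspect the block size the file actually received.
hdfs fsck /user/me/localfile.txt -files -blocks
```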
04-27-2018 01:31 AM
@aengineer, thanks for your time on this. I want to increase the block size of existing files, and this is a requirement for us: the goal is to decrease latency while reading the files.
04-26-2018 01:18 PM
1 Kudo
@Felix Albani, thanks a lot for your time on this. Can you verify this distcp procedure (sketched below)?
a) Use distcp with the -p option to copy all the files and subfolders to a temporary location in HDFS on the same cluster, with the new block size.
b) Remove all the files in the original location.
c) Copy the files from the temporary location back to the original location.
Am I correct?
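A sketch of those three steps on the command line, assuming a 256 MB target block size and hypothetical paths /data (source) and /tmp/data_resized (scratch):

```sh
# a) re-copy with the new block size; -pugp preserves user/group/permissions
#    but deliberately omits 'b', so the new dfs.blocksize takes effect
hadoop distcp -Ddfs.blocksize=268435456 -pugp /data /tmp/data_resized
# b) remove the originals
hdfs dfs -rm -r -skipTrash /data
# c) move the re-blocked copy into place (mv avoids a second full copy)
hdfs dfs -mv /tmp/data_resized /data
```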
04-26-2018 10:40 AM
Hi, I tried looking in the community but could not find a proper answer to this question: how can I change the block size of existing files in HDFS? I want to increase the block size.

The solution I see is distcp: use distcp to copy the files, folders, and subfolders to a temporary location with the new block size, then remove the originals and copy the files from the temporary location back to the original location. This methodology can have side effects, such as the HDFS overhead of temporarily holding duplicate copies of the files and possible changes in permissions while copying back from the temporary location.

Is there any efficient way to replace the existing files in place, with the same names and the same privileges, but with an increased block size? Thanks to all for your time on this question.
Labels:
- Apache Hadoop
04-20-2018 08:48 AM
@Harald, thanks for your kind attention to this case. Yes, I am being forced to restart the Solr service. How can I change Solr from using a ticket to using a keytab? I don't want my long-running Solr process to be interrupted by the ticket expiry period. Any help on this?
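A hedged sketch of the usual JAAS change (the file locations, principal, and the solr.in.sh variable below are assumptions about a typical install, not taken from this thread): switching the Krb5LoginModule from the ticket cache to a keytab lets the JVM re-acquire credentials itself, so the process need not be restarted when a ticket expires.

```sh
# Hypothetical paths and principal -- adjust for your environment.
cat > /etc/solr/conf/solr_jaas.conf <<'EOF'
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/solr.service.keytab"
  storeKey=true
  useTicketCache=false
  principal="solr/host.example.com@EXAMPLE.COM";
};
EOF
# Point the Solr JVM at the JAAS file, e.g. in solr.in.sh:
# SOLR_AUTHENTICATION_OPTS="-Djava.security.auth.login.config=/etc/solr/conf/solr_jaas.conf"
```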