Member since: 01-08-2014
Posts: 88
Kudos Received: 15
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5784 | 10-29-2015 10:12 AM
 | 5954 | 11-27-2014 11:02 AM
 | 5842 | 11-03-2014 01:49 PM
 | 3278 | 09-30-2014 11:26 AM
 | 8515 | 09-21-2014 11:24 AM
09-21-2014
11:24 AM
1 Kudo
Hi! The problem you are getting is a known limitation of Accumulo on small clusters. By default, Accumulo attempts to use a replication factor of 5 for the metadata table, ignoring the "table.file.replication" setting. Normally, Cloudera Manager does not set a maximum replication factor, so on a small cluster this only causes under-replication warnings until you either add nodes or manually adjust the replication setting on that table. In your cluster, however, the "dfs.replication.max" setting has been lowered to match your number of cluster nodes, which is causing Accumulo's attempts to create new files for its internal tables to fail outright. Unfortunately, I'm not sure this can be fixed without data loss. To recover, first edit the "dfs.replication.max" setting for HDFS to be >= 5. Then adjust the replication on the metadata and root tables to be <= your number of DataNodes. After that it should be safe to lower dfs.replication.max again. Adjust the replication in the accumulo shell:
$> config -t accumulo.metadata -s table.file.replication=3
$> config -t accumulo.root -s table.file.replication=3
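If it helps, here is a rough way to sanity-check the values before and after the change (the replication value of 3 above is just an example; use anything <= your number of DataNodes):
$> hdfs getconf -confKey dfs.replication.max                # from a shell on a cluster node; should report >= 5 before you touch the tables
$> config -t accumulo.metadata -f table.file.replication    # from inside the accumulo shell; confirms the new value took effect
$> config -t accumulo.root -f table.file.replication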
09-21-2014
11:08 AM
To make sure we have the same context, I think you're working through the bulk ingest overview's example. Please correct me if that's wrong. Before running any of the Accumulo examples, you need to do some user setup. None of them should be run as root nor as any of the service principals (accumulo, hdfs, etc.). The user that will run the data generation needs to be able to run MapReduce jobs. See the full docs for instructions on provisioning such a user. In short, ensure they have a user account on all worker nodes and a home directory in HDFS (creating that home directory will require acting as the hdfs superuser). The user you created above will be used for the data generation step. If you are running on a secure cluster, you will need to obtain a Kerberos ticket with your password (e.g., via kinit) before submitting the job; otherwise the generation step only requires an initial local login. The data loading step requires an Accumulo user, which you should create via the Accumulo shell. Be sure to replace the instance name, ZooKeeper servers, and user/password given in the ARGS line with ones appropriate for your cluster. This loading should not be done as the Accumulo root user. A rough sketch of the setup follows. Let me know if you have any further problems.
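Purely as an illustration (the username "bulkuser" and the instance name are placeholders, not anything from your cluster), the provisioning might look like this:
$> sudo -u hdfs hdfs dfs -mkdir /user/bulkuser            # create the HDFS home directory as the hdfs superuser
$> sudo -u hdfs hdfs dfs -chown bulkuser /user/bulkuser
$> accumulo shell -u root                                 # then, inside the Accumulo shell, create the loading user
root@instance> createuser bulkuser
root@instance> grant System.CREATE_TABLE -s -u bulkuser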
06-26-2014
09:58 AM
Nope, no setting. It should *just work*. Something is amiss with your browser, I'm just not sure what. Is this a machine you control yourself, or is it managed by an IT group? Can you try copy/pasting the link into your browser address bar instead of clicking on it? Is there another web browser on the machine you could attempt to use? (It's worth noting that the CM5 requirements state the minimum Firefox version is 11, but that shouldn't impact the NameNode UI page, and this doesn't feel like a browser compatibility issue.)
06-26-2014
09:48 AM
Curl works, so that's good news. DNS at least works at the OS level. The current problem appears to be with your web browser, then. What browser is it? Do you know if it's running any kind of filtering add-on?
06-26-2014
09:33 AM
Is the single system your local workstation? Is it a VM? Check via curl to rule out your browser, and recheck that DNS resolution works for the hostname CM thinks the node should be; a quick sketch of both checks is below.
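For example (the hostname here is just a placeholder; use whichever hostname CM shows for the NameNode):
$> getent hosts namenode1.example.com          # confirms the OS can resolve the name
$> curl http://namenode1.example.com:50070/    # fetches the NameNode UI without involving the browser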
06-26-2014
09:27 AM
1 Kudo
- Can you verify that DNS works from your workstation to resolve the host that is running the NameNode you're trying to look at?
- The message sounds like a firewall issue. Are you sure there isn't a firewall between you and the NameNode machine preventing access?
One way to zero in on the problem is to copy the NameNode UI link from CM and then check it from the machine running the NameNode to rule out network interference. For example, if the NameNode UI link is http://namenode1.example.com:50070/
[root@namenode1 ~]# curl http://namenode1.example.com:50070/
<meta HTTP-EQUIV="REFRESH" content="0;url=dfshealth.jsp"/>
<html>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<head>
<title>Hadoop Administration</title>
</head>
<body>
<h1>Hadoop Administration</h1>
<ul>
<li><a href="dfshealth.jsp">DFS Health/Status</a></li>
</ul>
</body>
</html>
[root@namenode1 ~]#
06-26-2014
08:51 AM
Which version of CM are you using? In general, the link is on the HDFS service page, both on the top line and in the "HDFS Summary" section. On CM 5.0.2 the latter is right under the "Configured Capacity" display and is labeled "NameNode Web UI."
06-26-2014
08:28 AM
1 Kudo
Does the CM page for the HDFS service report everything as healthy? Can you try browsing to the NameNode UI via the link provided on the CM page for the HDFS service?
06-25-2014
11:29 PM
I'd recommend just going through the upgrade. CDH4 -> CDH5 is relatively painless, especially if you're doing new development. My guess is it will take you longer to get through patching and building than it takes to upgrade.