Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2438 | 04-27-2020 03:48 AM |
| | 4870 | 04-26-2020 06:18 PM |
| | 3973 | 04-26-2020 06:05 PM |
| | 3211 | 04-13-2020 08:53 PM |
| | 4906 | 03-31-2020 02:10 AM |
01-30-2020 08:04 PM
Dear Jay, we finally found the issue. It was a mistake in the /etc/hosts file: instead of 127.0.0.1, the loopback entry had the IP address 27.0.0.1. We fixed it, restarted PostgreSQL and Ambari, and now everything is fine.
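The fix above amounts to restoring the loopback line in /etc/hosts. A minimal sketch of a sanity check, run here against a temp file that recreates the typo from the post (on a real machine, point it at /etc/hosts instead):

```shell
# Recreate the bad entry from the post in a throwaway file
hosts=$(mktemp)
printf '27.0.0.1 localhost localhost.localdomain\n' > "$hosts"

# A valid hosts file must map 127.0.0.1 at the start of a line
if grep -qE '^127\.0\.0\.1[[:space:]]' "$hosts"; then
  status="ok"
else
  status="bad"   # fix the entry, then restart PostgreSQL and Ambari
fi
echo "loopback entry: $status"
```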
01-30-2020 05:49 PM
@Strabelli The message you see in the browser is a very generic error and does not tell the actual cause of the failure:

MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException Exception thrown when executing query)

Did you check the Hue log to see whether it shows a detailed stack trace of the error? Just tail the Hue logs and then run the same query again to see if a more detailed error message appears:

# ls -lart /var/log/hue/
# tail -f /var/log/hue/*

Also, can you check the Hive service / Metastore log at the same timestamp to verify that the Metastore is running without any errors?
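To make the log check concrete, here is a sketch of searching the Hue log directory for the MetaException, simulated against a temp directory with a fabricated log line (on a real node, substitute /var/log/hue and the actual message):

```shell
log_dir=$(mktemp -d)   # stand-in for /var/log/hue
echo 'MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException Exception thrown when executing query)' \
  > "$log_dir/error.log"

# Find which log file carries the detailed stack trace
match=$(grep -rl 'MetaException' "$log_dir")
echo "stack trace found in: $match"
```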
01-30-2020 05:02 AM
@AarifAkhter While setting up your MariaDB, did you perform the step mentioned in the doc at https://docs.cloudera.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-administration/content/using_ambari_with_mysql.html ? You must pre-load the Ambari database schema into your MySQL database using the schema script; run the script from the location where the Ambari-DDL-MySQL-CREATE.sql file resides. You should find Ambari-DDL-MySQL-CREATE.sql in the /var/lib/ambari-server/resources/ directory of the Ambari Server host after you have installed Ambari Server. Ambari also shows this kind of message when the user performs "ambari-server setup" while configuring the database:

WARNING: Before starting Ambari Server, you must run the following DDL directly from the database shell to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql

Also, please let us know whether your MariaDB instance is running on "localhost" or on "ip-172-31-9-188.xxxxxxxxxxxxxxx.internal". If it is the latter, please change "localhost" to the hostname of the DB:

server.jdbc.hostname=ip-172-31-9-188.xxxxxxxxxxxxxxx.internal

Also, please verify that the Ambari database exists on the MariaDB instance running on host 'ip-172-31-9-188.xxxxxxxxxxxxxxx.internal':

# mysql -u ambari -p
Enter password:
show databases;
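For reference, loading the schema and confirming the result can look like the following sketch. The database name "ambari" and the user "ambari" are the common defaults and follow the post above; adjust them to your setup, and run this on a host that can reach the DB:

```shell
# Load the Ambari DDL into the pre-created database
mysql -u ambari -p ambari < /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql

# Then confirm the database and its tables exist
mysql -u ambari -p -e 'SHOW DATABASES; USE ambari; SHOW TABLES;'
```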
01-28-2020 02:00 PM
Jay - can you help me with this post - https://community.cloudera.com/t5/Support-Questions/how-to-recover-bad-namenode-from-good-namenode/td-p/288471
01-22-2020 06:06 AM
SHORT: Cloudera has broken ZooKeeper 3.4.5-cdh5.4.0 in several places. The service works, but the CLI is dead. There is no workaround other than a rollback.

LONG: Assign a bounty on this ;-). I stepped on this mine too and was angry enough to track down the reason. ZooKeeper checks for JLine during ZooKeeperMain.run(). There is a try-catch block that loads a number of classes; any exception during class loading fails the whole block, and JLine support is reported as disabled. Here is why this happens with CDH 5.4.0: the current open-source ZooKeeper 3.4.6 works against jline-0.9.94 and has no such issue, but in CDH 5.4 Cloudera applied the following patch:

roman@node4:$ diff zookeeper-3.4.5-cdh5.3.3/src/java/main/org/apache/zookeeper/ZooKeeperMain.java zookeeper-3.4.5-cdh5.4.0/src/java/main/org/apache/zookeeper/ZooKeeperMain.java
305,306c305,306
< Class consoleC = Class.forName("jline.ConsoleReader");
< Class completorC =
---
> Class consoleC = Class.forName("jline.ConsoleReader");
> Class completorC =
316,317c316,317
< Method addCompletor = consoleC.getMethod("addCompletor",
< Class.forName("jline.Completor"));
---
> Method addCompletor = consoleC.getMethod("addCompleter",
> Class.forName("jline.console.completer.Completer"));
CDH 5.4 uses jline-2.11.jar for ZooKeeper, and it has no jline.ConsoleReader class (since 2.x it is jline.console.ConsoleReader). JLine 0.9.94, in turn, has no jline.console.completer.Completer. So there is an incompatibility with every existing JLine version. Any Cloudera CDH 5.4 user can run zookeeper-client on his/her cluster and find that it does not work. Open-source zookeeper-3.4.6 depends on jline-0.9.94, which has no such patches. I don't know why Cloudera's engineers planted such a mine. I see no clean way to fix it with 3.4.5-cdh5.4.0, so I stayed with the 3.4.5-cdh5.3.3 dependency where I need the CLI and have production clusters. It seemed to me that putting both jline-0.9.94.jar and jline-2.11.jar on the ZooKeeper classpath would fix the problem, but I just found that Cloudera made another 'fix' in ZK for CDH 5.4.0: they renamed the org.apache.zookeeper.JLineZNodeCompletor class to org.apache.zookeeper.JLineZNodeCompleter. Yet here is the code from ZooKeeperMain.java:

Class<?> completorC = Class.forName("org.apache.zookeeper.JLineZNodeCompletor");

In practice, this means it is not possible to start the ZK CLI in CDH 5.4.0 the proper way. Awful work. 😞
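The class-name mismatch described above can be illustrated with plain files, since a jar is just a zip of .class paths. The directory here is a throwaway stand-in for the jar contents; the class names mirror the post:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/jline/console"
: > "$tmp/jline/console/ConsoleReader.class"   # the layout jline-2.11 actually ships

# ZooKeeperMain in CDH 5.4.0 effectively asks for the pre-2.x name:
wanted="jline/ConsoleReader.class"             # Class.forName("jline.ConsoleReader")
if [ -f "$tmp/$wanted" ]; then
  result="resolvable"
else
  result="missing"   # the try-catch swallows this, so JLine support is silently disabled
fi
echo "jline.ConsoleReader is $result"
```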
01-21-2020 06:04 AM
Hey @jsensharma, thanks for providing a patch for this issue. I tested the patch from https://github.com/apache/ambari/pull/3125/commits/973bb3fafdfb3c8e1f8516ca7a6efbb27897fb11 and it fixes the issue for the YARN container metric. The accepted solution above, however, does not work for "yarn container" metrics. The trick is to find the correct counterOrNA function, as this function appears in 4 places in the app.js file. Just thought to let you and others know. Regards, Ajit Mote
01-14-2020 01:31 PM
@nk_11 There are some recommendations from the HDFS Balancer perspective to make sure it runs fast, at maximum performance, such as the parameters described in the link below: "dfs.datanode.balance.max.concurrent.moves", "dfs.balancer.max-size-to-move", "dfs.balancer.moverThreads", and "dfs.datanode.balance.max.bandwidthPerSec". https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/data-storage/content/recommended_configurations_for_the__balancer.html

Regarding the heavy usage of the YARN "local-dirs", the article linked at the end of this post may give a better idea, and the following yarn-site properties can help tune things.

The "yarn.nodemanager.local-dirs" property points to the location where intermediate (temporary) data is written on the nodes where the NodeManager runs; the NodeManager service runs on all worker nodes. Please check that this directory has enough space.

The "yarn.nodemanager.localizer.cache.target-size-mb" property defines the maximum disk space to be used for localizing resources. Once the total disk size of the cache exceeds this value, the deletion service will try to remove files that are not used by any running container.

The "yarn.nodemanager.localizer.cache.cleanup.interval-ms" property defines the interval at which unused resources are deleted when the total cache size exceeds the configured maximum. Unused resources are those not referenced by any running container.

https://community.cloudera.com/t5/Community-Articles/How-to-clear-local-file-cache-and-user-cache-for-yarn/ta-p/245160
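As a sketch, the balancer properties mentioned above can also be overridden per run on the command line rather than in hdfs-site. The values here are purely illustrative, not recommendations for any specific cluster, and this must run on a node with the HDFS client configured:

```shell
# One-off balancer run with tuned move concurrency, chunk size,
# thread count, and per-DataNode bandwidth cap (values illustrative)
hdfs balancer \
  -Ddfs.datanode.balance.max.concurrent.moves=50 \
  -Ddfs.balancer.max-size-to-move=10737418240 \
  -Ddfs.balancer.moverThreads=1000 \
  -Ddfs.datanode.balance.max.bandwidthPerSec=104857600 \
  -threshold 10
```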
01-13-2020 10:35 PM
It works perfectly after changing the mentioned configurations. Thanks a lot.
01-08-2020 12:59 PM
Ouch... good catch.

wget https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/cloudera-cdh5.repo
wget https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/cloudera-manager.repo
wget https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera

Thanks for correcting. I should have changed the 6 to a 7 in the links above while I was doing all the downloads... No harm, it's a new installation; I can clean up and redo. Thanks for your help.
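The correction the poster describes amounts to swapping the 6 for a 7 in the path component of each URL. A small sketch of deriving the corrected URLs (string rewriting only; it does not fetch anything):

```shell
# Derive the RHEL/CentOS 7 repo URLs from the RHEL 6 ones above
for url in \
  https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/cloudera-cdh5.repo \
  https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/cloudera-manager.repo \
  https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera
do
  fixed=$(echo "$url" | sed 's|/redhat/6/|/redhat/7/|')
  echo "$fixed"
done
```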