Member since: 12-21-2015
Posts: 9
Kudos Received: 1
Solutions: 0
11-30-2016
08:41 PM
Thank you. This worked out; I just needed a little SQL know-how to handle the serviceconfigmapping. Thanks again for the quick response!
11-30-2016
02:53 PM
Recently we have been dealing with some server issues; I believe we may have lost power at some point. Upon starting the ambari-server I am met with the message "DB configs consistency check failed. Run "ambari-server start --skip-database-check" to skip". The output from /var/log/ambari-server/ambari-server-check-database.log is:

2016-11-30 09:25:53,306 INFO - ******************************* Check database started *******************************
2016-11-30 09:25:57,570 INFO - Checking for configs not mapped to any cluster
2016-11-30 09:25:57,593 INFO - Checking for configs selected more than once
2016-11-30 09:25:57,595 INFO - Checking for hosts without state
2016-11-30 09:25:57,596 INFO - Checking host component states count equals host component desired states count
2016-11-30 09:25:57,598 INFO - Checking services and their configs
2016-11-30 09:26:02,972 ERROR - You have non selected configs: ams-hbase-log4j,ams-hbase-security-site,ams-hbase-policy,ams-log4j for service AMBARI_METRICS from cluster slush!
2016-11-30 09:26:02,973 ERROR - You have non selected configs: zeppelin-config,zeppelin-ambari-config for service ZEPPELIN from cluster slush!
2016-11-30 09:26:02,973 INFO - ******************************* Check database completed *******************************
2016-11-30 09:26:12,880 INFO - Checking DB store version
2016-11-30 09:26:13,360 INFO - DB store version is compatible

I have started ambari-server with the "--skip-database-check" option; however, upon startup no services are shown in the cluster. I have tried re-adding the services, but I end up with a server error and it stops at "Preparing to deploy: 18 of 18 tasks completed." Any advice or path forward would be appreciated.
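For anyone who lands here with the same "non selected configs" error: below is a minimal sketch of the kind of SQL repair discussed in this thread, assuming an Ambari 2.x Postgres backend where the active config version of each type is flagged in the clusterconfigmapping table (the post above refers to serviceconfigmapping; verify the exact table and column names against your Ambari version). Back up the ambari database and stop ambari-server before editing anything.

-- Sketch only: table and column names assume an Ambari 2.x schema.
-- Inspect the mappings for the flagged config types first:
SELECT type_name, version_tag, create_timestamp, selected
FROM clusterconfigmapping
WHERE type_name IN ('ams-hbase-log4j', 'ams-hbase-security-site',
                    'ams-hbase-policy', 'ams-log4j',
                    'zeppelin-config', 'zeppelin-ambari-config')
ORDER BY type_name, create_timestamp;

-- Mark the newest mapping of each flagged type as the selected one:
UPDATE clusterconfigmapping m
SET selected = 1
WHERE m.type_name IN ('ams-hbase-log4j', 'ams-hbase-security-site',
                      'ams-hbase-policy', 'ams-log4j',
                      'zeppelin-config', 'zeppelin-ambari-config')
  AND m.create_timestamp = (SELECT MAX(create_timestamp)
                            FROM clusterconfigmapping c
                            WHERE c.type_name = m.type_name);

After the update, restarting ambari-server should pass the consistency check; if not, the check-database log will name whatever is still unmapped.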
Labels:
- Apache Ambari
12-28-2015
07:29 PM
Sorry I haven't responded yet; I've been out of the office for the holidays. From what I can tell, the reduce memory is set to 5 GB. I am unsure about the number of reducers. We have an 8-node cluster; each node has 16 cores and 192 GB of RAM.
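One note on the reducer count, with a sketch below: for HBase bulk loads, the job's reduce count is normally set to the number of regions in the target table (that is how HFileOutputFormat's incremental-load setup works), so a freshly created, unsplit Phoenix table pushes the whole load through very few reducers regardless of cluster size. Pre-splitting the table, for example by salting, raises the parallelism. SALT_BUCKETS is standard Phoenix DDL; the schema and bucket count here are illustrative only.

-- Illustrative Phoenix DDL: the column layout and bucket count are
-- made up; only SALT_BUCKETS itself is the point. 16 salt buckets
-- pre-split the table into 16 regions, allowing up to 16 reducers.
CREATE TABLE IF NOT EXISTS EXAMPLE_TABLE (
    ID      BIGINT NOT NULL PRIMARY KEY,
    PAYLOAD VARCHAR
) SALT_BUCKETS = 16;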
12-21-2015
07:22 PM
1 Kudo
We are currently trying to use the Phoenix CSV bulk loader MapReduce tool. It is taking about an hour and a half for a 170 GB CSV. The map phase usually finishes quickly, but the reduce seems to be taking much longer than it should. I believe the fact that we are running on a 1 Gb network is a contributing factor. We have some old 10 Gb InfiniBand equipment lying around, and I was considering implementing it as the backbone for HDFS and MapReduce. I have come across two articles mentioning multihoming, neither of which I believe gives me enough detail to solve this problem. Any documentation or direction is greatly appreciated.
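For reference while digging through the multihoming articles: the settings they revolve around live in hdfs-site.xml. A minimal sketch follows; the property names are standard HDFS multihoming settings, but which ones apply depends on how client and cluster traffic are split across interfaces, and the values shown are illustrative.

<!-- hdfs-site.xml sketch for a multihomed cluster. Property names are
     standard HDFS settings; values are illustrative. -->

<!-- Let NameNode daemons listen on all interfaces rather than only the
     address the hostname resolves to. -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>

<!-- Resolve DataNodes by hostname so each host can route DataNode
     traffic over the fast interface via its own /etc/hosts or DNS. -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>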
Labels:
- Apache Hadoop