Member since: 07-21-2017
Posts: 22
Kudos Received: 0
Solutions: 0
07-27-2017
10:41 AM
@Santhosh B Gowda thank you! The name works, but the command still doesn't go through. Should the URL be https:// or http://, and is the address <service host>:<ambari port> or the component's own port? When I try https:// I get "curl: (35) gnutls_handshake() failed: An unexpected TLS packet was received."
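For reference, curl prints that gnutls_handshake error when an https:// URL is pointed at a port that speaks plain HTTP. A minimal sketch of the call, assuming Ambari is not SSL-enabled and listens on its default port 8080 (the host below is a placeholder):

curl -u admin:admin -H "X-Requested-By: ambari" 'http://<ambari-server-host>:8080/api/v1/clusters'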
07-27-2017
10:13 AM
I am enabling component recovery with:

curl -u admin:admin -H "X-Requested-By: ambari" -X PUT 'http://10.90.3.101:8080/api/v1/clusters/hdp_cluster/components?ServiceComponentInfo/component_name.in(APP_TIMELINE_SERVER,DATANODE,HBASE_MASTER,HBASE_REGIONSERVER,HISTORYSERVER,HIVE_METASTORE,HIVE_SERVER,MYSQL_SERVER,NAMENODE,NODEMANAGER,RESOURCEMANAGER,SECONDARY_NAMENODE,WEBHCAT_SERVER,ZOOKEEPER_SERVER)' -d '{"ServiceComponentInfo" : {"recovery_enabled":"true"}}'

but the Spark history server doesn't match SPARK_HISTORY_SERVER, SPARK_HISTORY, or SPARK_SERVER. What is its component name? Or how can I get all the component names?
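A sketch of how to list every component name the cluster knows about, reusing the host, cluster name, and credentials from the PUT above (the fields parameter is standard Ambari REST partial-response syntax):

curl -u admin:admin -H "X-Requested-By: ambari" -X GET 'http://10.90.3.101:8080/api/v1/clusters/hdp_cluster/components?fields=ServiceComponentInfo/component_name'

The Spark history server should appear in that output under whatever name this Ambari version registers for it.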
Labels:
- Apache Ambari
- Apache Spark
07-14-2017
08:41 AM
With map.memory = 2G, map.opt = 2048M, io.sort.mb = 2047M, my jobs fail with a Java heap space error; with map.memory = 2G, map.opt = 3096M, io.sort.mb = 2047M they run fine. Why?
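A hedged reading of the mechanism: the sort buffer set by io.sort.mb (mapreduce.task.io.sort.mb) is allocated inside the map task's JVM heap, which is capped by map.opt (mapreduce.map.java.opts). A 2047M buffer inside a 2048M heap leaves essentially nothing for the task itself, hence the heap error; a 3096M heap leaves room. A sketch of a more conventional ratio, using the full Hadoop property names (values here are illustrative, not taken from the post):

mapreduce.map.memory.mb   = 2048          # YARN container size
mapreduce.map.java.opts   = -Xmx1638m     # JVM heap, roughly 80% of the container
mapreduce.task.io.sort.mb = 800           # sort buffer comfortably inside the heap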
Labels:
- Apache Hadoop
- Apache YARN
07-09-2017
03:37 AM
I created a new client host outside the cluster by copying the hdp and current directories and /etc/<conf> from one of the cluster's hosts. I can run Hive, but when different users run hive at the same time, only one session actually runs; the others wait until the running one finishes. How can I change that?
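One hedged guess at the cause: if the copied hive-site.xml leaves the client on an embedded Derby metastore, Derby accepts only one connection at a time, which would serialize sessions exactly like this. Worth checking which metastore the client actually uses (these are standard Hive property names):

grep -A1 'hive.metastore.uris' /etc/hive/conf/hive-site.xml
grep -A1 'javax.jdo.option.ConnectionURL' /etc/hive/conf/hive-site.xml

If ConnectionURL points at jdbc:derby:, switching the client to the cluster's shared metastore (or connecting through HiveServer2 with beeline) should allow concurrent sessions.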
Labels:
- Apache Hive
06-26-2017
01:34 AM
@Jay SenSharma thanks, but I mean: why do I need to do this on all hosts when the default choice doesn't require it? And I still can't find a JDK on the hosts using "find / -name '*jdk*' -type d".
06-24-2017
08:21 AM
I found that if I set up Ambari with "ambari-server setup -j /path/to/your/installed/jdk", then at the Confirm Hosts step I get a warning that there are JDK issues on the hosts. But when I instead configure the JDK to be downloaded from Hortonworks, there are no issues, yet I still can't find the JDK on the hosts. Why? Please tell me how it works.
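A hedged explanation of the difference: with "ambari-server setup -j <path>", Ambari only records that path and expects the same JDK to already exist at that location on every host, which is why Confirm Hosts flags JDK issues; with the download option, each ambari-agent fetches and unpacks the JDK itself, on HDP typically under /usr/jdk64. Two ways to look for it (note the corrected quoting versus the find command in the reply above):

ls -d /usr/jdk64/*
find / -name '*jdk*' -type d 2>/dev/null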
Labels:
- Apache Ambari
06-06-2017
06:10 AM
@nshelke ambari-metrics-collector.txt
06-06-2017
05:57 AM
@Jay SenSharma sorry, I sent you ambari-metrics-collector.out before; here is ambari-metrics-collector.txt.

# df -h
Filesystem               Size  Used  Avail  Use%  Mounted on
tank/containers/xdata-0  1.6T   12G  1.6T     1%  /
none                     492K  4.0K  488K     1%  /dev
udev                     126G     0  126G     0%  /dev/tty
/dev/md0                  92G   22G   66G    25%  /dev/lxd
none                     4.0K     0  4.0K     0%  /sys/fs/cgroup
none                      26G  1.1M   26G     1%  /run
none                     5.0M     0  5.0M     0%  /run/lock
none                     126G     0  126G     0%  /run/shm
none                     100M     0  100M     0%  /run/user

# du
3750    .

By the way, another cluster of mine shows that message too, but it works fine.
06-06-2017
05:54 AM
@nshelke thanks a lot! Please check the message above.
06-06-2017
04:05 AM
@Jay SenSharma ambari-metrics-collector.log:

Exception in thread Thread-947:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 1082, in run
    self.function(*self.args, **self.kwargs)
  File "/usr/lib/python2.6/site-packages/resource_monitoring/core/metric_collector.py", line 45, in process_event
    self.process_host_collection_event(event)
  File "/usr/lib/python2.6/site-packages/resource_monitoring/core/metric_collector.py", line 79, in process_host_collection_event
    metrics.update(self.host_info.get_disk_io_counters())
  File "/usr/lib/python2.6/site-packages/resource_monitoring/core/host_info.py", line 265, in get_disk_io_counters
    io_counters = psutil.disk_io_counters()
  File "/usr/lib/python2.6/site-packages/resource_monitoring/psutil/build/lib.linux-x86_64-2.7/psutil/__init__.py", line 1726, in disk_io_counters
    raise RuntimeError("couldn't find any physical disk")
RuntimeError: couldn't find any physical disk

Thank you very much.
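For context, psutil raises exactly this RuntimeError when it cannot find a single physical block device, which is the normal state inside an LXC/LXD container (the df -h output posted above shows a tank/containers/xdata-0 root). A quick check, assuming python and the collector's psutil are on the path:

python -c "import psutil; print(psutil.disk_io_counters())"

If that one-liner fails the same way, the monitored host genuinely exposes no physical disks to the container.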