Member since: 02-18-2016
Posts: 135
Kudos Received: 19
Solutions: 18
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1829 | 12-18-2019 07:44 PM
 | 1859 | 12-15-2019 07:40 PM
 | 748 | 12-03-2019 06:29 AM
 | 767 | 12-02-2019 06:47 AM
 | 1730 | 11-28-2019 02:06 AM
11-28-2019
02:06 AM
Hi @Manoj690 Can you copy "/var/lib/ambari-server/resources/mysql-connector-java.jar" to "/usr/share/java/" and retry? Make sure you use the correct Java path for the Java version you are pointing to.
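For reference, a minimal sketch of the copy plus a quick verification (assuming the default Ambari paths mentioned above):
$ cp /var/lib/ambari-server/resources/mysql-connector-java.jar /usr/share/java/
$ ls -l /usr/share/java/mysql-connector-java.jar   # verify the jar is in place and readable
$ ls -ltr $(which java)                            # confirm which Java binary is actually in use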
11-28-2019
12:50 AM
Hi @Manoj690 Can you share the output of the below commands?
$ ls -ltr /usr/share/java/mysql.jar
$ find / -name mysql.jar
$ find / -name mysql-connector-java*.jar
11-28-2019
12:47 AM
1 Kudo
Hi @laplacesdemon I agree with you: applications/third-party tools/components should definitely be installed outside the cluster, or on a separate new node, to avoid major performance impacts. How to manage the components when the Hadoop version changes is really more of a DevOps question, I feel. You always need to keep an inventory of the applications running alongside your ecosystem components, and of their dependencies. You can also use Nexus as a centralized repository to fetch the new versions that need to be deployed on the application side [i.e. Oracle Data Integrator and Jupyter Hub], with the help of Jenkins or some other deployment tool. In my experience you run into resource-related problems if you install applications on edge nodes, so I would suggest that is not a good idea. Do revert if you have further points to highlight.
11-27-2019
06:48 PM
Hi @hdpmx My suggestion is to always place master components on master nodes, and not together with worker components [like datanodes, nodemanagers, Kafka brokers, etc.]. I would also suggest you refer to the basic requirements for the Oozie service, which will help you plan the master workload accordingly. Please refer to this link - https://docs.cloudera.com/documentation/enterprise/release-notes/topics/hardware_requirements_guide.html#concept_ukj_yn1_jbb
11-27-2019
06:36 PM
Hi @baggysaggyDHS Yes, you can definitely view this info in your login portal. Please log in to https://sso.cloudera.com/ Your company must already have these credentials. If NOT, you can register on that link with the official email address that is registered with the account, and you will receive a password reset link. Navigate to https://my.cloudera.com/account/applications where you can see the list of all active applications you have support for. For details regarding supported servers/clusters, please open a support case from the portal. If anyone from your team has already registered the environment details in the past, then you can see the details of the supported cluster in the "Assets" section while you create a new case, as shown below - Hope that helps
11-27-2019
06:24 PM
Hi @Argin We will be happy to address your concern and guide you to resolve your issue. But we would like to ask: is there any specific reason you want to unsubscribe? Apart from that, you can follow the below steps to unsubscribe your account. You might be receiving emails at the email address you registered for your account. If you open any one of those emails, at the bottom you will see the footer below - Click on "Manage your subscriptions" and it will take you to the next screen below - Click on "check all" and "Delete Selected subscription" if you do not want to receive any email. Also click on the "Notifications" tab and take the appropriate actions as below - Finally, click on "PERSONAL" and check the box as below -
11-26-2019
10:50 PM
Hi @Manoj690 BTW, is the schema missing? Why are you creating it manually? Anyhow, you can run this via the mysql CLI as below (a sketch follows) -
1. Log in to mysql using the Ambari user and password
2. Create/switch to the DB
3. Run the command
Please check the below link for details - https://community.cloudera.com/t5/Support-Questions/Ambari-Mysql-setup/td-p/116774
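A minimal sketch of those steps (the DB name and user "ambari" here are assumptions; match them to your ambari.properties settings; the DDL path is the one Ambari ships by default):
$ mysql -u ambari -p
mysql> CREATE DATABASE IF NOT EXISTS ambari;
mysql> USE ambari;
mysql> SOURCE /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;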
11-26-2019
08:18 PM
Hi @China Cloudera CDH 6.x is production ready. But before you proceed to deploy to production, do check the version-specific release notes [to make sure the version meets your requirements], especially the "Known Issues and Limitations" notes. The links are below -
https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_requirements_supported_versions.html
CDH6.0.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_610_known_issues.html
CDH6.1.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_61_release_notes.html#cdh61x_release_notes
CDH6.2.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_62_release_notes.html#cdh62x_release_notes
CDH6.3.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_63_release_notes.html#cdh63x_release_notes
11-26-2019
07:25 PM
Hi @rvillanueva As you highlighted, the two screenshots/settings shown for AD/LDAP within Ranger differ. Please check below -
Ranger Authentication for the Web UI: The first screenshot describes how to configure the authentication method that determines who is allowed to log in to the Ranger web interface. So if you integrate Ranger with either LDAP or AD, then LDAP/AD users can log in to the Ranger Web UI with their respective credentials. These settings are configured via Ambari as below -
Ambari Login -> Services -> Ranger -> Configs -> Advanced -> "Ranger Settings"
Ranger Authentication for UNIX: The second setting configures Ranger to use UNIX for user authentication. This means users integrated from AD/LDAP can be configured within new/existing policies [within the existing repositories created, e.g. HDFS, YARN], and access policies can be defined for those users, as shown in the screenshot below -
If AD/LDAP is not integrated for Ranger UNIX authentication, the users will not be fetched/displayed in the "select user" field above. These settings are configured via -
Ambari Login -> Services -> Ranger -> Configs -> "Ranger User Info"
Let me know if that clears up the difference.
11-26-2019
06:41 PM
Hi @Manoj690 Please ignore the previous command. Can you confirm whether the znode for hiveserver2 is created in ZooKeeper (a CLI sketch for this check is at the end of this post)? Please run the below command from the Ambari server node -
/var/lib/ambari-server/ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/hdp/current/hive-server2/conf/ --service metatool -listFSRoot'
Check the Ambari server and hiveserver logs for any errors, and please paste the latest error here. Make sure the permissions on the below directory are correct -
# ls -ld /var/run/hive/
drwxr-xr-x 2 hive hadoop 60 Nov 20 07:18 /var/run/hive/
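To check the znode directly, a minimal sketch using the ZooKeeper CLI (the znode name /hiveserver2 is the usual default; yours may differ depending on your HiveServer2 ZooKeeper namespace config):
$ /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zk-host>:2181
ls /hiveserver2    # should list a serverUri entry once HiveServer2 has registered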
11-26-2019
07:37 AM
@ManuelCalvo We already checked from the network side, and they mentioned there is no issue. How can we debug this further? Currently we enabled debug for the worker logs and relaunched the topologies. Any more suggestions?
11-26-2019
02:39 AM
Hi @Manoj690 Can you just give it a retry and check if it works? If not, then log in to ZooKeeper from the CLI and check whether the znode for hiveserver2 has been created or not, e.g. (note: use the ZooKeeper shell, not kafka-topics.sh, to browse znodes) -
/usr/hdp/<hdp_version>/kafka/bin/zookeeper-shell.sh <ZK-server>:2181
> ls /
If the znode is not created, then please try running the below command once -
/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/hive-server2/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://gaian-lap386.com/ambari1' ambari1 [PROTECTED] com.mysql.jdbc.Driver
You will need to replace [PROTECTED] in the above command with the actual password.
11-25-2019
10:45 PM
Hi @ManuelCalvo Below is the output - [Note: for security reasons I changed the topic name below to test_c1] Describe output -
Topic:test_c1_prv_input_client_instruction PartitionCount:8 ReplicationFactor:4 Configs:retention.ms=604800000,retention.bytes=1073741824
Topic: test_c1_prv_input_client_instruction Partition: 0 Leader: 1001 Replicas: 1001,1003,1006,1004 Isr: 1003,1006,1004,1001
Topic: test_c1_prv_input_client_instruction Partition: 1 Leader: 1005 Replicas: 1005,1006,1004,1001 Isr: 1005,1006,1001,1004
Topic: test_c1_prv_input_client_instruction Partition: 2 Leader: 1007 Replicas: 1007,1004,1001,1005 Isr: 1007,1004,1001,1005
Topic: test_c1_prv_input_client_instruction Partition: 3 Leader: 1008 Replicas: 1008,1001,1005,1007 Isr: 1008,1007,1001,1005
Topic: test_c1_prv_input_client_instruction Partition: 4 Leader: 1002 Replicas: 1002,1005,1007,1008 Isr: 1007,1002,1008,1005
Topic: test_c1_prv_input_client_instruction Partition: 5 Leader: 1003 Replicas: 1003,1007,1008,1002 Isr: 1003,1007,1008,1002
Topic: test_c1_prv_input_client_instruction Partition: 6 Leader: 1006 Replicas: 1006,1008,1002,1003 Isr: 1003,1006,1008,1002
Topic: test_c1_prv_input_client_instruction Partition: 7 Leader: 1004 Replicas: 1004,1002,1003,1006 Isr: 1003,1006,1004,1002
3. I tried with the console consumer and I am able to fetch data. I see the issue was only at that point in time.
11-25-2019
08:36 PM
Hi @RRaj Can you please elaborate with more details about the issue? If possible, provide some screenshots or logs for the issue you are facing; we can then try to help you. Just to add - I hope you already checked this and your hardware meets the requirements -
11-25-2019
07:54 PM
Hi @Caranthir Can you try disabling and enabling the plugin again? While you enable the plugin, it adds/modifies the properties below - can you check whether those properties are set properly after you enable the Ranger plugin for HDFS again? Also, as already mentioned by @Shelton, the repository config user must be configured if you are working in a kerberized environment. If you are still not able to see the repository in the Ranger UI, then you can click on the add symbol shown below to add the repository manually - You can specify the NameNode and other details and run "Test Connection". Monitor the Ranger and NameNode logs while you test the connection; if the connection fails you will see errors in the logs. Please post any further updates.
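For reference, the full property list is visible in Ambari; as one example (the property name below is from the upstream Ranger docs, so treat this as a sketch to verify against your version), enabling the HDFS plugin wires the NameNode authorizer in hdfs-site.xml -
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>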
11-25-2019
06:35 PM
@Ram_85 In addition to what @Shelton mentioned, this script will give you generic recommended settings for your cluster. It may be that the values from "$ python yarn-utils.py" will not help your cluster's performance, so kindly do refer to this link to understand in detail the values you need to set for your cluster - http://crazyadmins.com/tag/yarn-tuning/ Also, if you already have a support subscription for HDP, then you can install the component named "SmartSense", which will help you analyze your cluster and give recommendations.
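A minimal usage sketch of the script (the flags are those of the HDP companion yarn-utils.py; the hardware numbers here are made-up examples, substitute your node specs):
$ python yarn-utils.py -c 16 -m 64 -d 4 -k True
# -c cores per node, -m memory per node in GB, -d number of disks per node, -k True if HBase is installed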
11-25-2019
02:15 AM
Hi @songhwan By default, HDFS endpoints are specified as either hostnames or IP addresses. In either case, HDFS daemons will bind to a single IP address, making the daemons unreachable from other networks. The solution is to have separate settings for server endpoints that force binding to the wildcard IP address INADDR_ANY, i.e. 0.0.0.0. Do NOT supply a port number with any of these settings. In most clusters those settings are bound to 0.0.0.0 to make the NameNode listen on all interfaces. If you have any issues with NameNode communication [wrt load, NameNode-DataNode communication, or block reports], then you need to look at tuning these properties. Please check the links for details -
https://community.cloudera.com/t5/Community-Articles/Scaling-the-HDFS-NameNode-part-2/ta-p/246681
https://community.cloudera.com/t5/Community-Articles/Scaling-the-HDFS-NameNode-part-3-RPC-scalability-features/ta-p/246719
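For illustration, a minimal hdfs-site.xml sketch of the wildcard bind settings (these are the standard HDFS multihoming property names; verify them against your version's docs before applying):
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>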
11-25-2019
01:53 AM
1 Kudo
@Kou_Bou I suspect the file is either not fully downloaded or is corrupted. You can check the size or checksum of the downloaded file. You can try downloading the file from - https://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8-2177648.html The size displayed on that link for jdk-8u144-linux-x64.tar.gz is 176.92 MB.
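A quick sketch of the checks (compare against the size and checksum Oracle publishes for the download):
$ ls -lh jdk-8u144-linux-x64.tar.gz          # should be roughly 177 MB
$ sha256sum jdk-8u144-linux-x64.tar.gz       # compare with the published checksum
$ tar -tzf jdk-8u144-linux-x64.tar.gz > /dev/null && echo OK   # a corrupted archive will fail to list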
11-24-2019
05:50 PM
@Kou_Bou Thank you for the detailed output. As suspected, the issue is with Java. You need to try using Oracle Java and test. Also, as highlighted by @Shelton, I too agree to use a supported version and change Java as per the suggested steps. Do revert if you still face the issue.
11-22-2019
02:32 AM
Hi @TanChoonKiat Please log in to "sthdnn1-pvt.aiu.axiata" and:
1. Switch to the hbase user - $ su - hbase
2. Open another terminal on the same server and "tail -f" the hbase logs
3. Try to run the hbase shell and watch the logs
Please upload the logs in a txt file, or use the code sample option above to paste the logs on the case. A sketch of the flow is below.
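A minimal sketch of that debug session (the log file name is an assumption; match it to whatever actually sits under /var/log/hbase/):
# terminal 1
$ su - hbase
$ hbase shell
# terminal 2, watching the logs at the same time
$ tail -f /var/log/hbase/hbase-hbase-master-<hostname>.log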
11-22-2019
02:21 AM
Hi @TanChoonKiat Can you please check the below and revert -
1. Check if the HBase master is running - $ ps aux | grep HMaster
2. Check for any errors in the logs at /var/log/hbase/<hbase_master_hostname>.log
Please paste here any errors you see in the HBase master logs.
11-22-2019
02:06 AM
Hi @m4x1m1li4n Can you confirm: are the below IPs defined in /etc/hosts public or private addresses?
13.48.140.49
13.48.181.38
13.48.185.39
13.53.62.160
13.48.18.0
The hostname defined in the Cloudera agent config.ini appears to be the public hostname - "ec2-13-48-140-49.eu-north-1.compute.amazonaws.com". Can you try pointing both /etc/hosts and the config.ini [hostname] to the private IP within the cluster, and restart the agent? I suspect it might be an issue with the public DNS. A sketch is below -
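A hedged sketch of the change (172.31.10.15 is a made-up private address, and the exact config.ini key can differ by CM version; verify against your /etc/cloudera-scm-agent/config.ini):
# /etc/hosts - map the hostname to the private IP
172.31.10.15  ec2-13-48-140-49.eu-north-1.compute.amazonaws.com
# /etc/cloudera-scm-agent/config.ini
listening_hostname=ec2-13-48-140-49.eu-north-1.compute.amazonaws.com
$ service cloudera-scm-agent restart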
11-21-2019
02:16 AM
@Manoj690 Can you check if you hit the same error when you run this manually from the CLI? Log in to the Ambari server node and execute the below commands -
$ cd /var/lib/ambari-server/
$ ambari-sudo.sh su yarn-ats -l -s /bin/bash -c 'export PATH='"'"'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/ambari-agent'"'"' ; sleep 10;export HBASE_CLASSPATH_PREFIX=/usr/hdp/3.1.0.0-78/hadoop-yarn/timelineservice/*; /usr/hdp/3.1.0.0-78/hbase/bin/hbase --config /usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator -Dhbase.client.retries.number=35 -create -s'
11-21-2019
01:13 AM
Hi Team, From the logs I see there is an issue on one of the worker nodes, due to which the Storm topology spout was not able to consume messages. Can you help with how to resolve the issue? Logs below -
2019-11-21 07:28:10.478 o.a.k.c.c.i.AbstractCoordinator Thread-22-ciFileSpout-executor[91 91] [INFO] [Consumer clientId=consumer-1, groupId=acp_c1_prv_input_client_instruction-
consumer] Marking the coordinator testnode8o.example.com:6667 (id: 2147482643 rack: null) dead
2019-11-21 07:28:10.480 o.a.k.c.c.i.AbstractCoordinator Thread-22-ciFileSpout-executor[91 91] [INFO] [Consumer clientId=consumer-1, groupId=acp_c1_prv_input_client_instruction-
consumer] Discovered coordinator testnode8o.example.com:6667 (id: 2147482643 rack: null)
2019-11-21 07:37:43.749 o.a.s.d.executor Thread-22-ciFileSpout-executor[91 91] [INFO] Deactivating spout ciFileSpout:(91)
2019-11-21 07:37:43.750 o.a.k.c.c.i.AbstractCoordinator Thread-22-ciFileSpout-executor[91 91] [INFO] [Consumer clientId=consumer-1, groupId=acp_c1_prv_input_client_instruction-
consumer] Marking the coordinator testnode8o.example.com:6667 (id: 2147482643 rack: null) dead
2019-11-21 07:37:47.346 c.c.m.a.s.c.IgniteCacheServiceRegistryPropertyImpl tcp-client-disco-reconnector-#5%null% [ERROR] Failed to reconnect to cluster (consider increasing 'ne
tworkTimeout' configuration property) [networkTimeout=5000]
2019-11-21 07:37:48.970 o.a.s.util Thread-22-ciFileSpout-executor[91 91] [ERROR] Async loop died!
java.lang.RuntimeException: java.lang.IllegalStateException: This consumer has already been closed.
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:485) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:441) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:69) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.daemon.executor$fn__10125$fn__10140$fn__10173.invoke(executor.clj:632) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484) [storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
Caused by: java.lang.IllegalStateException: This consumer has already been closed.
at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:1787) ~[stormjar.jar:?]
at org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1622) ~[stormjar.jar:?]
at org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79) ~[stormjar.jar:?]
at org.apache.storm.daemon.executor$metrics_tick$fn__10050.invoke(executor.clj:345) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at clojure.core$map$fn__4553.invoke(core.clj:2622) ~[clojure-1.7.0.jar:?]
at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?]
at clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?]
at clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?]
at clojure.core$filter$fn__4580.invoke(core.clj:2679) ~[clojure-1.7.0.jar:?]
at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?]
at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?]
at clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.7.0.jar:?]
at clojure.lang.RT.next(RT.java:674) ~[clojure-1.7.0.jar:?]
at clojure.core$next__4112.invoke(core.clj:64) ~[clojure-1.7.0.jar:?]
at clojure.core.protocols$fn__6523.invoke(protocols.clj:170) ~[clojure-1.7.0.jar:?]
at clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19) ~[clojure-1.7.0.jar:?]
at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) ~[clojure-1.7.0.jar:?]
at clojure.core.protocols$fn__6506.invoke(protocols.clj:101) ~[clojure-1.7.0.jar:?]
at clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) ~[clojure-1.7.0.jar:?]
at clojure.core$reduce.invoke(core.clj:6519) ~[clojure-1.7.0.jar:?]
at clojure.core$into.invoke(core.clj:6600) ~[clojure-1.7.0.jar:?]
at org.apache.storm.daemon.executor$metrics_tick.invoke(executor.clj:349) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.daemon.executor$fn__10125$tuple_action_fn__10131.invoke(executor.clj:520) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:469) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
... 7 more
2019-11-21 07:37:48.978 o.a.s.d.executor Thread-22-ciFileSpout-executor[91 91] [ERROR]
java.lang.RuntimeException: java.lang.IllegalStateException: This consumer has already been closed.
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:485) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:441) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:69) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.daemon.executor$fn__10125$fn__10140$fn__10173.invoke(executor.clj:632) ~[storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484) [storm-core-1.1.0.2.6.5.0-292.jar:1.1.0.2.6.5.0-292]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
Labels:
- Apache Kafka
- Apache Storm
11-20-2019
07:19 PM
Hi @Kou_Bou Can you please check the below steps and paste the output here -
1. Check the Java version:
$ java -version
$ ls -ltr `which java`
$ rpm -qa | grep openssl
2. If multiple Java versions exist, check whether there is any conflict. You can verify this using the alternatives command -
$ /usr/sbin/alternatives --config java
[Please do not change anything here; leave the default Java as it is]
3. Is the issue with only one host registering in the cluster, or with multiple hosts?
4. If the Java used here is OpenJDK, try with Oracle JDK and test. You can change Java with the below command -
$ ambari-server setup -j <jdk path>
5. Please do revert with the Ambari server and Ambari agent versions:
$ rpm -qa | grep ambari
6. From the agent node, try telnet to the master on port 8440 [check for iptables/selinux rules]:
$ telnet <ambari-server> 8440
7. Pass the latest ambari-agent config.ini file.
8. Do revert with the latest error/std log if the issue still exists.
11-20-2019
01:44 AM
@Kou_Bou Can you try setting the below property in the ambari-agent config file?
1. $ vi /etc/ambari-agent/conf/ambari-agent.ini
[Note: add the property under the [security] section, as below]
[security]
force_https_protocol=PROTOCOL_TLSv1_2
2. Save and exit.
3. Restart ambari-agent.
Please check if that works.
11-19-2019
10:44 PM
@anshuman Currently, node labels are not available for the FairScheduler in CDH. From the latest JIRA update (Sep 2019) upstream - https://issues.apache.org/jira/browse/YARN-2497 - node labels are still not included in CDH. However, I see they may come with the HDP 3.x / CDH 6.x versions, as per this link - https://archive.cloudera.com/cdh6/6.0.0/docs/hadoop-3.0.0-cdh6.0.0/hadoop-yarn/hadoop-yarn-site/NodeLabel.html
Tags:
- supported
11-19-2019
06:23 PM
@Cl0ck Is it possible to share a screenshot of the issue you are facing?
11-19-2019
02:18 AM
1 Kudo
@Manoj690 Try the below 2 options -
1. Check if the "/var/run/ambari-metrics-collector/" directory exists with permissions ams:hadoop. If YES, then go for option 2. If NOT, try creating the directory and check if the AMS startup works (a sketch is below).
2. Delete the AMS service, and remove its components from the CLI as well - rpm -qa | grep ams - remove all AMS components, then reinstall AMS.
Let me know if that works. Also, please share new logs as a text file attachment; it is an easier way to format logs for reading at the remote end.
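A minimal sketch for option 1 (assuming the default AMS user/group ams:hadoop):
$ ls -ld /var/run/ambari-metrics-collector/        # check existence and ownership
$ mkdir -p /var/run/ambari-metrics-collector/      # create it if missing
$ chown ams:hadoop /var/run/ambari-metrics-collector/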
11-19-2019
12:04 AM
@Manoj690
1. What is the error you are getting after following the link - https://community.cloudera.com/t5/Support-Questions/Ambari-metircs-not-started/m-p/283228#M210525
Are you getting the error you pasted while starting AMS via Ambari? Did you try to start it from the CLI/backend using the command "ambari-metrics-collector start"? Make sure you stop the service properly, kill any existing pid, and then start, as sketched below.
2. For Phoenix - Phoenix comes as part of HBase and is enabled or disabled from the HBase configs as shown below. It basically comes as part of the HDP bits, located in -
/usr/hdp/current/phoenix-client
/usr/hdp/current/phoenix-server
Why do you want to completely uninstall Phoenix? Can you pass details so that we can understand and provide you a workaround, if any?
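A minimal sketch of the clean restart from the CLI (the collector command is the usual AMS default; verify the script path on your node):
$ ambari-metrics-collector stop
$ ps aux | grep ambari-metrics-collector    # kill any leftover pid before starting
$ ambari-metrics-collector start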