Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2714 | 04-27-2020 03:48 AM |
| | 5273 | 04-26-2020 06:18 PM |
| | 4443 | 04-26-2020 06:05 PM |
| | 3566 | 04-13-2020 08:53 PM |
| | 5375 | 03-31-2020 02:10 AM |
06-07-2017
07:39 AM
1 Kudo
@ed day Regarding your query "Can anyone tell me how to check the Java used by HDP/Hive?": you can use the following command to find out which Java binary the Hive process is using. Example:
# ps -ef | grep ^hive
hive 23474 1 0 Jun02 ? 00:45:46 /usr/jdk64/jdk1.7.0_67/bin/java -Xmx1024m -Dhdp.version=2.5.3.0-37 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.5.3.0-37 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.5.3.0-37/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console ...
Whatever Java binary path it shows, you can run the version command on it. Example:
# /usr/jdk64/jdk1.7.0_67/bin/java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
Changing the Java version can be done as described in:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-administration/content/ch_changing_the_jdk_version_on_an_existing_cluster.html
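If you want to combine the two steps, here is a minimal sketch (assuming the Hive processes run as the OS user "hive"; the grep pattern is illustrative and the paths will differ per install):
# ps -ef | grep '^hive' | grep -o '[^ ]*/bin/java' | sort -u | while read JAVA_BIN; do "$JAVA_BIN" -version; done
This extracts every distinct java binary path from the running hive processes and prints each one's version.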
06-07-2017
07:25 AM
@John Cleveland One thing I noticed in your command is the path to student_data.txt: it has "user/admin", shouldn't it be "/user/admin" (a leading slash is missing)?
LOAD 'user/admin/Pig_Data/student_data.txt'
Are you running the job as the logged-in user "vagrant"? In that case Pig will resolve the relative path "user/admin/..." against that user's home directory, so the file will not be found where you expect.
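As a quick check (a minimal sketch, assuming the data lives in HDFS), you can compare where the two forms of the path resolve to:
# hdfs dfs -ls /user/admin/Pig_Data/student_data.txt
# hdfs dfs -ls user/admin/Pig_Data/student_data.txt
The first (absolute) form always points to the same location; the second (relative) form is resolved against the home directory of the user running the command, e.g. /user/vagrant.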
06-06-2017
09:22 AM
1 Kudo
@kotesh banoth Looks like you were earlier using the old "Tech Preview" Zeppelin.
Please try to delete it completely and then start the server again. You can delete the Zeppelin service from the Ambari database as follows.
Step 0). Stop the Ambari Server and take a backup of your Zeppelin notebooks/interpreter settings.
# ambari-server stop
Step 1). First take a fresh DB dump (before making any DB change we should take a dump; a sample dump command is sketched after these steps).
Step 2). Delete the Zeppelin service completely from the Ambari DB using the following queries:
delete from hostcomponentstate where service_name = 'ZEPPELIN';
delete from hostcomponentdesiredstate where service_name = 'ZEPPELIN';
delete from servicecomponentdesiredstate where service_name = 'ZEPPELIN';
delete from servicedesiredstate where service_name = 'ZEPPELIN';
delete from alert_current where history_id in (select alert_id from alert_history where service_name = 'ZEPPELIN');
delete from alert_notice where history_id in (select alert_id from alert_history where service_name = 'ZEPPELIN');
delete from alert_history where service_name = 'ZEPPELIN';
delete from alert_grouping where definition_id in (select definition_id from alert_definition where service_name = 'ZEPPELIN');
delete from alert_history where alert_definition_id in (select definition_id from alert_definition where service_name = 'ZEPPELIN');
delete from alert_current where definition_id in (select definition_id from alert_definition where service_name = 'ZEPPELIN');
delete from alert_definition where service_name = 'ZEPPELIN';
delete from alert_group_target where group_id in ( select group_id from alert_group where service_name = 'ZEPPELIN');
delete from alert_group where service_name = 'ZEPPELIN';
delete from serviceconfighosts where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZEPPELIN');
delete from serviceconfigmapping where service_config_id in (select service_config_id from serviceconfig where service_name = 'ZEPPELIN');
delete from serviceconfig where service_name = 'ZEPPELIN';
delete from requestresourcefilter where service_name = 'ZEPPELIN';
delete from requestoperationlevel where service_name = 'ZEPPELIN';
delete from clusterservices where service_name ='ZEPPELIN';
delete from clusterconfig where type_name like 'zeppelin%';
delete from clusterconfigmapping where type_name like 'zeppelin%';
Step 3). Start the Ambari Server.
# ambari-server start
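For Step 1, a minimal sketch assuming the default embedded PostgreSQL setup (database and user both named "ambari"; adjust for MySQL/Oracle or for custom names):
# pg_dump -U ambari ambari > /tmp/ambari_db_backup.sql
The delete statements above can then be run from psql (or your database's own CLI) connected to that same Ambari database.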
06-06-2017
04:31 AM
@white wartih The AMS scripts use the standard Python "psutil" module to find the disk_io_counters, as mentioned in:
https://github.com/apache/ambari/blob/release-2.5.0/ambari-metrics/ambari-metrics-host-monitoring/src/main/python/psutil/psutil/__init__.py#L1705-L1726
File "/usr/lib/python2.6/site-packages/resource_monitoring/psutil/build/lib.linux-x86_64-2.7/psutil/__init__.py", line 1726, in disk_io_counters
raise RuntimeError("couldn't find any physical disk")
The error Python is raising is "couldn't find any physical disk". So can you please check whether there are any disk issues? Do you see any problems with the following commands?
# df -h
# du
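A few additional checks you could run (a minimal sketch, assuming a Linux host; on Linux psutil reads disk information from /proc):
# lsblk
# cat /proc/partitions
# cat /proc/diskstats
If these show no physical block devices, that would explain why psutil cannot find any disk.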
06-06-2017
03:52 AM
1 Kudo
@white wartih
When you are trying to start the AMS collector from the Ambari UI, do you see any errors in the AMS logs? If the following command returns nothing, the AMS collector is not running, so we first need to check the AMS log for any error/exception. Can you please share the errors/exceptions observed in ambari-metrics-collector.log?
# netstat -tanlp | grep 6188
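For example (a minimal sketch; the default log directory is assumed, your ams-env settings may point elsewhere):
# tail -n 200 /var/log/ambari-metrics-collector/ambari-metrics-collector.log | grep -iE "error|exception"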
06-05-2017
04:54 PM
1 Kudo
@Mahmoud Shash Regarding your error: ERROR [main] ModuleFileUnmarshaller:141 - Cannot parse
/var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.1.xml
...
Caused by: org.xml.sax.SAXParseException; systemId:
file:/var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.1.xml;
lineNumber: 24; columnNumber: 22; cvc-complex-type.2.4.a: Invalid
content was found starting with element 'downgrade-allowed'. One of
'{upgrade-path, order}' is expected. at ...
Can you please open that file and check what is on line 24? Can you please also share the file here?
/var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.1.xml
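To look at the offending line with a bit of context, and to confirm the file is at least well-formed XML (a minimal sketch; xmllint is assumed to be installed and only checks well-formedness, not the upgrade-pack schema):
# sed -n '20,30p' /var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.1.xml
# xmllint --noout /var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades/nonrolling-upgrade-2.1.xml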
06-05-2017
04:45 PM
@Mahmoud Shash
As we can see, you are getting the following error:
1) Error injecting constructor, org.apache.ambari.server.AmbariException: Stack Definition Service at '/var/lib/ambari-server/resources/common-services/NIFI/1.0.0/metainfo.xml' doesn't contain a metainfo.xml file
at org.apache.ambari.server.stack.StackManager.<init>(StackManager.java:144)
while locating org.apache.ambari.server.stack.StackManager annotated with interface com.google.inject.assistedinject.Assisted
at org.apache.ambari.server.api.services.AmbariMetaInfo.init(AmbariMetaInfo.java:261)
at org.apache.ambari.server.api.services.AmbariMetaInfo.class(AmbariMetaInfo.java:135)
while locating org.apache.ambari.server.api.services.AmbariMetaInfo
1 error
at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:987)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1013)
at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.checkServiceConfigs(DatabaseConsistencyCheckHelper.java:790)
at org.apache.ambari.server.checks.DatabaseConsistencyCheckHelper.runAllDBChecks(DatabaseConsistencyCheckHelper.java:178)
at org.apache.ambari.server.checks.DatabaseConsistencyChecker.main(DatabaseConsistencyChecker.java:106)
Caused by: org.apache.ambari.server.AmbariException: Stack Definition Service at '/var/lib/ambari-server/resources/common-services/NIFI/1.0.0/metainfo.xml' doesn't contain a metainfo.xml file
at org.apache.ambari.server.stack.ServiceDirectory.parseMetaInfoFile(ServiceDirectory.java:392)
So can you please try the following:
# mv /var/lib/ambari-server/resources/common-services/NIFI /tmp
Then restart your process. I suspect that something is missing or incorrect (corrupted) inside the following directory, so it would be better if you can compare it with the same directory from a working environment:
/var/lib/ambari-server/resources/common-services/NIFI
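Before moving the directory aside, it may help to confirm whether the metainfo.xml actually exists and is well-formed (a minimal check; xmllint is assumed to be available):
# ls -l /var/lib/ambari-server/resources/common-services/NIFI/1.0.0/metainfo.xml
# xmllint --noout /var/lib/ambari-server/resources/common-services/NIFI/1.0.0/metainfo.xml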
06-05-2017
11:36 AM
@Robin Dong Looks like your AMS HBase configurations are missing or not correct.
Are you running AMS in embedded mode? Do you see the following files here?
# ls -lart /etc/ambari-metrics-collector/conf/
total 36
drwxr-xr-x. 3 root root 4096 Aug 18 2016 ..
-rw-r--r--. 1 ams hadoop 7868 Aug 18 2016 ams-site.xml
-rw-r--r--. 1 ams hadoop 1000 Aug 18 2016 ssl-server.xml
-rw-r--r--. 1 ams hadoop 6081 Sep 20 2016 hbase-site.xml
drwxr-xr-x. 2 ams hadoop 4096 Nov 23 2016 .
-rw-r--r--. 1 ams hadoop 1319 Apr 4 13:38 log4j.properties
-rw-r--r--. 1 ams hadoop 1283 Apr 4 13:38 ams-env.sh
If you find any of these files missing, it would be easiest to reinstall AMS.
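If the files are present, a quick way to sanity-check the collector's HBase settings (a minimal sketch, assuming the embedded-mode config lives in this directory):
# grep -A1 "hbase.rootdir" /etc/ambari-metrics-collector/conf/hbase-site.xml
# grep -A1 "hbase.cluster.distributed" /etc/ambari-metrics-collector/conf/hbase-site.xml
In embedded mode, hbase.cluster.distributed is normally false and hbase.rootdir points to a local path.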
06-05-2017
08:17 AM
@Rohit Sharma It's a bug, reported here: "Zeppelin view doesn't work with JDK 1.8_91+": https://issues.apache.org/jira/browse/AMBARI-18918
Either downgrade the JDK or, better, upgrade Ambari to 2.5.
Ambari 2.5.1 upgrade guide: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-upgrade/content/upgrading_ambari.html
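To confirm which JDK your Ambari server is currently configured with (a minimal check; the properties file path shown is the default and may differ in your setup):
# grep java.home /etc/ambari-server/conf/ambari.properties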
06-05-2017
04:44 AM
@Robin Dong
1. Are you running AMS in embedded mode or in external mode?
2. What errors do you see in the ambari-metrics-collector.log file when you try to start it?
3. In your attached screenshot we see NameNode UI, HBase Master, and NodeManager related critical alerts; these might not be directly related to the AMS startup issue, but in order to see what is going wrong we can take a look at those components' logs as well.
4. Do you have sufficient memory on the host where you are running these processes?
# free -m
# lsof -p $PID
5. Are all the hosts configured with correct FQDNs? The output of "hostname -f" should be resolvable from every cluster node.
# hostname -f
6. Also, can you please check whether the hostname and port mentioned in the critical alert in your screenshot are reachable, or whether a firewall is blocking access to that port?
# telnet ip-172-31-1-92 $PORT
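If telnet is not available on the host, nc can be used for the same check (a minimal sketch; replace $PORT with the port named in the alert):
# nc -zv ip-172-31-1-92 $PORT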