Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2722 | 04-27-2020 03:48 AM |
| | 5283 | 04-26-2020 06:18 PM |
| | 4448 | 04-26-2020 06:05 PM |
| | 3570 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
07-26-2017
06:57 PM
@Miguel Marquez If this issue is now occurring with different packages, such as the Spark client, then the chances are very high that the local repo is causing the issue or that the installation is incomplete. Looking at the yum log might help. The package name should be "ambari-infra-solr":
[root@sandbox ~]# yum info ambari-infra-solr
Installed Packages
Name : ambari-infra-solr
Arch : noarch
Version : 2.5.0.5
Release : 1
Size : 208 M
Repo : installed
From repo : ambari-2.5.0.5-1
Summary : Ambari Logsearch Assembly
URL : http://www.apache.org
License : (c) Apache Software Foundation
Description : Maven Recipe: RPM Package.
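If the local repo is suspect, a quick sanity check could look like the following (a minimal sketch; the paths are the CentOS/RHEL defaults used by the sandbox):
[root@sandbox ~]# grep -i ambari-infra /var/log/yum.log    # look for failed or partial installs
[root@sandbox ~]# yum clean all                            # drop cached repo metadata
[root@sandbox ~]# yum repolist enabled                     # confirm the local/Ambari repos are reachable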
07-26-2017
05:24 PM
@Miguel Marquez The following error usually occurs when the binaries of the components are not correctly or completely installed:
resource_management.core.exceptions.Fail: StaticFile('/usr/lib/ambari-infra-solr-client/solrCloudCli.sh') Source file /usr/lib/ambari-infra-solr-client/solrCloudCli.sh is not found
Please check that you do not have any issues with access to your repositories; some files may have been missed during installation, which would explain the missing file. Reinstalling the ambari-infra components (especially the client) is advised. A manual "yum" command can be used.
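For example, the reinstall could look like this (a sketch; the exact package name may differ slightly between Ambari versions):
[root@sandbox ~]# yum reinstall -y ambari-infra-solr-client
[root@sandbox ~]# ls -l /usr/lib/ambari-infra-solr-client/solrCloudCli.sh    # verify the missing file is back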
07-26-2017
05:01 PM
1 Kudo
@Marc Charbonneau The following error indicates that your JDBC URL is not correct. Perhaps the database name is missing after the host and port of the DB:
java.sql.SQLException: No suitable driver found for jdbc:postgresql://REMOVED FOR SECURITY
Syntax:
jdbc:postgresql://db_host:port/databaseName
Example:
jdbc:postgresql://erie1.example.com:5432/ambari
Example: sudo -u hdfs sqoop import --connect jdbc:postgresql://erie1.example.com:5432/ambari --username ambari --password bigdata --table hosts --hcatalog-database default --hcatalog-table test_sqoop_orc_2 --create-hcatalog-table --hcatalog-storage-stanza "stored as orcfile" -m 1 --driver org.postgresql.Driver
Also, please check that the correct JDBC driver is present in the sqoop client classpath. Example:
cp -f /PATH/TO/postgresql-9.3-1101-jdbc4.jar /usr/hdp/current/sqoop-client/lib/
07-26-2017
03:08 PM
@David Bon-Salomon
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html
07-26-2017
06:33 AM
@Akbarali Momin The following error indicates that the base DN you are using might not be correct:
Caused by: java.sql.SQLRecoverableException: IO Error: JNDI Package failure javax.naming.NameNotFoundException: [LDAP: error code 32 - No Such Object]; remaining name 'cn=xxxx'
Can you please check that the "cn=xxxx" entry you entered (cn=OracleContext,dc=<domain>,dc=com) is correct? (A simple standalone Java program can help reproduce the issue and isolate the cause.)
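Alternatively, "ldapsearch" can confirm whether the DN actually exists before involving the JDBC layer at all (a sketch; the host, port, and DN below are placeholders):
# ldapsearch -x -H ldap://ldap.example.com:389 -b "cn=OracleContext,dc=example,dc=com" -s base
A missing entry comes back as "No such object (32)", matching the error above.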
07-25-2017
05:39 PM
@Sedat Kestepe
- The NameNode determines whether a DataNode is dead or alive by using heartbeats.
- Each DataNode sends a heartbeat message to the NameNode every 3 seconds (the default value).
- This heartbeat interval is controlled by the "dfs.heartbeat.interval" property defined in the hdfs-site.xml file.
- If a DataNode dies, the NameNode waits almost 10 minutes before removing it from the live nodes.
- The time period for declaring a DataNode dead is calculated as: 2 * dfs.namenode.heartbeat.recheck-interval + 10 * 1000 * dfs.heartbeat.interval
The default value of "dfs.namenode.heartbeat.recheck-interval" is 300000 milliseconds (5 minutes), and the default "dfs.heartbeat.interval" is 3 seconds.
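Plugging in those defaults shows where the roughly 10-minute wait comes from:
2 * 300000 ms + 10 * 1000 * 3 = 600000 ms + 30000 ms = 630000 ms = 10.5 minutes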
References:
- https://github.com/apache/hadoop/blob/release-2.7.3-RC1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java#L212
- http://pe-kay.blogspot.com/2016/02/dead-datanode-detection.html
07-25-2017
02:10 PM
1 Kudo
@david garcia Moving the Ambari server involves 4 steps:
1) Stop the old Ambari host and then install the Ambari server on the new host.
2) Change the "/etc/ambari-agent/conf/ambari-agent.ini" file on all the agent hosts to point to the new Ambari server hostname (a sample change is shown after this list).
3) Run the "ambari-server setup" command and point to the database instance where your Ambari DB is present in the "Advanced Database Configuration" section. (If using Postgres, choose option 4, "Postgres DB", not the embedded Postgres DB.) Alternatively, if you want to move the Ambari DB to a new host as well, import the DB dump backup into your new DB host and then choose the new host details during "ambari-server setup".
4) Restart the Ambari server and all the Ambari agents.
The following link provides more details: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-reference/content/ch_amb_ref_moving_the_ambari_server.html
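For step 2, the edit on each agent host would look like this (a sketch; "new-ambari-server.example.com" is a placeholder hostname):
# vi /etc/ambari-agent/conf/ambari-agent.ini
[server]
hostname=new-ambari-server.example.com
After editing, restart each agent with "ambari-agent restart" so it registers with the new server.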
07-25-2017
04:34 AM
@Gordon Banker
After SSHing to your sandbox on port 2222, you can use the "passwd" command to reset the password:
# ssh root@localhost -p 2222
Password: hadoop
[root@sandbox ~]# su - maria_dev
[maria_dev@sandbox ~]$ passwd
Changing password for user maria_dev.
Changing password for maria_dev.
New password: new_dev_maria
Retype new password: new_dev_maria
passwd: all authentication tokens updated successfully.
Both the username and password to log in are "maria_dev". Please see: https://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/
You can also change the Ambari user "maria_dev" password from the Ambari UI. For that, you will have to log in to Ambari as admin/admin and then navigate to Ambari UI -> Manage Ambari -> Users -> maria_dev and change the UI password for maria_dev.
07-24-2017
12:56 PM
1 Kudo
@jorge villa Please try the following link: https://github.com/hortonworks/data-tutorials/blob/archive-hdp-2.5/tutorials/hdp/hdp-2.5/cross-component-lineage-with-apache-atlas-across-apache-sqoop-hive-kafka-storm/assets/crosscomponent_scripts.zip
07-24-2017
02:02 AM
@vijay Rachala We see that you are getting the following error when Ambari tries to install Hadoop from the yum repo. It requires the "libtirpc-devel" package to already be installed on your host, which should come from the operating system repo:
Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_6_0_40-client' returned 1. Error: Package: hadoop_2_5_6_0_40-hdfs-2.7.3.2.5.6.0-40.el6.x86_64 (HDP-2.5) Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Can you please make sure that the "libtirpc-devel" package is installed on your host?
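A quick check could look like this (a sketch; on RHEL/CentOS the package usually comes from the base or optional OS channel, so that repo must be reachable):
# rpm -q libtirpc-devel           # reports "not installed" if it is missing
# yum install -y libtirpc-devel   # pull it in from the OS repo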
You can have a look at a similar post: https://community.hortonworks.com/questions/101877/hdp-2603-install-fails-on-rhel-68-ambari-250-libti.html