Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1578 | 06-04-2025 11:36 PM |
| | 2053 | 03-23-2025 05:23 AM |
| | 962 | 03-17-2025 10:18 AM |
| | 3677 | 03-05-2025 01:34 PM |
| | 2541 | 03-03-2025 01:09 PM |
04-03-2019
05:46 PM
@BHASKARA VENNA It's usually advisable to immediately create a local user with sudoer privileges; that could have been your savior here. Depending on your OS, check one of these two links and you should be able to reset the root password. Ubuntu: https://www.maketecheasier.com/reset-root-password-linux/ CentOS: https://opensource.com/article/18/4/reset-lost-root-password
03-28-2019
05:50 PM
@Nathaniel Vala That points to a permission issue on the file /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh. Can you check that the file is readable and executable by user ranger, and only readable by group and world (r-xr--r--)?
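To illustrate the mode described above, here is a minimal sketch using a scratch file at a hypothetical path (/tmp/perm_demo.sh); on a real node you would inspect ranger-admin-services.sh itself, and also confirm its owner with ls -l:

```shell
# Set and verify the mode r-xr--r-- (octal 544) on a scratch file.
touch /tmp/perm_demo.sh
chmod 544 /tmp/perm_demo.sh        # u=rx, g=r, o=r
stat -c '%A' /tmp/perm_demo.sh     # prints -r-xr--r--
```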
03-21-2019
10:02 PM
@Lorenc Hysi Looks like your back-end database could be the cause.

#################################################
# Create the Schema Registry and SAM metastores
#################################################
mysql -u root -p{root_password}
CREATE DATABASE registry;
CREATE DATABASE streamline;
CREATE USER 'registry'@'%' IDENTIFIED BY 'registry';
CREATE USER 'streamline'@'%' IDENTIFIED BY 'streamline';
GRANT ALL PRIVILEGES ON registry.* TO 'registry'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON streamline.* TO 'streamline'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
exit;
#################################################
# Test the connections
#################################################
mysql -u registry -pregistry
mysql -u streamline -pstreamline

Both of the above should succeed. It doesn't cost you a thing to rerun the command below to redeploy the JDBC jar:

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
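If you prefer to replay the statements above non-interactively, one way is to write them to a script file first and then feed it to mysql (mysql -u root -p < /tmp/registry_streamline.sql). The file path is just an example; the IF NOT EXISTS clauses assume MySQL 5.7+:

```shell
# Generate a replayable SQL script for the two metastore databases.
cat > /tmp/registry_streamline.sql <<'SQL'
CREATE DATABASE IF NOT EXISTS registry;
CREATE DATABASE IF NOT EXISTS streamline;
CREATE USER IF NOT EXISTS 'registry'@'%' IDENTIFIED BY 'registry';
CREATE USER IF NOT EXISTS 'streamline'@'%' IDENTIFIED BY 'streamline';
GRANT ALL PRIVILEGES ON registry.* TO 'registry'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON streamline.* TO 'streamline'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL
```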
03-04-2019
02:20 PM
1 Kudo
@harish Prerequisite (MUST DO): If you want to copy a Hive table across to another REALM, you need to set up cross-realm trust between the two MIT KDCs. This enables the destination REALM user to obtain a valid Kerberos ticket to run operations on the source cluster. Having said that, you should not forget to revise your Ranger policies to reflect the new REALM's access privileges if the Ranger plugin has been enabled in the source cluster, which I assume is the case to leverage Ranger authorization. Here is a link to an HCC document that could help you set up the REALM trust.

PROCEDURE
Follow the steps below to migrate a Hive database from one cluster to another:
1. Install Hive on the new cluster and make sure the source and destination clusters are configured identically.
2. Transfer the data in the Hive warehouse directory (/user/hive/warehouse) to the new Hadoop cluster: hadoop distcp <src> <dst>
3. Take a backup of the Hive Metastore: mysqldump hive > /tmp/mydir/backup_hive.sql
4. Install MySQL on the new Hadoop cluster.
5. Open the Hive MySQL Metastore dump file and replace the source NameNode hostname with the destination hostname: hdfs://ip-address-old-namenode:port ---> hdfs://ip-address-new-namenode:port
6. Restore the edited MySQL dump into the MySQL instance on the new Hadoop cluster: mysql hive < /tmp/mydir/backup_hive.sql
7. Configure Hive as normal and perform the Hive schema upgrade if needed.

Impact: Hive metadata describes the database objects, whose contents are stored in the Hadoop Distributed File System (HDFS), and the metadata includes the HDFS URI among other details. Therefore, when you migrate Hive from one cluster to another, you have to point the metadata to the HDFS of the new cluster; if you don't, it will still point to the HDFS of the old cluster and the migration will fail. In case of any failure, initialize the Hive Metastore of the destination cluster and resume the migration following the correct steps: /bin/schematool -initSchema -dbType mysql

On CDH: If you are on Cloudera, you can proceed using the Backup and Disaster Recovery procedure. HTH
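Step 5 above (rewriting the NameNode URI inside the dump) can be sketched with sed. The hostnames old-nn/new-nn, port 8020, and the demo file path are placeholders, not values from the original thread:

```shell
# Simulate a metastore dump containing the old NameNode URI.
printf "LOCATION 'hdfs://old-nn:8020/user/hive/warehouse/t1'\n" > /tmp/backup_hive_demo.sql

# Rewrite every occurrence of the old URI in place.
sed -i 's|hdfs://old-nn:8020|hdfs://new-nn:8020|g' /tmp/backup_hive_demo.sql

cat /tmp/backup_hive_demo.sql
# prints: LOCATION 'hdfs://new-nn:8020/user/hive/warehouse/t1'
```

On a real migration you would run the same sed against /tmp/mydir/backup_hive.sql before restoring it.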
02-24-2019
02:12 AM
@Shraddha Singh Any updates? Did my response resolve the issue? If so, please accept it so the thread is marked as closed.
02-12-2019
03:14 PM
@Michael Bronson Just create the home directory as follows:
# su - hdfs
$ hdfs dfs -mkdir /user/slider
$ hdfs dfs -chown slider:hdfs /user/slider
That should be enough. Good luck!
02-10-2019
10:14 PM
1 Kudo
@Michael Bronson HWX doesn't recommend upgrading an individual HDP component, because one never knows which incompatibilities could impact the other components, and selectively upgraded components tend to be a nightmare during a version upgrade. The latest HDP Kafka version is 2.1.x, delivered by HDP 3.1, but ASF has its own release schedule and naming convention. HTH
02-10-2019
09:31 PM
1 Kudo
@Manjunath P N The latest HDP 3.1, unfortunately, supports Spark 2.3, so you will have to wait for the next major release. But after the Cloudera & Hortonworks merger, my best guess is: don't expect any new HDP version before the release of the combined new offering, Cloudera Data Platform (CDP), sometime in 2020 or thereafter. I would imagine HWX and CLDR are currently more focused on integrating their products than on releasing a newer version. The new combined offering CDP will be based on HDP 3.x and CDH 5. HTH
02-10-2019
08:54 PM
1 Kudo
@Shraddha Singh You are trying to run the dfsadmin command before the NameNode has started. Please ensure the NameNode shows as started in Ambari.