Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 608 | 06-04-2025 11:36 PM |
| | 1168 | 03-23-2025 05:23 AM |
| | 578 | 03-17-2025 10:18 AM |
| | 2172 | 03-05-2025 01:34 PM |
| | 1369 | 03-03-2025 01:09 PM |
02-26-2021
02:15 AM
Agreed, but is there a way to avoid this wastage, apart from migrating the data to the local filesystem and then back to HDFS? Example: we have a 500 MB file with a 128 MB block size, i.e. 4 blocks on HDFS. Now that we have changed the block size to 256 MB, how do we make the file on HDFS occupy 2 blocks of 256 MB instead of 4? Please suggest.
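Not from the original question, just an illustrative sketch: the block count is ceil(file size / block size), and one commonly suggested way to avoid the local-filesystem round trip is an HDFS-to-HDFS copy performed with the new block size. The paths below are hypothetical, and the cluster-only commands are shown commented out:

```shell
# Hypothetical in-cluster rewrite with a 256 MB block size (no local FS round trip);
# these need a live cluster, so they are shown for reference only:
#   hdfs dfs -D dfs.blocksize=268435456 -cp /data/file_500mb /data/file_500mb.tmp
#   hdfs dfs -rm /data/file_500mb && hdfs dfs -mv /data/file_500mb.tmp /data/file_500mb

# Block-count arithmetic: ceil(file_size / block_size), using integer math.
FILE_MB=500
for BLOCK_MB in 128 256; do
  echo "${BLOCK_MB} MB blocks: $(( (FILE_MB + BLOCK_MB - 1) / BLOCK_MB ))"
done
```

For the 500 MB example this prints 4 blocks at 128 MB and 2 blocks at 256 MB, matching the counts described above.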
02-02-2021
06:28 AM
@Abdullah If the sensitive props key value is obscured in the globals.xml file, you are running a version of CFM newer than 1.0.0, where the bug existed in which each node in the NiFi cluster ended up with a different random sensitive props key. In CFM 1.0.1 and newer, the user is required to set this property (it is no longer set to a random value when left blank). So perhaps you are having a different issue here? Did you change the sensitive props key in your CFM NiFi configs and then have an issue starting your NiFi? I suggest starting a new question in the community, since you are having a different issue than the one described in this thread.
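For reference, a minimal sketch (not from this thread; the value is a placeholder) of the property involved, which in CFM 1.0.1+ must be set explicitly and be identical on every node:

```
# nifi.properties -- must carry the same value on every node of the cluster
nifi.sensitive.props.key=<your-shared-sensitive-props-key>
```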
01-27-2021
11:17 AM
It is not recommended to update the Ambari DB directly, but since in this case the command `ambari-server update-host-names host_names_changes.json` is not much help, we can perform the actions below after taking an Ambari DB backup. I tested this and it worked.

Note: Please take a DB backup first, and proceed only at your own risk.

In this case, the old host_name is ambarinn.cluster.com and the new name is ambarinn261.cluster.com. We need to concentrate on the host whose healthStatus says "UNKNOWN".

Steps to resolve (ambari-agent and ambari-server successfully stopped):

```
[root@ambarinn261 ~]# su - postgres
Last login: Wed Jan 27 13:14:40 EST 2021 on pts/0
-bash-4.2$ psql
psql (9.2.24)
Type "help" for help.

postgres=# \c ambari
You are now connected to database "ambari" as user "postgres".

ambari=# select host_id,host_name,discovery_status,last_registration_time,public_host_name from ambari.hosts;
 host_id |        host_name        | discovery_status | last_registration_time |    public_host_name
---------+-------------------------+------------------+------------------------+-------------------------
     201 | ambarinn261.cluster.com |                  |          1611772466200 | ambarinn261.cluster.com
       1 | ambarinn.cluster.com    |                  |          1611768543457 | ambarinn.cluster.com
(2 rows)

ambari=# select * from ambari.hoststate;
     agent_version     | available_mem | current_state |                health_status                 | host_id | time_in_state | maintenance_state
-----------------------+---------------+---------------+----------------------------------------------+---------+---------------+-------------------
 {"version":"2.6.1.5"} |        279948 | INIT          | {"healthStatus":"HEALTHY","healthReport":""} |     201 | 1611772466200 |
 {"version":"2.6.1.5"} |       2120176 | INIT          | {"healthStatus":"UNKNOWN","healthReport":""} |       1 | 1611768543457 |
(2 rows)
```

A direct rename fails because host_name carries a unique constraint, so the row already holding the new name must first be moved aside to a temporary name:

```
ambari=# UPDATE ambari.hosts SET host_name='ambarinn261.cluster.com' WHERE host_id=1;
ERROR:  duplicate key value violates unique constraint "uq_hosts_host_name"
DETAIL:  Key (host_name)=(ambarinn261.cluster.com) already exists.

ambari=# UPDATE ambari.hosts SET public_host_name='ambarinn261.cluster.com' WHERE host_id=1;
UPDATE 1
ambari=# UPDATE ambari.hosts SET public_host_name='ambarinn261a.cluster.com' WHERE host_id=201;
UPDATE 1
ambari=# UPDATE ambari.hosts SET host_name='ambarinn261a.cluster.com' WHERE host_id=201;
UPDATE 1
ambari=# UPDATE ambari.hosts SET host_name='ambarinn261.cluster.com' WHERE host_id=1;
UPDATE 1
ambari=# \q
-bash-4.2$ exit
logout
[root@ambarinn261 ~]# ambari-server start
```
01-27-2021
01:54 AM
@sow I am also having the same issue; did you ever find a resolution for it?
01-12-2021
08:46 PM
If we want to limit the interaction of HDP/Hadoop developers, data analysts, or scientists, does that mean we don't need to install the clients on all worker nodes? We have also found that, as a special case, the Sqoop and Oozie clients need to be installed on all nodes, including both master and worker nodes. Is that related to how Sqoop and Oozie work?
01-06-2021
10:04 AM
hdfs, yarn, hive, etc. are system users; they do not have passwords by default, but you can su to them from root. If you really want to set passwords anyway, the command `passwd hdfs` will prompt you to set a new password, but I don't see a reason why anyone would want to do that for system users.
01-05-2021
11:50 PM
@GangWar thank you so much for your help. I assigned myself the "Power User" role and it worked like a charm. However, I'm a bit surprised: my user is an admin user, yet I still had to assign it the Power User role.
01-05-2021
06:12 PM
@Shelton - Thanks for your response. I am able to grant a role to a user in Sentry through Beeline:

```
CREATE ROLE datascientist;
GRANT ROLE datascientist TO USER mayank;
```

The commands above seem to work fine in Beeline, and I am also able to see the role among the user's current roles:

```
SHOW CURRENT ROLES;
+---------------+
|   tab_name    |
+---------------+
| datascientist |
+---------------+
```

However, when I execute the same command in Impala, I don't see any roles assigned to this user.
01-05-2021
11:06 AM
@saivenkatg55 My assumptions: you have already executed the HDP environment preparation (if not, see "Prepare the environment": https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/prepare_the_environment.html), you are running on Linux [RedHat, CentOS], and you have root access!

Note: Replace test.ambari.com with the output of `hostname -f`, and re-adapt the rest to fit your cluster:

```
# root password = welcome1
# hostname = test.ambari.com
# ranger user and password are the same
```

Steps:

Install the MySQL connector if not already installed [optional]:

```
# yum install -y mysql-connector-java
```

Shut down Ambari:

```
# ambari-server stop
```

Re-run the command below; it won't hurt:

```
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```

Back up the Ambari server properties file:

```
# cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.bak
```

Change the timeouts of the Ambari server:

```
# echo 'server.startup.web.timeout=120' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.acquisition-size=5' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-age=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time=14400' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.max-idle-time-excess=0' >> /etc/ambari-server/conf/ambari.properties
# echo 'server.jdbc.connection-pool.idle-test-interval=7200' >> /etc/ambari-server/conf/ambari.properties
```

Create the new Ranger user and grants (the user must be created for both '%' and 'localhost' before the corresponding grants will apply):

```
# mysql -u root -pwelcome1
CREATE USER 'rangernew'@'%' IDENTIFIED BY 'rangernew';
CREATE USER 'rangernew'@'localhost' IDENTIFIED BY 'rangernew';
GRANT ALL PRIVILEGES ON *.* TO 'rangernew'@'localhost';
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON rangernew.* TO 'rangernew'@'test.ambari.com' IDENTIFIED BY 'rangernew';
FLUSH PRIVILEGES;
quit;
```

Create the new Ranger database:

```
# mysql -u rangernew -prangernew
create database rangernew;
show databases;
quit;
```

Start the Ambari server:

```
# ambari-server start
......Desired output.........
Ambari Server 'start' completed successfully.
```

For the Ranger setup in the Ambari UI, use the hostname (in this example, test.ambari.com) and the corresponding passwords, then test the Ranger DB connectivity. If the connection test succeeds, you can now start Ranger successfully.

Drop the old Ranger DB:

```
# mysql -u root -pwelcome1
mysql> Drop database old_Ranger_name;
```

The above steps should resolve your Ranger issue. Was your question answered? If so, make sure to mark the answer as the accepted solution. If you find a reply useful, give it kudos by hitting the thumbs-up button.
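As a hedged aside (not part of the original reply): before relying on the Ambari UI connection test, the new credentials can be checked directly on the DB host. The user and password are the ones created above; the command is shown commented out since it needs the MySQL server to be up:

```
# Confirm the new Ranger DB user can log in and see its database:
#   mysql -u rangernew -prangernew -e 'SHOW DATABASES LIKE "rangernew";'
# A one-row result listing rangernew confirms the user and database are in place.
```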
01-04-2021
12:55 PM
@ibrahima This community helps with the two most-used Hadoop flavors, Cloudera and Hortonworks, and these two software vendors handle and configure Kerberos differently. In Cloudera the keytabs are found in /run/cloudera-scm-agent/process/*, while in Hortonworks they are in /etc/security/keytabs/*, so it would be good if you clearly stated which one you run. Please include a description of your cluster too, such as whether it is HA or not; I see a failover to rm16 in the log, which suggests you have RM HA? Has the user kinited before attempting the operation? Is the user impersonating cabhbwg? Happy hadooping
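As a hedged aside (not from the original post): the "has the user kinited" question can be answered with standard MIT Kerberos commands, run as the affected user. The keytab path is the Hortonworks default mentioned above, and the principal name is hypothetical; both commands need a Kerberized host, so they are shown commented out:

```
# Show the current ticket cache; this fails if the user never ran kinit:
#   klist
# Authenticate from a service keytab (principal name is hypothetical):
#   kinit -kt /etc/security/keytabs/yarn.service.keytab yarn/$(hostname -f)
```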