Member since: 03-04-2019
Posts: 59
Kudos Received: 24
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5417 | 07-26-2018 08:10 PM
 | 6256 | 07-24-2018 09:49 PM
 | 2984 | 10-08-2017 08:00 PM
 | 2541 | 07-31-2017 03:17 PM
 | 882 | 12-05-2016 11:24 PM
07-08-2021
03:38 AM
1. Log in to the server hosting the Ambari database.
2. Take a backup of the database (replace XXXXXX with the correct password):
nohup mysqldump -u root -pXXXXXX --databases ambari > /ambari.sql &
3. Log in to MySQL with the root or ambari account and remove the Hive keytab entries:
delete from kerberos_principal_host where principal_name like '%hive%';
delete from kerberos_principal where principal_name like '%hive%';
4. Restart the Ambari server.
5. Regenerate the keytabs with a valid account.
6. Start the NodeManager.
Note: this is not limited to Hive; you can remove entries based on the error, since these cached entries in the Ambari database prevent the keytabs from being regenerated.
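Before deleting anything, it can help to confirm what is actually cached. A minimal check, assuming the same ambari database and tables used in step 3 and that the MySQL client is run on the database host (replace XXXXXX with the real password):

# List the cached principal entries that the DELETE statements above would remove
mysql -u root -pXXXXXX ambari -e "SELECT principal_name FROM kerberos_principal WHERE principal_name LIKE '%hive%';"
mysql -u root -pXXXXXX ambari -e "SELECT COUNT(*) FROM kerberos_principal_host WHERE principal_name LIKE '%hive%';"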
06-28-2020
07:21 AM
I tried your query, but if the table has no comment, it produces a duplicate record for that table, so I modified it a bit:

mysql -u hive -p
<ENTER YOUR HIVE PASSWORD>
use metastore;

SELECT * FROM (
    SELECT DBS.NAME AS OWNER,
           TBLS.TBL_NAME AS OBJECT_NAME,
           TBL_COMMENTS.TBL_COMMENT AS OBJECT_DESCRIPTION,
           TBLS.TBL_ID AS OBJECT_ID,
           TBLS.TBL_TYPE AS OBJECT_TYPE,
           "VALID" AS OBJECT_STATUS,
           COLUMNS_V2.COLUMN_NAME,
           COLUMNS_V2.COMMENT AS COLUMN_DESCRIPTION,
           COLUMNS_V2.TYPE_NAME AS DATA_TYPE
    FROM DBS
    JOIN TBLS ON DBS.DB_ID = TBLS.DB_ID
    JOIN SDS ON TBLS.SD_ID = SDS.SD_ID
    JOIN COLUMNS_V2 ON COLUMNS_V2.CD_ID = SDS.CD_ID
    JOIN (
        SELECT DISTINCT TBL_ID, TBL_COMMENT FROM (
            SELECT TBLS.TBL_ID TBL_ID, TABLE_PARAMS.PARAM_KEY, TABLE_PARAMS.PARAM_VALUE, TABLE_PARAMS.PARAM_VALUE AS TBL_COMMENT
            FROM TBLS
            JOIN TABLE_PARAMS ON TBLS.TBL_ID = TABLE_PARAMS.TBL_ID
            WHERE TABLE_PARAMS.PARAM_KEY = "comment"
            UNION ALL
            SELECT TBLS.TBL_ID TBL_ID, TABLE_PARAMS.PARAM_KEY, TABLE_PARAMS.PARAM_VALUE, "" AS TBL_COMMENT
            FROM TBLS
            JOIN TABLE_PARAMS ON TBLS.TBL_ID = TABLE_PARAMS.TBL_ID
            WHERE TABLE_PARAMS.PARAM_KEY <> "comment"
              AND TBLS.TBL_ID NOT IN (SELECT TBL_ID FROM TABLE_PARAMS WHERE TABLE_PARAMS.PARAM_KEY = "comment")
        ) TBL_COMMENTS_INTERNAL
    ) TBL_COMMENTS ON TBLS.TBL_ID = TBL_COMMENTS.TBL_ID
) AS view
WHERE OWNER = "database_name_goes_here"
  AND OBJECT_NAME = "table_name_goes_here";
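If you need to run this regularly, a small sketch; the file name below is hypothetical, and the credentials are the same hive/metastore ones used above:

# Save the query above as table_metadata.sql (hypothetical name) and run it non-interactively,
# writing tab-separated output to a file
mysql -u hive -p metastore < table_metadata.sql > table_metadata.tsv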
01-31-2020
05:54 AM
There are properties that are set in both places: in the service-level configurations and under Admin > Kerberos configurations, such as yarn.admin.acl. If the two settings point to different values, which one does the service pick when required?
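One way to see which value actually won is to look at what Ambari rendered onto the hosts; a sketch assuming a standard HDP layout where the client configuration lives under /etc/hadoop/conf:

# Show the yarn.admin.acl property and the value that follows it in the rendered yarn-site.xml
grep -A1 'yarn.admin.acl' /etc/hadoop/conf/yarn-site.xml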
12-09-2019
12:58 PM
@Aaki_08 You may want to try executing the following commands:
1. On the Ambari Server host: ambari-server stop
2. On all hosts: ambari-agent stop
3. On the Ambari Server host: ambari-server uninstall-mpack --mpack-name=hdf-ambari-mpack --verbose
Hope this helps you,
Matt
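If you want to confirm the mpack is actually gone after step 3, one rough check; the resource directory below is the default Ambari location and is an assumption, so adjust it if your install differs:

# List the management packs still present under the Ambari server resources directory (path is an assumption)
ls -l /var/lib/ambari-server/resources/mpacks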
09-02-2019
06:50 PM
@adelgacem Can you share a sample of how you invoked Oozie using the InvokeHTTP processor? I'm trying to run an Oozie workflow from NiFi; it would be a great help if you could share the NiFi processor configuration details.
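For context, the call the InvokeHTTP processor ultimately needs to make is an Oozie REST job submission. A rough curl equivalent is below; the Oozie host/port, HDFS paths, and user are placeholder assumptions, not values from this thread:

# Hypothetical example: submit and start an Oozie workflow through the REST API
curl -X POST -H "Content-Type: application/xml" \
  -d '<configuration>
        <property><name>user.name</name><value>someuser</value></property>
        <property><name>oozie.wf.application.path</name><value>hdfs://namenode:8020/user/someuser/workflow</value></property>
      </configuration>' \
  "http://oozie-host:11000/oozie/v1/jobs?action=start"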
02-06-2019
11:24 PM
1 Kudo
@Pushpak Nand Perhaps you want to try Cloudbreak 2.9 if launching HDP 3.1 is important to you:
https://community.hortonworks.com/articles/239903/introducing-cloudbreak-290-ga.html
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/index.html
You can update to it if you are currently on an earlier release. It does come with default HDP 3.1 blueprints.
10-10-2017
06:13 PM
@vperiasamy I was experiencing a similar issue with my Atlas UI after this 'ABORTED' upgrade. I happened to try a different browser, and the issue was resolved. I tried with Ranger as well, and it's indeed a browser issue. Thanks.
10-19-2017
09:11 PM
1 Kudo
@dsun During the upgrade process, a component is supposed to be restarted after the hdp-select command has been run so that it picks up the new binaries. The component needs to shut down and start up after the hdp-select command has been run; that way it will report to Ambari that its version has changed and what its current state is. In the event that you get stuck (as you did) during the upgrade, you can unwind the versioning with a process like this:

1. Make sure all pieces of the component are running.
2. Run the `hdp-select set` command on all nodes in the cluster to set the new version. Make sure you get all of the pieces for the component (e.g. hadoop-hdfs-namenode, hadoop-hdfs-journalnode, etc.).
3. Restart all processes for the component.
4. Verify that the O/S processes are running with the proper version of the jar files.
5. Lather, rinse, and repeat for all components in the cluster.

Once you have successfully gotten everything restarted with the proper bits, you should be able to manually finalize the upgrade with the following command to the Ambari Server:

ambari-server set-current --cluster=<clustername> --version-display-name=HDP-2.6.2.0

If you get an error that components are not upgraded, you can check the components and hosts again. If everything seems OK, then you may need to tweak a table in the database. I ran into this when Atlas did not properly report the upgraded version to Ambari. NOTE: THIS SHOULD BE DONE WITH THE GUIDANCE OF HORTONWORKS SUPPORT ONLY.

ambari=> SELECT h.host_name, hcs.service_name, hcs.component_name, hcs.version
         FROM hostcomponentstate hcs
         JOIN hosts h ON hcs.host_id = h.host_id
         ORDER BY hcs.version, hcs.service_name, hcs.component_name, h.host_name;
host_name | service_name | component_name | version
----------------------------------+----------------+-------------------------+-------------
scregione1.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm0.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm1.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm2.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionw0.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionw1.field.hortonworks.com | ATLAS | ATLAS_CLIENT | 2.6.1.0-129
scregionm0.field.hortonworks.com | ATLAS | ATLAS_SERVER | 2.6.1.0-129
scregionm1.field.hortonworks.com | DRUID | DRUID_BROKER | 2.6.2.0-205
scregionm1.field.hortonworks.com | DRUID | DRUID_COORDINATOR | 2.6.2.0-205
scregionw0.field.hortonworks.com | DRUID | DRUID_HISTORICAL | 2.6.2.0-205
scregionw1.field.hortonworks.com | DRUID | DRUID_HISTORICAL | 2.6.2.0-205
scregionw0.field.hortonworks.com | DRUID | DRUID_MIDDLEMANAGER | 2.6.2.0-205
scregionw1.field.hortonworks.com | DRUID | DRUID_MIDDLEMANAGER | 2.6.2.0-205
scregionm2.field.hortonworks.com | DRUID | DRUID_OVERLORD | 2.6.2.0-205
scregionm2.field.hortonworks.com | DRUID | DRUID_ROUTER | 2.6.2.0-205
scregionm2.field.hortonworks.com | DRUID | DRUID_SUPERSET | 2.6.2.0-205
scregione1.field.hortonworks.com | HBASE | HBASE_CLIENT | 2.6.2.0-205
scregionm0.field.hortonworks.com | HBASE | HBASE_CLIENT | 2.6.2.0-205
scregionm1.field.hortonworks.com | HBASE | HBASE_CLIENT | 2.6.2.0-205
. . .

After verifying that you have, indeed, upgraded the components, a simple update statement will set the proper version for the erroneous components and allow you to finalize the upgrade:

ambari=> update hostcomponentstate set version='2.6.2.0-205' where component_name = 'ATLAS_CLIENT';
UPDATE 6
ambari=> update hostcomponentstate set version='2.6.2.0-205' where component_name = 'ATLAS_SERVER';
UPDATE 1
After cycling the Ambari Server, you should be able to finalize:

[root@hostname ~]# ambari-server set-current --cluster=<cluster> --version-display-name=HDP-2.6.2.0
Using python /usr/bin/python
Setting current version...
Enter Ambari Admin login: <username>
Enter Ambari Admin password:
Current version successfully updated to HDP-2.6.2.0
Ambari Server 'set-current' completed successfully.
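As a follow-up check, one quick way to spot any component still reporting the old version is to aggregate the same hostcomponentstate table; the GROUP BY query below is a sketch of mine, not something from the original thread:

ambari=> SELECT version, service_name, component_name, count(*)
         FROM hostcomponentstate
         GROUP BY version, service_name, component_name
         ORDER BY version, service_name, component_name;

Anything still listed under the old version (2.6.1.0-129 in this example) is a candidate for the update statements shown above.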
10-09-2017
01:39 PM
Thanks @Geoffrey Shelton Okot, all privileges were granted to rangerdba properly, as you can see from my original post. It's related to the upgrade: Ambari is using the mysql-jdbc-driver under /usr/hdp/2.6.2.0-205/ranger-admin/ews/lib/ while everything else was running the 2.6.1 bits for some reason. The upgrade is stuck on some hosts, and I need to figure out why, which is another issue I'm trying to resolve.
07-20-2017
08:18 PM
You need to install phoenix-server.jar on all RegionServer and Master hosts; MetaDataEndpointImpl is a Phoenix coprocessor class that HBase loads from that jar.
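A rough sketch of what that can look like on an HDP node; the jar location and HBase lib paths below are assumptions, so adjust them to your version and layout:

# On every HBase Master and RegionServer host (paths are assumptions; adjust to your install)
cp /usr/hdp/current/phoenix-client/phoenix-server.jar /usr/hdp/current/hbase-regionserver/lib/
cp /usr/hdp/current/phoenix-client/phoenix-server.jar /usr/hdp/current/hbase-master/lib/
# Restart HBase afterwards so the MetaDataEndpointImpl coprocessor class can be found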