Member since
07-30-2019
453
Posts
112
Kudos Received
80
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2398 | 04-12-2023 08:58 PM |
| | 4970 | 04-04-2023 11:48 PM |
| | 1586 | 04-02-2023 10:24 PM |
| | 3469 | 07-05-2019 08:38 AM |
| | 3400 | 05-13-2019 06:21 AM |
01-10-2019
02:19 PM
Thanks Akhil! If your Ambari is configured with HTTPS:
/var/lib/ambari-server/resources/scripts/configs.py -l asn1.openstacklocal -t 8443 -s https -u admin -p admin -a set -n asnaik -c hive-env -k beeline_jdbc_url_default -v container
... View more
12-20-2018
06:01 AM
2 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks support if it's a production cluster.
Problem Statement: I have installed HDF 3.1 and Ambari 2.6.2.2, and I am upgrading Ambari to 2.7.0.0 so I can upgrade HDF to 3.2+ versions. The upgrade fails with the exception below:
INFO: about to run command: /usr/java/jdk1.8.0_162/bin/java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/java/latest/mysql-connector-java.jar:/usr/share/java/mysql-connector-java.jar' org.apache.ambari.server.upgrade.SchemaUpgradeHelper > /var/log/ambari-server/ambari-server.out 2>&1
INFO:
process_pid=16599
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 1060, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 1030, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 980, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 79, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 262, in upgrade
retcode = run_schema_upgrade(args)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 162, in run_schema_upgrade
upgrade_response = json.loads(stdout)
File "/usr/lib/ambari-server/lib/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Inspecting the ambari-server log, I found this exception:
2018-09-04 06:34:55,967 ERROR [main] SchemaUpgradeHelper:238 - Upgrade failed. java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=c174, serviceName=AMBARI_INFRA, componentName=INFRA_SOLR_CLIENT, stackInfo=HDF-3.1 at org.apache.ambari.server.state.ServiceComponentImpl.updateComponentInfo
PS: here c174 is my cluster name.
Root cause: Starting from Ambari 2.7.0, the order of the Ambari upgrade steps has changed: you must run the upgrade-mpack command first, and only then run the ambari-server upgrade. The order of execution is:
ambari-server upgrade-mpack \
--mpack=http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/<version>/tars/hdf_ambari_mp/hdf-ambari-mpack-<version>-<build-number>.tar.gz \
--verbose
ambari-server upgrade
Please refer to the documentation before upgrading:
HDF-3.3: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.3.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
HDF-3.2: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
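The traceback above bottoms out in `json.loads` because the schema-upgrade helper printed a Java stack trace to stdout instead of the JSON status that Ambari's Python wrapper expects to parse. A minimal sketch of that failure mode (the stdout string here is illustrative, not the exact helper output):

```python
import json

# The upgrade wrapper expects the Java helper to print a JSON status,
# but on failure the helper prints a stack trace instead, so parsing
# fails with the ValueError seen in the traceback above.
stdout = ("Upgrade failed. java.lang.RuntimeException: Trying to create "
          "a ServiceComponent not recognized in stack info")
try:
    json.loads(stdout)
except ValueError as err:  # simplejson raises ValueError, as in the log
    print("parse failed:", err)
```

This is why a Java-side failure surfaces as "No JSON object could be decoded" on the Python side: the real error is in the helper's output above it.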
... View more
Labels:
12-26-2018
01:38 PM
Thanks bro! I only get log files about the worker exiting; I am unable to find the logs with the actual error. Where can I find them?
... View more
12-19-2018
06:08 PM
Consider a cluster with SSO configured, where the SSO certificate is longer than 2048 characters, being upgraded to Ambari 2.7.0 with the command: ambari-server upgrade
There is a high chance the upgrade will get stuck in the schema upgrade phase, and in the ambari-server log you will find the exception below:
Internal Exception: java.sql.BatchUpdateException: Batch entry 2 INSERT INTO ambari_configuration (property_name, category_name, property_value) VALUES ('ambari.sso.provider.certificate','sso-configuration','<LONG CERTIFICATE VALUE>') was aborted: ERROR: value too long for type character varying(2048) Call getNextException to see other errors in the batch.
Root Cause: While upgrading to Ambari 2.7.x and later, Ambari moves sensitive data from ambari.properties into the ambari_configuration database table. If any of this sensitive data is longer than 2048 characters, the insert fails. Certificates are usually under 2048 characters, but in some cases they are larger, and then you will hit the above error.
This issue is fixed in Ambari-2.8 as part of : https://issues.apache.org/jira/browse/AMBARI-24992
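Before upgrading, you can check up front whether any property value would overflow the 2048-character column. This is a hypothetical helper of my own, not an Ambari tool; it just scans key=value lines the way ambari.properties is laid out:

```python
# Hypothetical pre-upgrade check: flag property values that would
# overflow the VARCHAR(2048) ambari_configuration column.
LIMIT = 2048

def oversized_properties(lines):
    """Return (key, value_length) pairs for values longer than LIMIT."""
    result = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        if len(value) > LIMIT:
            result.append((key.strip(), len(value)))
    return result

# A 3000-character public key value would trip the upgrade failure above.
sample = ["authentication.jwt.publicKey=" + "A" * 3000,
          "server.port=8080"]
print(oversized_properties(sample))
```

Running it against a copy of your ambari.properties before the upgrade tells you whether you need the workaround below at all.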
The workaround here is:
1. Edit the ambari.properties file and remove the entry for authentication.jwt.publicKey
2. Perform the Ambari upgrade
3. Manually alter the database:
ALTER TABLE ambari_configuration ALTER COLUMN property_value TYPE VARCHAR(4000);
NOTE: This syntax is specific to Postgres
4. Manually insert the relevant PEM file's contents into the database
INSERT INTO ambari_configuration(category_name, property_name, property_value)
VALUES ('sso-configuration', 'ambari.sso.provider.certificate', '<CONTENT OF THE PEM FILE>');
5. Restart ambari database
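If you script step 4, note that PEM content can contain single quotes that must be escaped before being embedded in SQL. A minimal sketch that builds the INSERT statement (the function name is mine, not part of Ambari):

```python
def sso_cert_insert(pem_text):
    """Build the step-4 INSERT, escaping single quotes for SQL."""
    escaped = pem_text.replace("'", "''")  # standard SQL quote doubling
    return (
        "INSERT INTO ambari_configuration"
        "(category_name, property_name, property_value) "
        "VALUES ('sso-configuration', "
        "'ambari.sso.provider.certificate', '" + escaped + "');"
    )

# Truncated PEM placeholder; use your real certificate contents.
print(sso_cert_insert("-----BEGIN CERTIFICATE-----\nMIIB...")[:80])
```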
... View more
Labels:
12-19-2018
05:52 PM
2 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks support if it's a production cluster.
Issue Description: Ambari shows the wrong NiFi version after upgrading to HDF 3.3. Per the release notes, the NiFi version in HDF 3.3 is 1.8, but Ambari shows it as 1.7 on the Stacks and Versions page.
Root cause: This is a known bug in hdf-ambari-mpack-3.3.0.0-165 (the mpack used by Ambari to manage HDF 3.3). The issue is fixed in the HDF 3.3.1 mpack.
Solution: The version shown under Stacks and Versions can safely be ignored; it is only an Ambari display issue. You can verify the actual NiFi version in the NiFi UI, or with the hdf-select command on a host where NiFi is installed.
Workaround :
Navigate to : /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.3.0.0-165/hdp-addon-services/HDF/3.3/NIFI/1.7.0
Edit the File : metainfo.xml
Change the line :
<version>1.7.0</version>
to
<version>1.8.0</version>
Restart the Ambari server.
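The manual edit above can also be scripted. A sketch using Python's standard XML library, applied here to an in-memory sample with the same `<version>` element rather than the real metainfo.xml (back up the file before editing it in place):

```python
import xml.etree.ElementTree as ET

# Patch the <version> element the same way the manual workaround does.
# This sample mimics the shape of metainfo.xml; for the real file, use
# ET.parse(path) / tree.write(path) instead of the in-memory string.
metainfo = """<metainfo><services><service>
  <name>NIFI</name><version>1.7.0</version>
</service></services></metainfo>"""

root = ET.fromstring(metainfo)
for version in root.iter("version"):
    if version.text == "1.7.0":
        version.text = "1.8.0"

print(ET.tostring(root, encoding="unicode"))
```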
... View more
Labels:
12-27-2018
02:11 AM
Hey @Henry Luo, as I mentioned earlier, we need to fetch the version_xml from the repo_version table, modify it, and update the table. You can use a UI tool like pgAdmin 4 (if you have a Postgres UI) or update it with the command I mentioned above. If you are not comfortable with database operations, you can ignore the wrong version display, since I explained above why it happens. Hope this helps. Please accept the answer if it did.
... View more
12-25-2018
12:53 PM
Hi @Michael Mester, can you please see if this comment helps you? Log in and accept this answer if it did. 🙂
... View more
04-22-2019
01:03 AM
thanks @asubramanian
... View more
12-10-2018
03:40 PM
For those interested in using this in PowerShell, this is how I'm calling the REST API:
$Headers = @{'X-Requested-By' = 'ambari'}
$Body = '[{"PrivilegeInfo": { "permission_name": "VIEW.USER", "principal_name": "group_poc", "principal_type": "GROUP" } }]'
$Resp = Invoke-WebRequest -Method Post -Uri "https://<Your-Cluster-Name>/api/v1/views/HIVE/versions/2.0.0/instances/AUTO_HIVE20_INSTANCE/privileges/" -Credential <Your-Credentials> -Headers $Headers -Body $Body
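For comparison, the same request can be built with Python's standard library alone. The endpoint path and JSON payload mirror the PowerShell example above; `ambari.example.com` is a placeholder for your cluster, and you would still need to attach your own credentials before sending:

```python
import json
import urllib.request

# Grant VIEW.USER on the Hive view to group_poc, mirroring the
# PowerShell example. Host and credentials are placeholders.
url = ("https://ambari.example.com/api/v1/views/HIVE/versions/2.0.0"
       "/instances/AUTO_HIVE20_INSTANCE/privileges/")
body = json.dumps([{"PrivilegeInfo": {
    "permission_name": "VIEW.USER",
    "principal_name": "group_poc",
    "principal_type": "GROUP",
}}]).encode()

req = urllib.request.Request(
    url, data=body, method="POST",
    headers={"X-Requested-By": "ambari",
             "Content-Type": "application/json"})
# urllib.request.urlopen(req)  # uncomment and add auth to actually send
print(req.get_method(), len(req.data))
```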
... View more
01-13-2019
02:33 PM
What worked for me was setting the hostname of the system! I had installed the HDF cluster as xyz.local.abc, but the hostname kept resetting itself after a system restart; once I set the hostname to the one above, the heartbeats started.
... View more