Member since
07-30-2019
453
Posts
112
Kudos Received
80
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2977 | 04-12-2023 08:58 PM |
| | 5615 | 04-04-2023 11:48 PM |
| | 1950 | 04-02-2023 10:24 PM |
| | 4151 | 07-05-2019 08:38 AM |
| | 3867 | 05-13-2019 06:21 AM |
03-25-2019
11:46 AM
Thanks Akhil S Naik 🙂
12-31-2018
05:59 PM
Problem Statement: The HDFS encryption guide for HDP 3.0.1 states: "In Ambari, replace the current value of dfs.permissions.superusergroup with the group name 'operator'." However, this option is not editable from the Ambari UI.

Root Cause: As a result of the fix for the Apache bug https://issues.apache.org/jira/browse/AMBARI-22086, Ambari treats dfs.permissions.superusergroup as a group property, and by default group properties are not meant to be edited in the Ambari UI, so the UI disables editing of this property.

Workaround: Log in to the Ambari server host and change this config via configs.py:

/var/lib/ambari-server/resources/scripts/configs.py -l <AMBARI_HOSTNAME> -t 8080 -u <USER_NAME> -p <PASSWORD> -a <GET/SET/DELETE> -n <CLUSTER_NAME> -c <CONFIG_TYPE> -k <KEY> -v <VALUE>
For example: [root@asnaik-asnaik1 ~]# /var/lib/ambari-server/resources/scripts/configs.py --help
Usage: configs.py [options]
Options:
-h, --help show this help message and exit
-t PORT, --port=PORT Optional port number for Ambari server. Default is
'8080'. Provide empty string to not use port.
-s PROTOCOL, --protocol=PROTOCOL
Optional support of SSL. Default protocol is 'http'
-a ACTION, --action=ACTION
Script action: <get>, <set>, <delete>
-l HOST, --host=HOST Server external host name
-n CLUSTER, --cluster=CLUSTER
Name given to cluster. Ex: 'c1'
-c CONFIG_TYPE, --config-type=CONFIG_TYPE
One of the various configuration types in Ambari. Ex:
core-site, hdfs-site, mapred-queue-acls, etc.
To specify credentials please use "-e" OR "-u" and "-p'":
-u USER, --user=USER
Optional user ID to use for authentication. Default is
'admin'
-p PASSWORD, --password=PASSWORD
Optional password to use for authentication. Default
is 'admin'
-e CREDENTIALS_FILE, --credentials-file=CREDENTIALS_FILE
Optional file with user credentials separated by new
line.
To specify property(s) please use "-f" OR "-k" and "-v'":
-f FILE, --file=FILE
File where entire configurations are saved to, or read
from. Supported extensions (.xml, .json>)
[root@asnaik-asnaik1 ~]# /var/lib/ambari-server/resources/scripts/configs.py -l asnaik1 -t 8080 -u admin -p admin -a set -n asnaik -c hdfs-site -k dfs.permissions.superusergroup -v hdfs,operator
2018-12-10 15:19:00,604 INFO ### Performing "set":
2018-12-10 15:19:00,604 INFO ### new property - "dfs.permissions.superusergroup":"hdfs,operator"
2018-12-10 15:19:00,663 INFO ### on (Site:hdfs-site, Tag:version1543379050314)
2018-12-10 15:19:00,675 INFO ### PUTting json into: doSet_version1544455140675467.json
2018-12-10 15:19:00,767 INFO ### NEW Site:hdfs-site, Tag:version1544455140675467
[root@asnaik-asnaik1 ~]# /var/lib/ambari-server/resources/scripts/configs.py -l asnaik1 -t 8080 -u admin -p admin -a get -n asnaik -c hdfs-site -k dfs.permissions.superusergroup |grep -i dfs.permissions.superusergroup
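Under the hood, configs.py first looks up the currently active tag for the config type from Ambari's desired_configs REST response (the "Tag:version1543379050314" seen in the output above), then PUTs a new tagged version. A minimal sketch of that lookup; the sample payload shape is an assumption for illustration, modeled on the `/api/v1/clusters/<cluster>?fields=Clusters/desired_configs` style of response:

```python
# Sketch of how configs.py-style tooling resolves the active config tag from
# Ambari's desired_configs API response before writing a new config version.
# The sample payload below is a hypothetical, trimmed-down illustration.

def get_config_tag(desired_configs_response, config_type):
    """Return the active tag for a config type such as 'hdfs-site'."""
    desired = desired_configs_response["Clusters"]["desired_configs"]
    return desired[config_type]["tag"]

sample_response = {
    "Clusters": {
        "desired_configs": {
            "hdfs-site": {"tag": "version1543379050314"},
        }
    }
}

print(get_config_tag(sample_response, "hdfs-site"))  # version1543379050314
```

The new version the script PUTs back gets a fresh timestamp-based tag, which is why the "NEW Site:hdfs-site, Tag:..." line in the output differs from the old one.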
12-28-2018
02:06 PM
2 Kudos
Problem Statement: I am upgrading Ambari 2.6.2.2 to Ambari 2.7.3, and the ambari-server upgrade command fails with the exception: java.sql.SQLSyntaxErrorException: Unknown table 'ambari_configuration' in information_schema

[root@slambe-1 java]# ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Original file ambari-env.sh kept
INFO: Fixing database objects owner
Ambari Server configured for MySQL. Confirm you have made a backup of the Ambari Server database [y/n] (n)? y
INFO: Upgrading database schema
ERROR: Unexpected ValueError: No JSON object could be decoded
For more info run ambari-server with -v or --verbose option

When I check the ambari-server logs, I see the following exception:

2018-12-28 13:59:07,062 INFO [main] DBAccessorImpl:869 - Executing query: CREATE TABLE ambari_configuration (category_name VARCHAR(100) NOT NULL, property_name VARCHAR(100) NOT NULL, property_value VARCHAR(2048)) ENGINE=INNODB
2018-12-28 13:59:07,087 ERROR [main] SchemaUpgradeHelper:209 - Upgrade failed.
java.sql.SQLSyntaxErrorException: Unknown table 'ambari_configuration' in information_schema
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.StatementImpl.executeQuery(StatementImpl.java:1218)
at com.mysql.cj.jdbc.DatabaseMetaData$7.forEach(DatabaseMetaData.java:2965)
at com.mysql.cj.jdbc.DatabaseMetaData$7.forEach(DatabaseMetaData.java:2953)
at com.mysql.cj.jdbc.IterateBlock.doForAll(IterateBlock.java:56)
at com.mysql.cj.jdbc.DatabaseMetaData.getPrimaryKeys(DatabaseMetaData.java:3006)
at org.apache.ambari.server.orm.DBAccessorImpl.tableHasPrimaryKey(DBAccessorImpl.java:1086)
at org.apache.ambari.server.orm.DBAccessorImpl.addPKConstraint(DBAccessorImpl.java:577)
at org.apache.ambari.server.orm.DBAccessorImpl.addPKConstraint(DBAccessorImpl.java:588)
at org.apache.ambari.server.upgrade.UpgradeCatalog270.addAmbariConfigurationTable(UpgradeCatalog270.java:989)
at org.apache.ambari.server.upgrade.UpgradeCatalog270.executeDDLUpdates(UpgradeCatalog270.java:319)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:970)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:207)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:450)
2018-12-28 13:59:07,093 ERROR [main] SchemaUpgradeHelper:475 - Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException: Unknown table 'ambari_configuration' in information_schema
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:210)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:450)
Caused by: java.sql.SQLSyntaxErrorException: Unknown table 'ambari_configuration' in information_schema
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)

Root Cause: If you hit the above exception while upgrading to Ambari 2.7.x, the most likely cause is an outdated mysql-connector.jar. Upgrade to a supported MySQL Connector/J version; MySQL 5.7 is currently the supported version for Ambari, so download a connector that supports it. Download MySQL Connector/J 5.1.47 from https://dev.mysql.com/downloads/connector/j/5.1.html. Replace the mysql-connector jar by following this doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari/content/amb_using_hive_with_mysql.html and then retry the ambari-server upgrade. Please contact Hortonworks support if it is a production cluster and you cannot perform these steps on your own.
12-24-2018
02:04 PM
1 Kudo
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks support if it is a production cluster.

Problem Description: I am currently using Ambari 2.6.2. I am trying to update some configs via configs.py, my Python version is 2.7.9, and configs.py fails with the error below:

[root@asnaik1 certs]# /var/lib/ambari-server/resources/scripts/configs.py --port=8443 --action=set --host=asnaik1.openstacklocal --cluster=asnaik --config-type=kafka-env --user=admin --password=admin --key=kafka_log_dir --value=/tmp --protocol=https
2018-12-05 10:24:57,615 INFO ### Performing "set":
2018-12-05 10:24:57,615 INFO ### new property - "kafka_log_dir":"/tmp"
Traceback (most recent call last):
File "/var/lib/ambari-server/resources/scripts/configs.py", line 364, in <module>
sys.exit(main())
File "/var/lib/ambari-server/resources/scripts/configs.py", line 343, in main
return set_properties(cluster, config_type, action_args, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 237, in set_properties
update_config(cluster, config_type, updater, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 131, in update_config
properties, attributes = config_updater(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 136, in update
properties, attributes = get_current_config(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 123, in get_current_config
config_tag = get_config_tag(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 94, in get_config_tag
response = accessor(DESIRED_CONFIGS_URL.format(cluster))
File "/var/lib/ambari-server/resources/scripts/configs.py", line 89, in do_request
raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>

Root Cause: Starting from Python 2.7.9, Python validates SSL certificates by default, and the request fails if no valid SSL certificate is configured. In Python versions below 2.7.9 this validation is disabled by default. More details can be found in https://issues.apache.org/jira/browse/AMBARI-23893. That JIRA is fixed in Ambari 2.7, so all Ambari versions below 2.7 running on Python 2.7.9+ hit this issue.

Workaround: The fix for AMBARI-23893 is https://github.com/apache/ambari/pull/1314/files. We can take a backup of /var/lib/ambari-server/resources/scripts/configs.py and use the fixed configs.py instead.

Steps:
1) Navigate to /var/lib/ambari-server/resources/scripts/
cd /var/lib/ambari-server/resources/scripts/
2) Take a backup of configs.py
mv configs.py configs.py_Backup
3) wget the raw GitHub content with the fix:
wget https://raw.githubusercontent.com/dlysnichenko/ambari/75e0c4a6e5f2c30483bf2f783c1af0c38f3b2623/ambari-server/src/main/resources/scripts/configs.py
4) Give the necessary permissions
chmod 750 configs.py
5) Retry the operation with the --unsafe option
[root@asnaik1 certs]# /var/lib/ambari-server/resources/scripts/configs.py --port=8443 --action=set --host=asnaik1.openstacklocal --cluster=asnaik --config-type=kafka-env --user=admin --password=admin --key=kafka_log_dir --value=/tmp --protocol=https --unsafe

NOTE: Remember to add the --unsafe option, as that is the change associated with the Apache JIRA.
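What the --unsafe flag amounts to can be shown with Python's own ssl module: since PEP 476 (Python 2.7.9), urllib verifies HTTPS certificates by default, and the patched configs.py opts out by building an "unverified" SSL context. A minimal sketch:

```python
# Minimal sketch of the behavior change behind this error: since Python 2.7.9
# (PEP 476), the default SSL context verifies certificates; the "unverified"
# context restores the old, non-validating behavior (what --unsafe requests).
import ssl

# Default context: certificate validation enabled.
default_ctx = ssl.create_default_context()

# Unverified context: validation disabled, as the patched configs.py uses.
unsafe_ctx = ssl._create_unverified_context()

print(default_ctx.verify_mode == ssl.CERT_REQUIRED)  # True: strict by default
print(unsafe_ctx.verify_mode == ssl.CERT_NONE)       # True: validation off
```

Disabling verification is exactly as unsafe as the flag name suggests; the proper long-term fix is to configure a valid certificate for Ambari's SSL endpoint.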
01-10-2019
02:19 PM
Thanks Akhil! If your Ambari is configured with HTTPS:
/var/lib/ambari-server/resources/scripts/configs.py -l asn1.openstacklocal -t 8443 -s https -u admin -p admin -a set -n asnaik -c hive-env -k beeline_jdbc_url_default -v container
12-20-2018
06:01 AM
2 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks support if it is a production cluster.

Problem Statement: I have installed HDF 3.1 and Ambari 2.6.2.2, and I am upgrading Ambari to 2.7.0.0 in order to upgrade HDF to 3.2+ versions; the upgrade fails with the exception below:
INFO: about to run command: /usr/java/jdk1.8.0_162/bin/java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/java/latest/mysql-connector-java.jar:/usr/share/java/mysql-connector-java.jar' org.apache.ambari.server.upgrade.SchemaUpgradeHelper > /var/log/ambari-server/ambari-server.out 2>&1
INFO:
process_pid=16599
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 1060, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 1030, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 980, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 79, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 262, in upgrade
retcode = run_schema_upgrade(args)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 162, in run_schema_upgrade
upgrade_response = json.loads(stdout)
File "/usr/lib/ambari-server/lib/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
And inspecting the ambari-server log I found this exception :
2018-09-04 06:34:55,967 ERROR [main] SchemaUpgradeHelper:238 - Upgrade failed. java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=c174, serviceName=AMBARI_INFRA, componentName=INFRA_SOLR_CLIENT, stackInfo=HDF-3.1 at org.apache.ambari.server.state.ServiceComponentImpl.updateComponentInfo

PS: here c174 is my cluster name.

Root Cause: Starting from Ambari 2.7.0, the order of the Ambari upgrade has changed. First we need to run the upgrade-mpack command, and only then run ambari-server upgrade. The order of execution is:
ambari-server upgrade-mpack \
--mpack=http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/<version>/tars/hdf_ambari_mp/hdf-ambari-mpack-<version>-<build-number>.tar.gz \
--verbose
ambari-server upgrade
Please refer to the documentation before upgrading:
HDF 3.3: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.3.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
HDF 3.2: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
12-19-2018
06:08 PM
Consider a cluster that has SSO configured, whose SSO certificate is longer than 2048 characters, being upgraded to Ambari 2.7.0 with the command: ambari-server upgrade
There is a high chance your upgrade will get stuck at the schema-upgrade phase, and in the ambari-server log you will find the exception below:
Internal Exception: java.sql.BatchUpdateException: Batch entry 2 INSERT INTO ambari_configuration (property_name, category_name, property_value) VALUES ('ambari.sso.provider.certificate','sso-configuration','<LONG CERTIFICATE VALUE>') was aborted: ERROR: value too long for type character varying(2048) Call getNextException to see other errors in the batch.
Root Cause: While upgrading to Ambari 2.7.x+ versions, Ambari moves sensitive data from ambari.properties to the ambari_configuration database table. If any of this sensitive data is longer than 2048 characters, the upgrade fails. Certificates are usually not longer than 2048 characters, but when one is, you hit the above error.
This issue is fixed in Ambari-2.8 as part of : https://issues.apache.org/jira/browse/AMBARI-24992
The workaround is:
1. Edit the ambari.properties file and remove the entry for authentication.jwt.publicKey.
2. Perform the Ambari upgrade.
3. Manually alter the database:
ALTER TABLE ambari_configuration ALTER COLUMN property_value TYPE VARCHAR(4000);
NOTE: This syntax is specific to Postgres
4. Manually insert the relevant PEM file's contents into the database
INSERT INTO ambari_configuration(category_name, property_name, property_value)
VALUES ('sso-configuration', 'ambari.sso.provider.certificate', '<CONTENT OF THE PEM FILE>');
5. Restart ambari database
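Before upgrading, you can check in advance whether this workaround will be needed at all. A sketch of such a pre-check; the 2048 limit comes from the original VARCHAR(2048) column, and the certificate values below are purely illustrative:

```python
# Pre-upgrade check (a sketch): would the SSO certificate value exceed the
# 2048-character limit of the pre-AMBARI-24992 ambari_configuration column?

COLUMN_LIMIT = 2048  # VARCHAR(2048) in the original schema

def needs_wider_column(pem_text, limit=COLUMN_LIMIT):
    """True if the PEM content would not fit in the original column."""
    return len(pem_text) > limit

# Illustrative values only; in practice, read your actual PEM file contents.
short_cert = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
long_cert = "X" * 3000

print(needs_wider_column(short_cert))  # False
print(needs_wider_column(long_cert))   # True
```

If the check returns True, plan for the ALTER TABLE and manual INSERT above as part of the upgrade window.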
12-19-2018
05:52 PM
2 Kudos
Disclaimer:This article is based on my personal experience and knowledge.Don't take it as a standard guidelines, understand the concept and modify it for your environmental best practices and use case. Always contact Hortonworks support if its production cluster
Issue Description: Ambari shows the wrong NIFI version after upgrading to HDF 3.3. As per the release notes, the NIFI version in HDF 3.3 is 1.8, but Ambari shows it as 1.7 on the Stacks and Versions page.
Root Cause: This is a known bug in hdf-ambari-mpack-3.3.0.0-165 (the mpack used by Ambari to manage HDF 3.3). The issue is fixed in the HDF 3.3.1 mpack.
Solution: We can ignore the version shown in Stacks and Versions, as only the value Ambari displays is wrong. We can verify the actual NIFI version in the NIFI UI or with the hdf-select command on a host where NIFI is installed.
Workaround:
1. Navigate to /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.3.0.0-165/hdp-addon-services/HDF/3.3/NIFI/1.7.0
2. Edit the file metainfo.xml and change the line:
<version>1.7.0</version>
to
<version>1.8.0</version>
3. Restart the Ambari server.
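The metainfo.xml edit above can also be scripted. A sketch using Python's standard ElementTree; the XML here is a minimal stand-in (the real mpack metainfo.xml contains many more elements, but only the <version> tag needs to change):

```python
# Sketch: patch the <version> element of a metainfo.xml-style file.
# The XML below is a hypothetical, trimmed-down stand-in for the real file.
import xml.etree.ElementTree as ET

sample = """<metainfo>
  <services>
    <service>
      <name>NIFI</name>
      <version>1.7.0</version>
    </service>
  </services>
</metainfo>"""

root = ET.fromstring(sample)
for version in root.iter("version"):
    if version.text == "1.7.0":
        version.text = "1.8.0"

patched = ET.tostring(root, encoding="unicode")
print("<version>1.8.0</version>" in patched)  # True
```

Against the real file you would use ET.parse(path) and tree.write(path) instead of the in-memory string, after taking a backup of the original metainfo.xml.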
08-14-2019
07:24 AM
Well written article! For those who are using an Oracle database: we need to commit the transaction after making these changes.
09-21-2018
09:04 AM
1 Kudo
There can be a situation where you are adding a host in Ambari and the Add Host Wizard gets stuck in the UI with the message: "Please wait while the hosts are being checked for potential problems". It is hard to proceed, as the Next button is disabled, and you might need to wait indefinitely for the host to respond. To analyze what happened, we would need to look at the ambari-agent host and ambari-server logs. Since this host check usually takes a long time, we can skip it using Ambari's experimental wizard settings.

Disabling HostCheck on the Add Host Wizard:
1) Navigate in another tab to the URL: http://<AMBARI-SERVER>:8080/#/experimental
2) Tick the 'disableHostCheckOnAddHostWizard' checkbox and save it.
3) Close the Add Host Wizard using the Close button in the UI and retry the operation.

Note: This article is not applicable to Ambari 2.7.0 and higher versions.