Member since: 07-09-2019
Posts: 422
Kudos Received: 97
Solutions: 58

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 434 | 07-06-2025 05:24 AM |
 | 445 | 05-28-2025 10:35 AM |
 | 2160 | 08-26-2024 08:17 AM |
 | 2763 | 08-20-2024 08:17 PM |
 | 1135 | 07-08-2024 04:45 AM |
08-31-2021
10:55 PM
1 Kudo
@KPG1 Ranger audits are stored in both HDFS and Solr: HDFS is used for long-term storage, while Solr is used for short-term storage. With Solr the audit data is indexed, so you can view it quickly from the Ranger UI. Deleting older Ranger audit data from HDFS will not cause any issues for the service.
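If you do decide to prune the HDFS copy, here is a minimal sketch of the cleanup, assuming the default /ranger/audit destination directory (adjust the path if your cluster writes audits elsewhere):
# List the per-service audit directories to see what is there
hdfs dfs -ls /ranger/audit/hdfs
# Remove a date directory that is older than your retention period
hdfs dfs -rm -r -skipTrash /ranger/audit/hdfs/20200101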
08-30-2021
08:59 AM
@ajck, have you found a solution for your issue? If you have, can you please post the appropriate solution here? If you are still experiencing the issue, can you provide the information @Scharan has requested?
08-17-2021
05:16 AM
Remove the SmartSense package from the server and retry:
yum remove smartsense-hst
rm -rf /var/log/smartsense/
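Before retrying, you can quickly confirm nothing was left behind (hst is the usual SmartSense process name; adjust if yours differs):
# Verify the package and its processes are really gone
rpm -qa | grep -i smartsense
ps -ef | grep -i hst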
08-13-2021
09:44 PM
@Nil_kharat The ticket lifetime is set in the Kerberos configuration file, krb5.conf, in MIT Kerberos. You can check a ticket's lifetime by running klist after doing a kinit. You can also specify the ticket lifetime explicitly with the -l option, as shown below:
# kinit -l 30m -kt <Keytab> <principal>
Example:
kinit -l 30m -kt sai.keytab sai@SUPPORTLAB.CLOUDERA.COM
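To verify that the shorter lifetime actually took effect, combine the two commands (same keytab and principal as above; the exact klist output format varies by Kerberos version):
# Request a 30-minute ticket and check its Expires column
kinit -l 30m -kt sai.keytab sai@SUPPORTLAB.CLOUDERA.COM
klist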
07-27-2021
10:36 PM
@Scharan The issue has been resolved. Summarising the resolution below so that it helps others. As suspected, the HDP repository was not completely configured, so we deleted all the HDP repos from the servers and created a new HDP repo on the ambari-server host with the respective URLs, along with the username and password for HDP and HDP-UTILS, followed by yum clean all and yum repolist. We then proceeded with the Ambari server installation as described in the official documentation. We faced issues while installing Hive, especially with the mysql-connector jar, whose file size was around 24 MB; thanks for your help in providing the correct jar for the installation. The official documentation is very helpful for beginners like us. Now all the services are up and running successfully. Thank you again to you and your team for your help in resolving this issue.
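For anyone hitting the same problem, a rough sketch of what such a repo file looks like (the file name, repo IDs, versions, and the example.com URLs and credentials below are placeholders, not the values actually used in this cluster):
# /etc/yum.repos.d/hdp.repo -- placeholder URLs and credentials
[HDP]
name=HDP
baseurl=https://username:password@archive.example.com/HDP/centos7/3.x/updates/<version>/
enabled=1
gpgcheck=0

[HDP-UTILS]
name=HDP-UTILS
baseurl=https://username:password@archive.example.com/HDP-UTILS/centos7/
enabled=1
gpgcheck=0

# Refresh the yum metadata afterwards
yum clean all
yum repolist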
07-26-2021
06:07 AM
@enirys Can you remove the problematic Kerberos principal from FreeIPA and then try to regenerate the Kerberos keytabs?
ipa-rmkeytab [ -p principal-name ] [ -k keytab-file ] [ -r realm ] [ -d ]
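For example (the principal, keytab path and realm below are placeholders; substitute the problematic ones from your environment):
# Remove only the affected principal's keys from the keytab
ipa-rmkeytab -p HTTP/host1.example.com@EXAMPLE.COM -k /etc/security/keytabs/spnego.service.keytab
# Or purge every key for a realm from that keytab
ipa-rmkeytab -r EXAMPLE.COM -k /etc/security/keytabs/spnego.service.keytab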
07-06-2021
11:10 AM
I was seeing the same issue; thanks to @jakezhang for posting. Changing the Ranger KMS kerberos_princ_name from rangerkms to keyadmin allowed me to get this working. Thanks for the clues in the log file, and to @Scharan.
06-28-2021
11:31 AM
@Scharan Thanks for the response. So I added this in the metainfo.xml:

<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      ...
      <quickLinksConfigurations-dir>quicklinks</quickLinksConfigurations-dir>
      <quickLinksConfigurations>
        <quickLinksConfiguration>
          <fileName>quicklinks.json</fileName>
          <default>true</default>
        </quickLinksConfiguration>
      </quickLinksConfigurations>
    </service>
  </services>
</metainfo>

And this is the quicklinks.json file:

{
  "name": "default",
  "description": "default quick links configuration",
  "configuration": {
    "protocol": {
      "type": "https",
      "checks": [
        {
          "property": "dfs.http.policy",
          "desired": "HTTPS_ONLY",
          "site": "hdfs-site"
        }
      ]
    },
    "links": [
      {
        "name": "namenode_ui",
        "label": "NameNode UI",
        "url": "%@://%@:%@",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_logs",
        "label": "NameNode Logs",
        "url": "%@://%@:%@/logs",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_jmx",
        "label": "NameNode JMX",
        "url": "%@://%@:%@/jmx",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "Thread Stacks",
        "label": "Thread Stacks",
        "url": "%@://%@:%@/stacks",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      }
    ]
  }
}

I have restarted ambari-server, but I still do not see the quick links in the Ambari UI. Any help is much appreciated. Thanks,
06-27-2021
11:46 PM
@Scharan Below is the path I am using in hive-env.sh:
export HIVE_AUX_JARS_PATH=/mnt/apache-atlas-2.1.0/hook/hive
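As a quick sanity check (assuming that install path), you can confirm the hook jars are present and readable by the HiveServer2 user:
# The Atlas Hive hook directory should list the plugin classloader and bridge jars
ls -l /mnt/apache-atlas-2.1.0/hook/hive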
06-18-2021
10:10 PM
@agarwadekarm NiFi 1.12.1 is shipped with HDF 3.5.2, and HDF 3.5.2 is supported on RHEL 7.9. You can check the support matrix for more info: https://supportmatrix.cloudera.com/#Hortonworks