Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1373 | 09-11-2019 10:19 AM
| 8334 | 11-26-2018 07:04 PM
| 1942 | 11-14-2018 12:10 PM
| 4052 | 11-14-2018 12:09 PM
| 2653 | 11-12-2018 01:19 PM
12-15-2022
05:16 AM
@Bello as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
02-07-2022
04:12 AM
To fetch the policy details in JSON format, use the command below:

curl -v -k -u {username} -H "Content-Type: application/json" -H "Accept: application/json" -X GET https://{Ranger_Host}:6182/service/public/v2/api/service/cm_hive/policy/ | python -m json.tool
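The same call can be made from Python. A minimal sketch, assuming the requests library; the host and credentials are placeholders, and verify=False mirrors curl's -k flag (skips TLS verification):

```python
import json
import requests

# Placeholders for your environment.
RANGER_HOST = "ranger.example.com"
AUTH = ("admin", "password")

# GET the policies of the cm_hive service via Ranger's public v2
# REST API; verify=False mirrors curl's -k (insecure) flag.
resp = requests.get(
    f"https://{RANGER_HOST}:6182/service/public/v2/api/service/cm_hive/policy/",
    auth=AUTH,
    headers={"Accept": "application/json"},
    verify=False,
)
print(json.dumps(resp.json(), indent=2))
```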
02-02-2022
08:15 AM
In HDP 3.0 and later, Spark and Hive use separate catalogs to access SparkSQL or Hive tables. The Spark catalog contains tables created by Spark; the Hive catalog contains tables created by Hive. By default, standard Spark APIs access tables in the Spark catalog. To access tables in the Hive catalog instead, edit the metastore.catalog.default property in hive-site.xml and set its value to 'hive' instead of 'spark'.

Config file path: $SPARK_HOME/conf/hive-site.xml

Before the change:

<property>
  <name>metastore.catalog.default</name>
  <value>spark</value>
</property>

After the change:

<property>
  <name>metastore.catalog.default</name>
  <value>hive</value>
</property>
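As an alternative to editing hive-site.xml globally, the same property can be set per application when building the SparkSession. A minimal PySpark sketch, assuming an HDP 3.x Spark distribution where metastore.catalog.default is honored as a session config:

```python
from pyspark.sql import SparkSession

# Point standard Spark APIs at the Hive catalog instead of the
# default Spark catalog (assumes HDP 3.x, where this property
# is read as a session configuration).
spark = (
    SparkSession.builder
    .appName("hive-catalog-example")
    .config("metastore.catalog.default", "hive")
    .enableHiveSupport()
    .getOrCreate()
)

# Tables created by Hive should now be visible to spark.sql().
spark.sql("SHOW TABLES").show()
```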
05-29-2021
08:53 PM
The second curl command, which is for sessions, may not work because there is no property called className in the sessions REST API.
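For reference, a minimal sketch of the distinction, assuming this refers to Apache Livy, where className belongs to the /batches payload rather than /sessions (the host, jar path, and class name below are hypothetical):

```python
import json
import requests

LIVY = "http://livy-host:8998"  # hypothetical host
HEADERS = {"Content-Type": "application/json"}

# /sessions creates an interactive session; its payload takes
# "kind" (spark, pyspark, sparkr, sql), not "className".
session_payload = {"kind": "spark"}
requests.post(f"{LIVY}/sessions",
              data=json.dumps(session_payload), headers=HEADERS)

# /batches submits a packaged application; this is where
# "file" and "className" are valid fields.
batch_payload = {
    "file": "hdfs:///jars/my-app.jar",  # hypothetical path
    "className": "com.example.MyApp",   # hypothetical class
}
requests.post(f"{LIVY}/batches",
              data=json.dumps(batch_payload), headers=HEADERS)
```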
02-26-2021
02:15 AM
Agreed, but is there a way to avoid this wastage, apart from migrating data to the local filesystem and then back to HDFS? Example: we have a 500 MB file with a 128 MB block size, i.e. 4 blocks on HDFS. Now that we have changed the block size to 256 MB, how would we make the file on HDFS have 2 blocks of 256 MB instead of 4? Please suggest.
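One commonly cited approach (not confirmed in this thread) is to rewrite the file within HDFS with an explicit block size, since existing blocks are never resized in place. A minimal sketch driving the standard hdfs dfs CLI from Python; the paths are hypothetical:

```python
import subprocess

# Hypothetical source and destination paths on HDFS.
src = "/data/input/file.dat"
dst = "/data/input/file_256m.dat"

# Copying with -D dfs.blocksize forces the new copy to be written
# with 256 MB (268435456-byte) blocks; the original keeps its old
# 128 MB layout until it is removed.
subprocess.run(
    ["hdfs", "dfs", "-D", "dfs.blocksize=268435456", "-cp", src, dst],
    check=True,
)
```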
06-15-2020
12:07 AM
@shubh As this is an older post that was marked solved in 2018, you would have a better chance of receiving a resolution by starting a new thread. This will also give you the opportunity to provide details specific to your environment that could help others give a more accurate answer to your question.
01-06-2020
11:07 PM
@Chittu Can you share your code example? There should be an option to specify mode='overwrite' when saving a DataFrame:

myDataFrame.save(path='/output/folder/path', source='parquet', mode='overwrite')

Please revert.
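On newer Spark releases, where DataFrame.save() is deprecated, the equivalent goes through the DataFrameWriter API. A minimal self-contained sketch; the DataFrame contents and output path are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("overwrite-example").getOrCreate()

# Placeholder data standing in for the asker's DataFrame.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

# mode("overwrite") replaces any existing data at the target path
# instead of failing with a "path already exists" error.
df.write.mode("overwrite").parquet("/output/folder/path")
```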
12-09-2019
12:58 PM
@Aaki_08 You may want to try executing the following commands:

1. On the Ambari Server host: ambari-server stop
2. On all hosts: ambari-agent stop
3. On the Ambari Server host: ambari-server uninstall-mpack --mpack-name=hdf-ambari-mpack --verbose

Hope this helps you,
Matt
09-11-2019
10:19 AM
@Seaport Integration with Ranger provides Atlas security and is needed if you want to use tag-based policies, which is one of the most common use cases for Ranger + Atlas, but it is not a requirement. See the following link: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/configuring-atlas/content/additional_requirements_for_atlas_with_ranger_and_kerberos.html This is also covered in our documentation, in the section "Additional Requirements for Atlas with Kerberos without Ranger". Finally, to install Atlas you are only required to have Infra Solr, Kafka, and HBase installed. Regards, Felix