Member since
03-17-2016
132
Posts
106
Kudos Received
13
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2494 | 03-28-2019 11:16 AM
 | 3118 | 03-28-2019 09:19 AM
 | 2567 | 02-02-2017 07:52 AM
 | 2691 | 10-03-2016 08:08 PM
 | 1142 | 09-13-2016 08:00 PM
11-09-2022
12:06 AM
The default page size seems to be 200 on most APIs. Use the query parameters pageSize and startIndex to page through the results.
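The paging pattern above can be sketched as follows. Only the parameter names pageSize and startIndex come from the post; fetch_page and fake_fetch are hypothetical stand-ins for the real HTTP call (e.g. a requests.get with those query parameters), used here so the loop logic is self-contained:

```python
def fetch_all(fetch_page, page_size=200):
    """Collect every item by advancing startIndex until a short page is returned."""
    items = []
    start = 0
    while True:
        # fetch_page stands in for the real API call with query
        # parameters pageSize and startIndex
        page = fetch_page(pageSize=page_size, startIndex=start)
        items.extend(page)
        if len(page) < page_size:  # last (possibly empty) page
            break
        start += page_size
    return items

# Demo against a fake backend holding 450 records:
data = list(range(450))

def fake_fetch(pageSize, startIndex):
    return data[startIndex:startIndex + pageSize]

print(len(fetch_all(fake_fetch)))  # 450
```

Stopping on a short page avoids an extra empty request when the total is an exact multiple of the page size only if the API also reports totals; with no total count available, this short-page check is the simplest safe termination.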
06-03-2021
05:46 AM
Hi, please check http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.1/index.html. Once you enable Ranger authorization for Apache NiFi, you will need to add each user and the node identities in Ranger and apply policies. https://community.hortonworks.com/articles/60001/hdf-20-integrating-secured-nifi-with-secured-range... You can also check: https://community.hortonworks.com/articles/57980/hdf-20-apache-nifi-integration-with-apache-ambarir... http://bryanbende.com/development/2016/08/22/apache-nifi-1.0.0-using-the-apache-ranger-authorizer
01-06-2020
02:41 AM
@Shelton Any update on this? It looks like it is failing to load a native Java library: java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-4657625312215122883.8 (Permission denied)] Can we install it externally?
12-19-2019
02:12 PM
@saivenkatg55 This "Exiting with status 1: java.io.IOException: Problem starting http server" error should be linked to your other question, which I have just responded to: https://community.cloudera.com/t5/Support-Questions/Unable-to-start-the-node-manager/td-p/286013 If that one is resolved, the java.io.IOException shouldn't occur. HTH
03-29-2019
06:18 AM
Follow this https://github.com/ehiggs/spark-terasort
03-29-2019
04:35 AM
In /etc/yum.repos.d, remove all .repo files pointing to the Internet and copy only the .repo files from other servers that are already using your local repo. For HDP nodes, you initially need only two .repo files: one for the OS, and ambari.repo. When Ambari adds a new node to the cluster, it will copy HDP.repo and HDP-UTILS.repo there. Also, have you set your repository URLs in Ambari -> Admin -> Stack and Versions -> Versions -> Manage Versions -> [click on your current version]?
04-02-2019
09:06 AM
You need to follow these steps, as they apply to the Spark Thrift server.

Configuring Cluster Dynamic Resource Allocation Manually

To configure a cluster to run Spark jobs with dynamic resource allocation, complete the following steps:

1. Add the following properties to the spark-defaults.conf file associated with your Spark installation (typically in the $SPARK_HOME/conf directory):
   - Set spark.dynamicAllocation.enabled to true.
   - Set spark.shuffle.service.enabled to true.
   - (Optional) To specify a starting point and range for the number of executors, use the following properties: spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors, spark.dynamicAllocation.maxExecutors. Note that initialExecutors must be greater than or equal to minExecutors, and less than or equal to maxExecutors. For a description of each property, see Dynamic Resource Allocation Properties.
2. Start the shuffle service on each worker node in the cluster:
   - In the yarn-site.xml file on each node, add spark_shuffle to yarn.nodemanager.aux-services, and then set yarn.nodemanager.aux-services.spark_shuffle.class to org.apache.spark.network.yarn.YarnShuffleService.
   - Review and, if necessary, edit spark.shuffle.service.* configuration settings. For more information, see the Apache Spark Shuffle Behavior documentation.
3. Restart all NodeManagers in your cluster.
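As a sketch, the config fragments for the steps above would look like this. The property names are the standard Spark/YARN ones named in the text; the executor-range values are placeholders, and the mapreduce_shuffle entry is an assumption that a typical existing aux-service should be kept alongside spark_shuffle:

```
# spark-defaults.conf
spark.dynamicAllocation.enabled          true
spark.shuffle.service.enabled            true
# optional executor range (min <= initial <= max must hold)
spark.dynamicAllocation.minExecutors     1
spark.dynamicAllocation.initialExecutors 2
spark.dynamicAllocation.maxExecutors     10
```

```xml
<!-- yarn-site.xml, on every NodeManager -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```

After editing yarn-site.xml, the NodeManagers must be restarted for the shuffle service to be picked up.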
06-30-2017
04:23 PM
Thanks, it helped to resolve the issue. I was missing the username principal.
01-20-2017
02:51 PM
1 Kudo
@chennuri gouri shankar It looks like you have the wrong version of tez.tar.gz on HDFS. Can you please verify that? If possible, try replacing it with the latest version of tez.tar.gz. This kind of issue sometimes happens after an upgrade when an older Tez library is left on HDFS.
12-26-2016
10:52 AM
@chennuri gouri shankar This is a known issue with this Ambari version when MySQL is used as the database. Manually create the required table by using the following CREATE TABLE statement:
CREATE TABLE DS_JOBIMPL_<REPLACE THIS WITH THE NUMBER IN THE ACTUAL TABLE NAME> (
ds_id character varying(255) NOT NULL,
ds_applicationid character varying(2800),
ds_conffile character varying(2800),
ds_dagid character varying(2800),
ds_dagname character varying(2800),
ds_database character varying(2800),
ds_datesubmitted bigint,
ds_duration bigint,
ds_forcedcontent character varying(2800),
ds_globalsettings character varying(2800),
ds_logfile character varying(2800),
ds_owner character varying(2800),
ds_queryfile character varying(2800),
ds_queryid character varying(2800),
ds_referrer character varying(2800),
ds_sessiontag character varying(2800),
ds_sqlstate character varying(2800),
ds_status character varying(2800),
ds_statusdir character varying(2800),
ds_statusmessage character varying(2800),
ds_title character varying(2800)
);
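Before creating the table, it may help to confirm which DS_JOBIMPL_* tables already exist so you can pick up the numeric suffix used by your installation (the suffix itself is left elided above). This is a sketch; the LIKE pattern assumes the naming shown in the statement:

```sql
-- Run against the Ambari database in MySQL to list any
-- existing DS_JOBIMPL_* tables and their exact suffix.
SHOW TABLES LIKE 'DS_JOBIMPL_%';
```

If the table is entirely absent, the suffix usually appears in the Ambari view's error message, and that is the name the CREATE TABLE statement should use.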