Member since: 05-28-2019
Posts: 16
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1772 | 09-23-2017 09:23 PM
| 3863 | 06-28-2017 08:31 PM
12-19-2018
09:35 AM
@venu devu Execute the below commands on the Ambari Metrics Monitor node:
cd /usr/lib/python2.6/site-packages/resource_monitoring/psutil
make install
Then restart the Metrics Monitor:
ambari-metrics-monitor restart
10-16-2018
12:10 PM
Dr. Elephant
Dr. Elephant is a performance monitoring and tuning tool for Hadoop and Spark. It automatically gathers a job's metrics, analyzes them, and presents them in a simple way for easy consumption. Its goal is to improve developer productivity and increase cluster efficiency by making it easier to tune jobs. It analyzes Hadoop and Spark jobs using a set of pluggable, configurable, rule-based heuristics that provide insights into how a job performed, and then uses the results to make suggestions about how to tune the job so that it performs more efficiently. It also computes a number of metrics for a job, which provide valuable information about the job's performance on the cluster.
Build Steps
cd ~;
sudo yum update -y;
sudo yum upgrade -y;
sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
sudo yum install -y nodejs wget unzip git zip gcc-c++ make java-1.8.0-openjdk-devel;
wget https://github.com/linkedin/dr-elephant/archive/v2.1.7.tar.gz;
wget https://downloads.typesafe.com/typesafe-activator/1.3.12/typesafe-activator-1.3.12.zip;
tar -xvzf v2.1.7.tar.gz;
unzip typesafe-activator-1.3.12.zip
export ACTIVATOR_HOME=~/activator-dist-1.3.12/;
export PATH=$ACTIVATOR_HOME/bin:$PATH;
sudo npm config set strict-ssl false;
npm config set strict-ssl false;
sudo npm install ember-tooltips;
sudo npm install -g bower;
# Run using non root user
cd dr-elephant-2.1.7/;
cd web; bower install;
cd ..; ./compile.sh ;
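Once compile.sh succeeds, the build output is normally packaged under dist/. As a hedged sketch of how you would then launch Dr. Elephant (paths assume the default v2.1.7 dist layout; adjust names to match your actual build output):

```shell
# Hypothetical next steps after compile.sh finishes (illustrative paths).
cd dist
unzip dr-elephant-*.zip
cd dr-elephant-*/
./bin/start.sh    # stop later with ./bin/stop.sh
```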
02-21-2018
10:34 AM
Installing Hue can be tricky, as the latest version (Hue 4.1.0) is available only as source code, and compiling it requires a lot of dependencies and development libraries for which there is very little information available on the Internet. The best reference document I could find was the Hue 4.1.0 Installation Guide, though a few packages are missing from it. After a series of trials and errors, I came up with the below steps to compile and install Hue successfully.
Install JDK, Hue dependencies, and development libraries:
$ sudo yum -y update
$ sudo yum install -y git gcc libxml2-devel libxslt-devel cyrus-sasl-devel cyrus-sasl-gssapi mysql-devel python-devel python-setuptools sqlite-devel ant gcc-c++ *sasl* krb5-devel libtidy openldap-devel wget libffi-devel gmp-devel java-1.8.0-openjdk
Install Apache Maven from the public Fedora repository:
$ sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
$ sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
$ sudo yum install -y apache-maven
Clone the Hue Git repository:
$ git clone https://github.com/cloudera/hue.git
Build Hue from source:
# Configure $PREFIX with the path where you want to install Hue
$ cd hue
$ PREFIX=/usr/share make install
The above commands were executed and tested on:
OS : CentOS release 6.9 (Final)
Hue : Version 4.1
Date : 21/Feb/2018
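As a hypothetical smoke test after the install (with PREFIX=/usr/share, Hue's virtualenv should land under /usr/share/hue; the bind address and port 8000 shown here are only illustrative defaults):

```shell
# Start Hue's development server to verify the build (illustrative paths).
/usr/share/hue/build/env/bin/hue runserver 0.0.0.0:8000
```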
09-23-2017
09:23 PM
Ambari usually takes care of configuring the cluster, but sometimes manual intervention is needed to tune the cluster for performance as per the use case. Regarding your question about a script: you can use the YARN utility script for determining HDP memory configuration settings - https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_command-line-installation/content/determine-hdp-memory-config.html Hortonworks support customers can use the SmartSense(R) tool, which gives configuration tuning recommendations - http://hortonworks.com/blog/introducing-hortonworks-smartsense/
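As a rough sketch of the heuristic that utility script applies (per my reading of the HDP 2.6 docs: containers = min(2*cores, ceil(1.8*disks), available RAM / minimum container size); the reserved-memory and minimum-container constants below are the documented values for a 64 GB node, but treat all numbers as illustrative):

```shell
# Illustrative node profile: 16 cores, 64 GB RAM, 4 data disks.
CORES=16; MEM_GB=64; DISKS=4
MIN_CONTAINER_MB=2048              # documented minimum container size for >24 GB RAM
RESERVED_GB=8                      # OS/system reservation from the docs' table
AVAIL_MB=$(( (MEM_GB - RESERVED_GB) * 1024 ))
A=$(( 2 * CORES ))                 # 2 * cores
B=$(( (18 * DISKS + 9) / 10 ))     # ceil(1.8 * disks) in integer math
C=$(( AVAIL_MB / MIN_CONTAINER_MB ))
CONTAINERS=$A
if [ "$B" -lt "$CONTAINERS" ]; then CONTAINERS=$B; fi
if [ "$C" -lt "$CONTAINERS" ]; then CONTAINERS=$C; fi
RAM_PER_CONTAINER=$(( AVAIL_MB / CONTAINERS ))
echo "yarn.nodemanager.resource.memory-mb = $(( CONTAINERS * RAM_PER_CONTAINER ))"
echo "yarn.scheduler.minimum-allocation-mb = $RAM_PER_CONTAINER"
```

For real clusters, prefer running the documented script itself, since it also covers the MapReduce settings.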
09-23-2017
09:07 PM
@Sarnath K, do you mean authentication or authorization for the Spark Thrift Server? For authentication you can enable ACLs - http://spark.apache.org/docs/latest/security.html Authorization for Spark can be done using HDFS ACLs, which can also be managed using Ranger. LLAP-enabled Spark provides column-level security, in which reads from HDFS go directly through LLAP. You can refer to the following KB article - SPARKSQL, RANGER, AND LLAP VIA SPARK THRIFT SERVER FOR BI SCENARIOS TO PROVIDE ROW, COLUMN LEVEL SECURITY, AND MASKING
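To make the ACL side concrete, a hypothetical spark-defaults.conf excerpt (the property names come from the Spark security docs; the user lists are made up for illustration):

```
# Enable Spark's own ACLs and name who may view/modify/administer jobs.
spark.acls.enable    true
spark.admin.acls     spark,opsadmin
spark.ui.view.acls   analyst1,analyst2
spark.modify.acls    etl_user
```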
09-15-2017
01:16 PM
@Junichi Oda You cannot manage access_log using log4j, as the AccessLogValve configuration is hardcoded in the code.
The following logs can be managed using log4j by leveraging maxBackupIndex:
UserSync
TagSync
XA Portal
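For example, a hypothetical excerpt from UserSync's log4j.properties (the appender name "logFile" is illustrative; MaxFileSize and MaxBackupIndex are standard log4j RollingFileAppender options):

```
log4j.appender.logFile=org.apache.log4j.RollingFileAppender
log4j.appender.logFile.File=/var/log/ranger/usersync/usersync.log
log4j.appender.logFile.MaxFileSize=50MB
log4j.appender.logFile.MaxBackupIndex=10
```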
The below logs cannot be managed using log4j, so you will have to leverage logrotate [a standard tool for log rotation in Linux] - see Manage Ranger Admin access_log log file growth:
Access Log
Or else, as mentioned by @Neeraj Sabharwal, you can use a cron script with the find command:
find /var/log/ranger -mtime +30 | xargs --no-run-if-empty rm
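As a hedged illustration of what that one-liner does (run here against a throwaway directory rather than /var/log/ranger; the file names are made up):

```shell
# Create a scratch directory with one "old" and one "fresh" log file.
tmp=$(mktemp -d)
touch -d "40 days ago" "$tmp/ranger_admin_access_log.2017-01-01.log"
touch "$tmp/ranger_admin_access_log.today.log"
# Same pattern as the cron one-liner, scoped to the scratch directory:
find "$tmp" -type f -mtime +30 | xargs --no-run-if-empty rm
ls "$tmp"    # only the fresh file remains; files older than 30 days are gone
```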
06-28-2017
08:31 PM
2 Kudos
This is due to a Snappy version mismatch between Hadoop and Pig. You can resolve this by executing the below command before loading the Grunt shell:
export HADOOP_USER_CLASSPATH_FIRST=true
To avoid executing the above command every time before loading a Pig Grunt shell, you can streamline the process by adding the above line of configuration to pig-env.sh and deploying the configuration file to the nodes.