Member since
09-02-2016
523
Posts
89
Kudos Received
42
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2035 | 08-28-2018 02:00 AM |
| | 1713 | 07-31-2018 06:55 AM |
| | 4511 | 07-26-2018 03:02 AM |
| | 1951 | 07-19-2018 02:30 AM |
| | 5188 | 05-21-2018 03:42 AM |
03-29-2017
08:51 PM
@ujj CM -> Administration -> Settings -> Custom Service Descriptor (CSD) Path, and CM -> Hosts -> Parcels. Review the Custom Service Descriptor (CSD) and Parcel options to understand what is available, and see the link below; it will help you add the service: https://www.cloudera.com/documentation/enterprise/5-5-x/topics/cm_mc_addon_services.html
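For reference, a minimal sketch of what dropping a CSD in place looks like from the shell, assuming the default CSD path of /opt/cloudera/csd and a hypothetical jar name (check your actual CSD Path setting in CM first):

```shell
# EXAMPLE_SERVICE-1.0.jar is a placeholder; use the real CSD jar for your service.
sudo cp EXAMPLE_SERVICE-1.0.jar /opt/cloudera/csd/
sudo chown cloudera-scm:cloudera-scm /opt/cloudera/csd/EXAMPLE_SERVICE-1.0.jar

# Restart the Cloudera Manager server so it picks up the new descriptor
sudo service cloudera-scm-server restart
```

After the restart, the new service type should appear in the Add a Service wizard.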
03-29-2017
11:45 AM
@hkumar449 Ok, I think I overlooked that. Since you mentioned Scala, I thought you were using Scala from Spark. If you are not using Spark, you can ignore my comment.
03-28-2017
01:37 PM
@hkumar449 This may help you! Make sure hbase-site.xml is available under the Spark configuration directory. So either copy hbase-site.xml to /etc/spark/conf (or) create a softlink, and try again.
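A sketch of the two options, assuming the common default locations (/etc/hbase/conf for HBase and /etc/spark/conf for Spark; adjust the paths for your install):

```shell
# Option 1: copy hbase-site.xml into Spark's conf directory
sudo cp /etc/hbase/conf/hbase-site.xml /etc/spark/conf/

# Option 2: softlink it instead, so later HBase config changes
# are picked up by Spark automatically
sudo ln -s /etc/hbase/conf/hbase-site.xml /etc/spark/conf/hbase-site.xml
```

Note that on CM-managed nodes /etc/spark/conf is often itself a symlink managed by alternatives, so check where it actually points (`ls -l /etc/spark/conf`) before copying.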
03-27-2017
12:30 PM
1 Kudo
@aj You can achieve this by giving a fully qualified path.

## To use an HDFS path
hdfs://<cluster-node>:8020/user/<path>

## To use a local path
file:///home/<path>

Some additional notes: it is not recommended to keep logs in HDFS, for two reasons:
1. HDFS uses a replication factor of 3 by default, so logs consume three times the space.
2. If HDFS goes down, you cannot check the logs.
03-22-2017
01:56 PM
@Shafiullah Yes, Hue has dependencies on HDFS, YARN, Hive and Oozie. So before you remove Hive, you have to remove Hue.
03-22-2017
01:51 PM
@dmishraoc You can get the parameters mentioned in steps 1 and 2 from yarn-site.xml, then follow step 3: go to the path /var/log/hadoop-mapreduce. Note: If you have 10 history files plus one current file, and each file is around 201 MB, you are good. Old logs are being purged automatically and you don't need to purge anything.
03-21-2017
12:31 PM
@dmishraoc
Step 1: CM -> YARN -> Configuration -> search for the two parameters below:
- JobHistory Server Maximum Log File Backups: <default value: 10>
- JobHistory Server Max Log Size: <default value: 200 MiB>

Step 2: CM -> YARN -> Instances -> get the hostname of the JobHistory Server.
Step 3: Log in to that host and go to the path /var/log/hadoop-mapreduce.
Note: If you have 10 history files plus one current file, and each file is around 201 MB, you are good. Old logs are being purged automatically and you don't need to purge anything. Thanks, Kumar
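Step 3 can be checked from the shell; the grep pattern below is illustrative, since the exact log file names depend on the service account and hostname:

```shell
# List the JobHistory Server logs and their sizes
ls -lh /var/log/hadoop-mapreduce/

# Count rotated backups; with the defaults above you should see
# at most 10 rotated files plus the current one
ls /var/log/hadoop-mapreduce/ | grep -c 'HISTORYSERVER.*\.log'
```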
03-20-2017
09:05 PM
Is Zeppelin specific to any particular vendor like Hortonworks, or can we configure Zeppelin with any vendor, e.g. Hortonworks, Cloudera, MapR, etc.?
Labels:
- Apache Spark
- Apache Zeppelin
03-20-2017
01:23 PM
@geko Until you are familiar with the available roles/privileges, I would recommend managing Sentry from Hue. It will auto-fill the available options (or) you just choose from the available options. It will make your life easier. Prerequisite: make sure your Linux users/groups exactly match the Hue users/groups.
03-20-2017
10:48 AM
@geko You should also consider the permissions (owner/group) on the folder where the data will be stored.