Member since: 11-16-2017
Posts: 28
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2448 | 01-30-2020 11:15 PM
 | 2820 | 01-28-2020 11:52 PM
 | 2563 | 01-28-2020 03:39 AM
 | 2237 | 02-27-2018 03:02 PM
07-27-2020 12:19 AM
Does this work for Spark 3.0?
04-02-2020 02:12 AM
There are many repos for Ambari, and 2.7.5 is not paid. I prefer to download it directly by following https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+2.7.5, or to install the Ambari 2.7.3 repo for HDP 3.1.4. For Ubuntu: http://www.olric.org/2019/09/install-single-node-hortonworks-data.html or https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/ch04s01s06.html. For CentOS, install this repo: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-installation/content/download_the_ambari_repo_lnx7.html
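For the CentOS route, a minimal sketch of the steps behind the linked doc; the repo URL is the one published for Ambari 2.7.3 at the time and may have moved behind Cloudera's paywall since:

```bash
# CentOS 7: install the Ambari 2.7.3 repo, then the server
# (the URL below may now require Cloudera credentials)
wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.3.0/ambari.repo \
  -O /etc/yum.repos.d/ambari.repo
yum install -y ambari-server
ambari-server setup    # interactive setup: JDK, database
ambari-server start
```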
02-16-2020 05:47 PM
Thanks, I tried this and it worked out fine!
02-07-2020 08:34 AM
You need to add it under Custom hdfs-site: dfs.namenode.heartbeat.recheck-interval (the value is in milliseconds).
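A quick way to check what the NameNode will actually use after the restart, as a sketch:

```bash
# After adding the property in Custom hdfs-site and restarting HDFS, verify it;
# the value is in milliseconds (stock default 300000 = 5 minutes)
hdfs getconf -confKey dfs.namenode.heartbeat.recheck-interval
```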
01-31-2020 08:08 AM
Right now Ranger doesn't provide a Spark plugin. You can manage access using HDFS rwx permissions.
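A minimal sketch of what that looks like on the command line; the warehouse path, owner, and group are hypothetical, and the ACL line assumes dfs.namenode.acls.enabled=true on the cluster:

```bash
# Hypothetical path, user, and group -- adjust to your cluster
hdfs dfs -chown -R hive:analysts /apps/hive/warehouse/sales.db
hdfs dfs -chmod -R 750 /apps/hive/warehouse/sales.db
# Finer-grained than plain rwx: an HDFS ACL (needs dfs.namenode.acls.enabled=true)
hdfs dfs -setfacl -m user:spark_etl:r-x /apps/hive/warehouse/sales.db
```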
01-28-2020 11:52 PM
It depends on what you want to change. If you just want to add additional disks on all nodes, follow this:

The best way is to create partitions like /grid/0/hadoop/hdfs/data through /grid/10/hadoop/hdfs/data and mount them on the newly formatted disks. The mount options below are one recommended set for HDFS data mounts, but you can change them:

/dev/sda1 /grid/0 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0
/dev/sdb1 /grid/1 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0
/dev/sdc1 /grid/2 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0

After that, just add all the partition paths to the HDFS config (dfs.datanode.data.dir), like:

/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data

But don't remove the existing path from the configuration, because you will lose the data blocks stored in /hadoop/hdfs/data. The exact paths don't really matter; just keep each one on its own disk, and don't forget to re-balance between the disks (a sketch follows).
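A sketch of that re-balance step, assuming Hadoop 3.x (HDP 3), where the intra-DataNode disk balancer is available and dfs.disk.balancer.enabled is set to true; dn1.example.com is a hypothetical DataNode hostname:

```bash
# Generate a move plan across the node's disks; the .plan.json path is printed on success
hdfs diskbalancer -plan dn1.example.com
# Execute the plan (substitute the .plan.json path printed by the previous step)
hdfs diskbalancer -execute <plan-file>.plan.json
# Check progress
hdfs diskbalancer -query dn1.example.com
```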
01-28-2020 03:39 AM
1 Kudo
The best way is to join your nodes using the SSSD service; it will solve the user home directory creation problem as well as group mapping.
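A minimal sketch of such a join on CentOS/RHEL 7, assuming an Active Directory domain; EXAMPLE.COM and the admin account are hypothetical placeholders:

```bash
# Install SSSD plus the domain-join tooling and the home-directory helper
yum install -y sssd realmd oddjob oddjob-mkhomedir adcli samba-common-tools krb5-workstation
# Enroll the host; realmd writes /etc/sssd/sssd.conf and enables sssd
realm join --user=admin EXAMPLE.COM
# Create home directories automatically at first login
authconfig --enablemkhomedir --update
# Verify that AD users and their groups now resolve on the node
id someaduser@example.com
```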
11-03-2018 03:56 AM
Following up on this. All services are up and running. Is there another tool I can use besides DBeaver to connect to HiveServer2?
03-09-2018 01:47 PM
1 Kudo
If you have virtualization with a fault-tolerance option and shared storage (like VMware ESXi, etc.), I would recommend installing Ambari Server there.