Member since: 04-13-2016
Posts: 422
Kudos Received: 150
Solutions: 55
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| — | 1934 | 05-23-2018 05:29 AM |
| — | 4970 | 05-08-2018 03:06 AM |
| — | 1685 | 02-09-2018 02:22 AM |
| — | 2716 | 01-24-2018 08:37 PM |
| — | 6172 | 01-24-2018 05:43 PM |
11-15-2016 12:24 PM
Thanks @SBandaru. By unchecking "Skip group modifications during install", I was able to install and configure using LDAP.
10-17-2016 11:43 PM
@SBandaru To my knowledge, it's not advisable to change the pre-defined Ambari scripts; such changes could break a future upgrade of the Ambari server.
A better practice is to develop a separate maintenance script that sets the permissions on the Hive warehouse, as sketched below.
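A minimal sketch of such a script, assuming the default HDP warehouse path and a hive:hadoop ownership (both are assumptions, not details from this thread):

```bash
#!/usr/bin/env bash
# Hypothetical maintenance script: reset ownership and permissions on the
# Hive warehouse. Path, owner, group, and mode are assumptions; adjust
# them to your cluster layout and security policy.
WAREHOUSE_DIR="/apps/hive/warehouse"

# Run as a user with HDFS superuser rights, e.g.:
#   sudo -u hdfs ./fix-warehouse-perms.sh
hdfs dfs -chown -R hive:hadoop "${WAREHOUSE_DIR}"
hdfs dfs -chmod -R 771 "${WAREHOUSE_DIR}"
```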
06-15-2017 05:30 PM
I have installed Spark 2.1 manually on HDP 2.3.4, where Spark 1.5 is already installed via HDP. When I try to run jobs in YARN cluster mode, Spark 2.1 resolves to the HDP 2.3.4 Spark libraries, resulting in "bad substitution" errors. Any ideas how you were able to resolve this when using two Spark versions?
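For context, a commonly cited workaround for "bad substitution" errors on HDP (an assumption here, not a fix confirmed in this thread) is to pin the stack version explicitly so ${hdp.version} placeholders in YARN classpaths resolve. The build string and install path below are hypothetical:

```bash
# Find the exact HDP build string installed on the cluster
# (2.3.4.0-3485 below is a made-up example).
hdp-select status hadoop-client

# Pin that version for the manually installed Spark 2.1 client
# (the /usr/local/spark-2.1 path is an assumption).
cat >> /usr/local/spark-2.1/conf/spark-defaults.conf <<'EOF'
spark.driver.extraJavaOptions   -Dhdp.version=2.3.4.0-3485
spark.yarn.am.extraJavaOptions  -Dhdp.version=2.3.4.0-3485
EOF
```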
10-04-2016 05:34 PM
1 Kudo
Some extra thoughts on top of Rajeshbabu's reply:
1. Increase the heap size of the Phoenix Query Server via the PHOENIX_QUERYSERVER_OPTS variable in hbase-env.sh.
2. For writing data, make sure the addBatch() and executeBatch() API calls are used for the best performance.
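A minimal sketch of the first point; the 4 GB heap is an illustrative value, not a recommendation from this thread:

```bash
# In hbase-env.sh on the Phoenix Query Server host: raise the PQS heap.
# -Xms4g/-Xmx4g are hypothetical values; size them to your workload.
export PHOENIX_QUERYSERVER_OPTS="$PHOENIX_QUERYSERVER_OPTS -Xms4g -Xmx4g"
```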
01-11-2019 02:36 PM
@Constantin Stanca Hi, could you please explain why there could be a split-brain situation when the number of ZooKeeper nodes is even? Thanks!
09-01-2016 04:38 PM
1 Kudo
Yes. The standard ("thick") Phoenix JDBC driver does not require the Phoenix Query Server; the Query Server is what provides the "thin" JDBC driver.
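To illustrate the difference, a sketch using the sqlline clients that ship with Phoenix (the hostnames are placeholders; the path assumes an HDP layout):

```bash
# Thick driver: connects straight to HBase via the ZooKeeper quorum;
# no Query Server involved.
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181:/hbase

# Thin driver: connects to the Phoenix Query Server over HTTP
# (8765 is the PQS default port).
/usr/hdp/current/phoenix-client/bin/sqlline-thin.py http://pqs-host:8765
```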
08-30-2016 03:38 PM
1 Kudo
@SBandaru Like @Frank Lu mentioned, there are some things that are not supported when using TDCH with Sqoop. For example, creating the schema automatically with "sqoop import" doesn't work. Instead, use "sqoop create-hive-table" to create the schema first and then do "sqoop import". For more info see: https://community.hortonworks.com/content/kbentry/53531/importing-data-from-teradata-into-hive.html
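A hedged sketch of that two-step flow; the Teradata host, credentials, database, and table names below are all placeholders:

```bash
# Step 1: create the Hive table schema from the Teradata source table.
sqoop create-hive-table \
  --connect jdbc:teradata://td-host/DATABASE=sales \
  --username dbuser --password-file /user/dbuser/.pw \
  --table ORDERS \
  --hive-table sales.orders

# Step 2: import the data into the pre-created Hive table.
sqoop import \
  --connect jdbc:teradata://td-host/DATABASE=sales \
  --username dbuser --password-file /user/dbuser/.pw \
  --table ORDERS \
  --hive-import --hive-table sales.orders
```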
08-18-2016 06:59 PM
1 Kudo
We raised the balancer bandwidth for all the DataNodes:

hdfs dfsadmin -setBalancerBandwidth 100000000

Then, on the client, we ran the command below:

hdfs balancer -Dfs.defaultFS=hdfs://<NN_HOSTNAME>:8020 -Ddfs.balancer.movedWinWidth=5400000 -Ddfs.balancer.moverThreads=1000 -Ddfs.balancer.dispatcherThreads=200 -Ddfs.datanode.balance.max.concurrent.moves=5 -Ddfs.balance.bandwidthPerSec=100000000 -Ddfs.balancer.max-size-to-move=10737418240 -threshold 5

This balances the HDFS data between DataNodes faster; run it when the cluster is not heavily used. Hope this helps.
08-10-2016 09:50 PM
According to the document, we have to go for the minor upgrade. How much time will the upgrade take?
08-10-2016 08:14 PM
That documentation does not look correct to me, but maybe I am missing something. 🙂 No, there is no way to change your shell profiles via Ambari.