Member since: 09-17-2015
Posts: 436
Kudos Received: 736
Solutions: 81
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3837 | 01-14-2017 01:52 AM
 | 5739 | 12-07-2016 06:41 PM
 | 6607 | 11-02-2016 06:56 PM
 | 2171 | 10-19-2016 08:10 PM
 | 5679 | 10-19-2016 08:05 AM
07-18-2017
09:57 AM
If you also want the sqoop command itself included in the log, you can extend the wrapper like this (with set -x and set +x):
{
echo $(date)
set -x
beeline -u ${hive2jdbcZooKeeperUrl} -f "file.hql"
set +x
echo $(date)
} 2>&1 | tee /tmp/sqoop.log
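The same wrapper works directly around a sqoop invocation; a minimal sketch, assuming a hypothetical MySQL source and Hive target (the connection string, credentials, and table names are placeholders):
{
echo $(date)                     # start timestamp
set -x                           # echo each command into the log before it runs
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --hive-import --hive-table default.orders
set +x                           # stop echoing commands
echo $(date)                     # end timestamp
} 2>&1 | tee /tmp/sqoop.log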
02-12-2016
07:29 AM
1 Kudo
For the VirtualBox sandbox, you can access Zeppelin at http://YourIP:9995 instead. That worked for me.
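If port 9995 is not already forwarded from the host into the sandbox VM, a NAT rule can be added; a minimal sketch, assuming the VM is named "Hortonworks Sandbox" (adjust to your VM's actual name) and is powered off:
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "zeppelin,tcp,,9995,,9995"
# then browse to http://127.0.0.1:9995 on the host (or http://YourIP:9995 from another machine)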
09-21-2017
11:27 AM
One thing I learned: when you export a table, use the complete HDFS URI... In some cases I found that it got the command to run where it otherwise failed.
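A minimal sketch of what that looks like, assuming a hypothetical MySQL target and export directory (the namenode host, port, database, and table names are placeholders):
sqoop export \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --export-dir hdfs://namenode.example.com:8020/apps/hive/warehouse/orders
# note the full hdfs://namenode:8020/... URI rather than the bare path /apps/hive/warehouse/orders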
12-09-2015
02:49 PM
Will definitely post the result if I end up trying the patch. Thanks for your answer!
12-13-2015
05:49 PM
Spark is meant for application development. Tez is a library which is used by tools such as Hive to speed things up. Tez isn't suitable for end-user programming.
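For illustration, the typical way Tez is consumed indirectly is by pointing Hive at it; a minimal sketch, assuming a reachable HiveServer2 JDBC URL and a sample table (both placeholders):
beeline -u "jdbc:hive2://hiveserver:10000/default" \
  --hiveconf hive.execution.engine=tez \
  -e "SELECT count(*) FROM web_logs;"
# the query executes as a Tez DAG, but the user writes plain HiveQL, not Tez code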
12-09-2015
07:14 PM
Note that Spark 1.5+ is needed so that Spark jobs running longer than 72h don't fail when their Kerberos tickets expire. You'll also need to supply a keytab with which the Spark AM can renew tickets. For short-lived queries, this problem should not surface.
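A minimal sketch of passing the keytab at submit time, assuming YARN cluster mode and placeholder principal, keytab path, class, and jar names:
spark-submit \
  --master yarn --deploy-mode cluster \
  --principal etl_user@EXAMPLE.COM \
  --keytab /etc/security/keytabs/etl_user.keytab \
  --class com.example.LongRunningJob long-running-job.jar
# the Spark AM uses the keytab to re-acquire tickets for the lifetime of the job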
12-04-2015
08:28 PM
Accepting this as best answer. Thanks @Ali Bajwa
12-04-2015
04:28 AM
Thanks! I will change my script to use the current dir so the jar location remains the same across releases:
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient.jar nnbench -operation create_write
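The reason that path stays stable is that /usr/hdp/current holds symlinks into the versioned install; a quick way to confirm (output varies by HDP version):
ls -l /usr/hdp/current/hadoop-mapreduce-client
# the symlink should resolve into the versioned directory, e.g. /usr/hdp/<version>/hadoop-mapreduce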
12-10-2015
09:46 PM
1 Kudo
While storage space is absolutely critical, as @Neeraj Sabharwal and @Ali Bajwa wrote in their posts, we just "discovered" that CPU is also a key point. When HWX released AMS we began deploying Ambari and AMS on the same machine, but we soon understood that for a production environment it is good practice to use one VM for Ambari and another VM for AMS, so that AMS's very high demand on compute resources doesn't affect Ambari (sometimes, during the aggregation phase, we saw 16 CPUs at 90% for 10-15 minutes).