Member since: 05-31-2016
Posts: 89
Kudos Received: 14
Solutions: 8

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4208 | 03-10-2017 07:05 AM
 | 6091 | 03-07-2017 09:58 AM
 | 3628 | 06-30-2016 12:13 PM
 | 5920 | 05-20-2016 09:15 AM
 | 27623 | 05-17-2016 02:09 PM
04-21-2021
03:45 AM
Sorry, it's max 8060 characters.
04-28-2020
07:28 AM
Please check the command below. Here, 2> /dev/null will consume all the logs and errors, so only the standard output is shown: beeline -u jdbc:hive2://somehost_ip/ -f hive.hql 2> /dev/null > op.txt If you like this, please give me kudos. Thanks!
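For clarity, here is the same command spread over several lines (a sketch; somehost_ip, hive.hql and op.txt are placeholders for your own host, script and output file):

# Send stderr (logs and warnings) to /dev/null and keep only stdout in op.txt.
beeline -u "jdbc:hive2://somehost_ip/" \
        -f hive.hql \
        > op.txt \
        2> /dev/null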
01-08-2020
04:27 AM
You can also set HIVE_SKIP_SPARK_ASSEMBLY before the command, which should remove the warnings: export HIVE_SKIP_SPARK_ASSEMBLY=true; hive -S --database dbname -e 'show tables;'
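If you do not want to export the variable for the whole session, it can also be set just for that one invocation (a sketch; dbname is a placeholder database name):

# Set the variable only for this single hive invocation.
HIVE_SKIP_SPARK_ASSEMBLY=true hive -S --database dbname -e 'show tables;'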
08-01-2018
07:19 PM
@Vani https://cwiki.apache.org/confluence/display/Hive/DynamicPartitions But here it states that we can create a partitioned table from CTAS by specifying the schema, including the partitioning columns, in the create clause. Can we create a bucketed table by specifying the schema? CREATE TABLE T (key int, value string) PARTITIONED BY (ds string, hr int) AS SELECT key, value, ds, hr FROM srcpart;
03-10-2017
07:05 AM
"Issue Fixed" I talked with my DevOps later and found that the classpath for Java was not set in few datanodes in the Cluster. This was stopping the shell action to invoke the JVM at those datanodes. After fixing the Classpath, the job ran successfully
03-07-2017
09:58 AM
After tireless research on the internet, I was able to crack the solution to the issue.
I added a configuration to use the metastore server for the Hive job and it worked.
Here is what I did in the Hive action. ....
<hive xmlns='uri:oozie:hive-action:0.2'>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
        <property>
            <name>hive.metastore.uris</name>
            <value>thrift://10.155.1.63:9083</value>
        </property>
    </configuration>
    <script>${dir}/gsrlQery.hql</script>
    <param>OutputDir=${jobOutput}</param>
</hive>
.... Note: replace the Hive metastore IP accordingly if you are trying to fix a similar problem. To get the metastore details, check the hive-site.xml file located in the /etc/hive/conf directory. Credit: MapR
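A quick way to pull that value out of the config (a sketch; it assumes the standard /etc/hive/conf location mentioned above):

# Print the hive.metastore.uris property and the <value> line that follows it.
grep -A1 'hive.metastore.uris' /etc/hive/conf/hive-site.xml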
09-15-2016
05:43 PM
Currently Spark does not support deploying to YARN from a SparkContext; use spark-submit instead. For unit testing it is recommended to use the local master. The problem is that you cannot set the Hadoop conf from outside the SparkContext; it is picked up from the *-site.xml configs under HADOOP_HOME during spark-submit. So you cannot point to your remote cluster from Eclipse unless you set up the correct *-site.xml files on your laptop and use spark-submit. SparkSubmit is available as a Java class, but I doubt you will achieve what you are looking for with it. You would, however, be able to launch a Spark job from Eclipse to a remote cluster if that is sufficient for you. Have a look at the Oozie Spark launcher as an example. SparkContext is changing dramatically in Spark 2, in favor, I think, of SparkClient, to support multiple SparkContexts; I am not sure what the situation is with that.
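For reference, a minimal spark-submit against a remote YARN cluster would look roughly like this (a sketch; the conf directory, class name and jar are placeholders, and it assumes you have copied the cluster's *-site.xml files locally):

# Point HADOOP_CONF_DIR at the cluster's *-site.xml files, then submit to YARN.
export HADOOP_CONF_DIR=/path/to/cluster-conf
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar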
09-08-2016
05:49 PM
2 Kudos
@Alex Raj I am a little confused by the following statement: "We have HBase tables where the data is in Binary Avro format." HBase stores data in HFiles, which is HBase's own format, not Avro. Maybe what you mean is that you are exporting data from HBase into Avro and using Hive to read that data. If this is true, you can continue to do that, as there are some advantages to this approach; but if you want to keep the data in HBase without moving it, you can simply use Phoenix on top of HBase to read it in place. In fact, you can also use Hive to read data in HBase; it is slow compared to Phoenix, but it will do the job, and maybe that is what you are doing right now. On the other hand, if you want to use Phoenix on top of HBase, you can read HBase tables from Phoenix using SQL. Again, you don't have to export the data. Here is a link to the Phoenix quick start. The point is that Avro doesn't come into play here, and it's a little confusing why you are asking about the Avro format. Between Phoenix and Drill, I would recommend Phoenix because it was created solely for HBase and has better features and support compared to Drill.
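To give a feel for what that looks like, here is a rough sketch of mapping an existing HBase table into Phoenix and querying it with SQL (the ZooKeeper host, the sqlline.py path, and the table/column names "events", "d" and "payload" are all hypothetical placeholders):

# Write the mapping and a test query to a file, then run it through Phoenix's sqlline.py.
cat > /tmp/phoenix_view.sql <<'SQL'
CREATE VIEW "events" (pk VARCHAR PRIMARY KEY, "d"."payload" VARCHAR);
SELECT pk, "d"."payload" FROM "events" LIMIT 10;
SQL
/usr/hdp/current/phoenix-client/bin/sqlline.py zk1.example.com:2181 /tmp/phoenix_view.sql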
07-27-2016
05:24 AM
Thanks for the reply. Let me try this and I will come back with an update.