Member since
02-02-2016
583
Posts
518
Kudos Received
98
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3174 | 09-16-2016 11:56 AM
 | 1354 | 09-13-2016 08:47 PM
 | 5344 | 09-06-2016 11:00 AM
 | 3093 | 08-05-2016 11:51 AM
 | 5169 | 08-03-2016 02:58 PM
04-22-2016
10:39 AM
1 Kudo
@sanjeevan mahajan
I don't think you can define other attributes while creating a table with the "LIKE" option; by design, "LIKE" copies the existing table's definition and other attributes. It's better to use ALTER TABLE after creating the table with "LIKE". EDITED: I think you can change the number of buckets with an ALTER TABLE command if bucketing was already defined on the original table.
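A minimal sketch of that approach, assuming the Hive CLI is available; the table name sales, the copy name sales_copy, the column id, and the bucket count 8 are all hypothetical placeholders:

```shell
# Hypothetical sketch: copy a table's definition with LIKE, then adjust
# bucketing afterwards with ALTER TABLE. Names and counts are examples only.
hive -e "CREATE TABLE sales_copy LIKE sales;"
hive -e "ALTER TABLE sales_copy CLUSTERED BY (id) INTO 8 BUCKETS;"
```

Note that changing bucketing metadata this way does not rewrite existing data; it only affects how new data is laid out.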
04-21-2016
03:24 PM
@krishna Please let us know if this issue was resolved after implementing the suggested changes. Thanks
04-21-2016
07:28 AM
@Amit Dass Hi Amit, if you run the below commands from one terminal, the metastore process will run in the background, and because it is started with nohup it shouldn't terminate even if you close the terminal:
su $HIVE_USER
nohup /usr/hdp/current/hive-metastore/bin/hive --service metastore >/var/log/hive/hive.out 2>/var/log/hive/hive.log &
04-20-2016
02:28 PM
@Amit Dass Please use the below commands to start the hive metastore:
su $HIVE_USER
nohup /usr/hdp/current/hive-metastore/bin/hive --service metastore >/var/log/hive/hive.out 2>/var/log/hive/hive.log &
04-20-2016
01:36 PM
@krishna you simply need to execute these commands at the shell prompt before you run java -cp:
bash# export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/hbase<version>.jar
bash# export CLASSPATH=$CLASSPATH:$HADOOP_CLASSPATH
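Put together, a minimal sketch of the two exports looks like this; the jar path /path/hbase-1.1.2.jar and the class name MyHBaseClient are hypothetical placeholders for your actual HBase jar and main class:

```shell
# Hypothetical jar path: substitute the real HBase client jar location.
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/path/hbase-1.1.2.jar"
# Fold the Hadoop classpath into the general Java classpath.
export CLASSPATH="$CLASSPATH:$HADOOP_CLASSPATH"
# Then run your program against the enriched classpath, e.g.:
# java -cp "$CLASSPATH" MyHBaseClient
```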
04-20-2016
01:33 PM
@Amit Dass So it looks like your hive metastore service is not running. Would you please try to start the metastore service from the Ambari UI? Please see the attachment: screen-shot-2016-04-20-at-23103-pm.png
04-20-2016
11:47 AM
1 Kudo
@Sridhar Babu M
Well, in general you can simply run multiple spark-submit instances in a shell for loop with a dynamic number of cores.
For example:
for i in 1 2 3
do
  spark-submit --class <main-class> --master yarn --deploy-mode cluster --executor-memory 2g --executor-cores 3 <application-jar>
done
Now for scheduling a Spark job, you can use Oozie to schedule and run your Spark action (oozie-spark), or you may try running the Spark program directly using an Oozie shell action (here).
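To actually vary the core count per iteration (the "dynamic number of cores" idea), the loop variable can feed --executor-cores directly. A dry-run sketch: echo prints each command instead of launching it, and com.example.App / app.jar are hypothetical placeholders:

```shell
# Dry run: print one spark-submit invocation per core count.
# com.example.App and app.jar are hypothetical placeholders.
for cores in 1 2 3
do
  echo spark-submit --class com.example.App \
       --master yarn --deploy-mode cluster \
       --executor-memory 2g --executor-cores "$cores" app.jar
done
```

Dropping the echo would launch the jobs for real, one per core setting.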
04-20-2016
10:29 AM
2 Kudos
@Amit Dass
Can you please follow the below steps and share the result?
1. Find the node hosting the hive metastore service; you can use Ambari to figure out the IP/hostname of that node.
2. ssh to that hive metastore node as root and execute the below commands:
bash# ps -aef | grep -i org.apache.hadoop.hive.metastore.HiveMetaStore
bash# lsof -i:9083
3. From some other node, try connecting to the hive metastore port using telnet:
bash# telnet <hivemeta server> 9083
If port 9083 is not occupied and the ps command doesn't show any metastore process, then please try to restart the hive metastore service from the Ambari UI and perform the same checks again.
04-19-2016
01:59 PM
1 Kudo
@Gowrisankar Periyasamy As per the design, it won't wait for the next 84 MB of data; it will directly write the 44 MB block. HDFS blocks are logical entities, and internally HDFS uses the underlying ext3/ext4 disk blocks for the writes, so a short final block occupies only its actual size on disk.
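The arithmetic behind this can be sketched as follows; the 172 MB file size is an illustrative assumption, with the default-style 128 MB HDFS block size implied by the 84 + 44 MB split in the question:

```shell
# Illustration: a 172 MB file with 128 MB blocks yields one full block
# plus a final 44 MB block; the short block occupies only 44 MB on disk.
filesize_mb=172
blocksize_mb=128
full_blocks=$((filesize_mb / blocksize_mb))
last_block_mb=$((filesize_mb % blocksize_mb))
echo "$full_blocks full block(s) + ${last_block_mb} MB final block"
# → 1 full block(s) + 44 MB final block
```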
04-19-2016
11:45 AM
2 Kudos
@krishna sampath Did you try exporting the HADOOP_CLASSPATH variable in your environment before running your Java code? Example:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/hbase<version>.jar