Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2193 | 12-06-2018 12:25 PM |
| | 2222 | 11-27-2018 06:00 PM |
| | 1727 | 11-22-2018 03:42 PM |
| | 2776 | 11-20-2018 02:00 PM |
| | 5006 | 11-19-2018 03:24 PM |
09-26-2017
06:51 PM
1 Kudo
@yvora, make the second curl call to the Location URL that was returned in the first call, i.e., http://xxx:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&namenoderpcaddress=xxx:8020&createflag=&createparent=true&overwrite=false — see the sketch below.
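A minimal sketch of the full two-step flow, assuming an unsecured cluster: the NameNode host/port and the local file name are assumptions; the second URL is the Location value from this thread.

```bash
# Step 1: ask the NameNode where to write; it responds 307 with a Location header
# (the NameNode host/port here is an assumption)
curl -i -X PUT "http://xxx:50070/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy"

# Step 2: PUT the file contents to the Location returned above
# (a.txt as the local file name is an assumption)
curl -i -X PUT -T a.txt "http://xxx:50075/webhdfs/v1/tmp/testa/a.txt?op=CREATE&user.name=livy&namenoderpcaddress=xxx:8020&createflag=&createparent=true&overwrite=false"
```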
09-26-2017
06:37 PM
Hi @Akrem Latiwesh, try setting spark.sql.warehouse.dir in the Spark configuration; see the sketch below. Thanks, Aditya
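A hedged sketch of two common ways to set it (the warehouse path is illustrative):

```bash
# Per job, via spark-submit or spark-shell
spark-submit --conf spark.sql.warehouse.dir=/apps/hive/warehouse my_app.py

# Or permanently, in conf/spark-defaults.conf
echo "spark.sql.warehouse.dir /apps/hive/warehouse" >> conf/spark-defaults.conf
```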
09-26-2017
06:26 PM
1 Kudo
Hi @yvora, the above API call looks fine. Is it returning 201 or some other status code? Did you make sure that the /tmp/testa path exists and that livy has write permission on it? See the sketch below. Thanks, Aditya
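A quick sketch of how to verify (the chown/chmod fixes are illustrative and for testing only):

```bash
# Does /tmp/testa exist, and who owns it?
hdfs dfs -ls -d /tmp/testa

# If livy lacks write access, two illustrative fixes for a test:
hdfs dfs -chown livy /tmp/testa
hdfs dfs -chmod 777 /tmp/testa
```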
09-26-2017
11:34 AM
When I run the Hive shell, the message below gets printed continuously, although the queries work fine. How do I get rid of these messages?

mbind: Operation not permitted

Thanks, Aditya
Labels:
- Apache Hive
09-25-2017
04:15 PM
1 Kudo
Hi @Adrián Gil, make sure that both Kafka brokers are running.

1) Did you try restarting all the Kafka brokers from Ambari?
2) Do you see any alerts for the Kafka brokers in the UI?

If you do not see any alerts in the UI, check whether the brokers are actually running with ps -ef | grep kafka. Also, pass the znode while creating the topic, i.e., --zookeeper bigdata:2181/kafka. If this doesn't work, try a replication factor of 1 just to confirm whether the issue is with topic creation itself or with the number of brokers; see the sketch below. Thanks, Aditya
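A sketch of those checks, assuming an HDP-style install path; the topic name is an assumption, and the znode path is the one mentioned above:

```bash
# Are the broker processes actually running on each host?
ps -ef | grep kafka

# Create the topic against the Kafka znode (topic name is illustrative)
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
  --zookeeper bigdata:2181/kafka \
  --replication-factor 2 \
  --partitions 1 \
  --topic test

# If this fails, retry with --replication-factor 1 to isolate the problem
```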
09-25-2017
11:02 AM
1 Kudo
Hi @Gayathri Devi, Hive provides a lot of date functions, which you can check here. You can use them in a WHERE clause, or use BETWEEN for your range queries; see the sketch below. You can also check this post for best practices on partitioning Hive tables by date. Thanks, Aditya
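For example, a sketch of two range queries (the orders table and order_date column are assumptions):

```bash
# BETWEEN for an explicit date range
hive -e "SELECT * FROM orders WHERE order_date BETWEEN '2017-01-01' AND '2017-03-31';"

# A date function for a rolling window, e.g. the last 30 days
hive -e "SELECT * FROM orders WHERE datediff(current_date, order_date) <= 30;"
```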
09-22-2017
12:20 PM
Hi @Kiran Hebbar, in the commands below, replace localhost with the IP/hostname of your Kafka broker in the cloud.

```bash
kafka/bin/kafka-console-producer.sh \
  --broker-list localhost:9092 \
  --topic my-topic

kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --from-beginning
```

Thanks, Aditya
09-20-2017
05:50 PM
3 Kudos
Hi @Ramanathan Ramaiyah, to execute LLAP queries from Zeppelin you need to configure the interpreter. For example, if you want to connect using JDBC, add the following parameters under the jdbc interpreter section:

```
hive_interactive.url = <can be obtained from Ambari; check HiveServer2 Interactive JDBC URL>
hive_interactive.driver = org.apache.hive.jdbc.HiveDriver
hive_interactive.user = <user>
hive_interactive.password = <pwd>
```

In your Zeppelin paragraph, use %jdbc(hive_interactive); see the sketch below. Hope this helps. Thanks, Aditya
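A sketch of what such a paragraph looks like (the query itself is illustrative):

```
%jdbc(hive_interactive)
SELECT current_database();
```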
09-19-2017
10:19 AM
Hi @Sanaz Janbakhsh, please try increasing the number of NodeManagers and then run the jobs again. Also look at the parameters under the Scheduler section of YARN, (yarn.scheduler.capacity.maximum-applications / yarn.scheduler.capacity.<queue-path>.maximum-applications) and (yarn.scheduler.capacity.maximum-am-resource-percent / yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent), which restrict the number of concurrently running and pending applications; illustrative values are sketched below. Please see the link for more info: https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html Thanks, Aditya
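A sketch with assumed values (these are set in capacity-scheduler.xml or via Ambari; the "default" queue name and the numbers are assumptions):

```
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.root.default.maximum-applications=5000
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.root.default.maximum-am-resource-percent=0.4
```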
09-19-2017
09:53 AM
2 Kudos
Hi @Mrinmoy Choudhury, you can use the scripts here to generate the data: https://github.com/cartershanklin/sandbox-datagen. You can extract the tar file and run datagen.sh with different scales of data; see the sketch below. Thanks, Aditya
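A sketch of the steps; the tarball name and the scale argument are assumptions, so check the repo's README for the exact usage:

```bash
git clone https://github.com/cartershanklin/sandbox-datagen
cd sandbox-datagen
tar -xzvf datagen.tgz   # tarball name is an assumption
./datagen.sh 10         # scale argument is an assumption
```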