Member since: 02-01-2019
Posts: 650
Kudos Received: 143
Solutions: 117
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2612 | 04-01-2019 09:53 AM
 | 1376 | 04-01-2019 09:34 AM
 | 6476 | 01-28-2019 03:50 PM
 | 1484 | 11-08-2018 09:26 AM
 | 3610 | 11-08-2018 08:55 AM
01-17-2018
08:42 PM
1 Kudo
@Michael Bronson Any reason why you want to manage it separately? You just need to update the repo in the Ambari UI and the rest (distributing/updating the repo on all the agents) will be taken care of by Ambari. This way Ambari can manage the repo file consistently across all the agents.
01-17-2018
12:01 AM
@Nilesh Please refer to this doc: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_kafka-component-guide/content/running-mirrormaker-with-kerberos.html
01-16-2018
11:57 PM
@Ivan Mladenov Looks like you are using consumer properties in the producer config, and hence Kafka is ignoring them. Refer to https://kafka.apache.org/0100/documentation.html for all valid configs for the version you are using.
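For reference, a minimal producer sketch with valid producer-side settings (the broker address, topic and key/value are placeholders, not taken from your setup); consumer-only properties such as group.id or auto.offset.reset do not belong here and would only trigger the "supplied but isn't a known config" warning:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "broker1:6667")  // placeholder broker address
props.put("acks", "all")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
// Consumer-only settings (group.id, auto.offset.reset, ...) are ignored by the producer.

val producer = new KafkaProducer[String, String](props)
producer.send(new ProducerRecord[String, String]("test-topic", "key", "value"))
producer.close()
```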
01-16-2018
10:54 PM
@Saurabh Looks like you have Python 3 as the default python. The print statement became a function in Python 3 (print "x" has to be written as print("x")), so /etc/hadoop/conf/topology_script.py, which still uses the Python 2 syntax, is complaining about that.
12-22-2017
11:45 AM
@Phoncy Joseph You're welcome. Feel free to accept the answer if this helps you.
12-19-2017
03:38 PM
@Mario Borys This tutorial and Zeppelin notebook should help you:
https://hortonworks.com/tutorial/hands-on-tour-of-apache-spark-in-5-minutes/
https://raw.githubusercontent.com/hortonworks-gallery/zeppelin-notebooks/hdp-2.6/2CBTZPY14/note.json
Feel free to accept the answer if this helps you.
12-19-2017
03:26 PM
1 Kudo
@Phoncy Joseph There are only 3 ways as of now:
- Global configs in mapred-site.xml
- Passing them on the command line while submitting the job
- Hardcoding them in the code (although this is not dynamic); see the sketch below
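For the third option, a minimal sketch of hardcoding the settings in the driver (class and job names are made up for illustration); values set this way apply only to that job and need a recompile to change:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.Job

object MemoryConfigExample {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // These override mapred-site.xml for this job only.
    conf.set("mapreduce.map.memory.mb", "2048")
    conf.set("mapreduce.reduce.memory.mb", "2048")
    val job = Job.getInstance(conf, "memory-config-example")
    // ... set mapper, reducer, input/output paths here ...
  }
}
```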
12-18-2017
07:08 PM
@ed day: You need to copy the Spark jars to HDFS and configure the property spark.yarn.jars or spark.yarn.archive appropriately. Please refer to the official documentation: https://spark.apache.org/docs/latest/running-on-yarn.html#preparations
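A minimal sketch, assuming the jars have already been uploaded to an HDFS path such as hdfs:///apps/spark/jars (the path is hypothetical); the same property can also be set in spark-defaults.conf or passed with --conf to spark-submit:

```scala
import org.apache.spark.sql.SparkSession

// spark.yarn.jars must be set before the YARN application is launched,
// i.e. before the SparkSession/SparkContext is created.
val spark = SparkSession.builder()
  .appName("yarn-jars-example")
  .master("yarn")
  .config("spark.yarn.jars", "hdfs:///apps/spark/jars/*.jar")  // hypothetical HDFS path
  .getOrCreate()
```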
12-18-2017
06:49 PM
@Mario Borys: You should register the DataFrame as a table and then run a select query on that registered table: equipment.registerTempTable("equipment_table")
sqlContext.sql("select * from equipment_table")
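To actually see the rows, the query result can be displayed, for example (still assuming the equipment DataFrame from above):

```scala
sqlContext.sql("select * from equipment_table").show()
```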
12-18-2017
06:42 PM
1 Kudo
@Phoncy Joseph
The configs can be changed when you are submitting the job, not while the job is already running. Below are the options to pass while submitting the job:
-Dmapreduce.map.memory.mb=2048 -Dmapreduce.reduce.memory.mb=2048
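Note that these -D options are interpreted by Hadoop's GenericOptionsParser, so the driver has to be launched through ToolRunner (or parse them itself). A minimal sketch, with an illustrative class name:

```scala
import org.apache.hadoop.conf.{Configuration, Configured}
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.util.{Tool, ToolRunner}

// ToolRunner parses the -D options into the Configuration before run() is
// called, so -Dmapreduce.map.memory.mb=2048 is already applied in getConf.
class ExampleDriver extends Configured with Tool {
  override def run(args: Array[String]): Int = {
    val job = Job.getInstance(getConf, "example-job")
    // ... set mapper, reducer, input/output paths here ...
    if (job.waitForCompletion(true)) 0 else 1
  }
}

object ExampleDriver {
  def main(args: Array[String]): Unit =
    System.exit(ToolRunner.run(new Configuration(), new ExampleDriver, args))
}
```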