Member since: 05-09-2016
Posts: 39
Kudos Received: 23
Solutions: 12
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1789 | 06-29-2017 03:11 PM |
| | 1114 | 06-28-2017 12:06 PM |
| | 1052 | 06-28-2017 08:28 AM |
| | 3099 | 06-21-2017 06:19 AM |
| | 920 | 06-13-2017 01:12 PM |
07-04-2017
11:12 AM
@Simran Kaur Please try "mapreduce.job.priority", as "mapred.job.priority" is now deprecated.
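As a sketch of where the property goes (only the property name comes from this post; the surrounding action structure and the HIGH value are illustrative assumptions), it can be set in the configuration block of an Oozie action:

```xml
<!-- Illustrative Oozie action configuration fragment.
     Only mapreduce.job.priority is from the post; the rest is an assumption. -->
<configuration>
  <property>
    <name>mapreduce.job.priority</name>
    <!-- Accepted values: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW -->
    <value>HIGH</value>
  </property>
</configuration>
```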
07-04-2017
06:30 AM
@Simran Kaur You can create a separate YARN queue for these jobs, using either the Capacity Scheduler configuration or the YARN Queue Manager view in Ambari: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-views/content/using_the_capacity_scheduler_view.html You can then use this queue for your high-priority Oozie jobs: https://stackoverflow.com/questions/32438052/job-queue-for-hive-action-in-oozie Kindly let me know if this helps.
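A minimal sketch of the queue definition, assuming a queue named "priority" (the queue name and capacity values here are illustrative, not from the post), would go in capacity-scheduler.xml:

```xml
<!-- capacity-scheduler.xml fragment; queue name and capacities are assumptions -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,priority</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.priority.capacity</name>
  <value>30</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>70</value>
</property>
```

The Oozie job would then target that queue, for example by setting mapreduce.job.queuename=priority in its configuration, as described in the Stack Overflow link above.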
06-29-2017
03:11 PM
@npandey This can happen due to a conflict between the RuntimeDelegate from Jersey in the YARN client libs and the copy in Spark's assembly jar. Please refer to the article below for more information. https://community.hortonworks.com/articles/101145/spark-job-failure-with-javalanglinkageerror-classc.html
Also, note that hive-site.xml should contain only Spark-related properties, such as the metastore information. You can download this for the Spark job via the "Download Client Configs" option in Ambari.
Passing the complete file (/etc/hive/conf/hive-site.xml) may include ATS-related properties, which can also cause this issue.
06-29-2017
09:45 AM
@Sebastien Chausson You can do this by adding --files to the spark-opts tag of your Spark action: <spark-opts>--executor-memory 20G --num-executors 50 --files hdfs://(complete hdfs path)</spark-opts> As an alternative, you could use a shell action and pass your spark-submit command to it directly.
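Shown in context (only the spark-opts line comes from the post; the action name, master, class, and jar elements are placeholder assumptions), the tag sits inside the Spark action of workflow.xml:

```xml
<!-- Illustrative Spark action; everything except <spark-opts> is an assumption -->
<spark xmlns="uri:oozie:spark-action:0.1">
  <master>yarn-cluster</master>
  <name>my-spark-job</name>
  <class>com.example.Main</class>
  <jar>${nameNode}/apps/my-app.jar</jar>
  <spark-opts>--executor-memory 20G --num-executors 50 --files hdfs://(complete hdfs path)</spark-opts>
</spark>
```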
06-28-2017
12:06 PM
1 Kudo
@Pranav Manwatkar Please make sure the steps below are done; missing them can cause this behaviour.
1. Bootstrapping in the target database.
2. The following properties need to be configured for Hive in both clusters:
hive.metastore.event.listeners = org.apache.hive.hcatalog.listener.DbNotificationListener
hive.metastore.dml.events = true
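In hive-site.xml form, the two properties above look like this (the property names and values come from the post; the XML wrapping is the standard hive-site layout):

```xml
<!-- hive-site.xml fragment for both clusters -->
<property>
  <name>hive.metastore.event.listeners</name>
  <value>org.apache.hive.hcatalog.listener.DbNotificationListener</value>
</property>
<property>
  <name>hive.metastore.dml.events</name>
  <value>true</value>
</property>
```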
06-28-2017
08:40 AM
1 Kudo
I am trying to load a CSV file into PySpark with the query below. sample = sqlContext.load(source="com.databricks.spark.csv", path='/tmp/test/20170516.csv', header=True, inferSchema=True) But I am getting an error saying: py4j.protocol.Py4JJavaError: An error occurred while calling o137.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
Labels:
- Apache Spark
06-28-2017
08:28 AM
1 Kudo
@nyadav That message is not causing your workflow to fail. Please see the article below, which explains the same:
https://community.hortonworks.com/questions/57384/oozie-mysql-error-hortonworks-25oozie-mysql-error.html Can you also provide the Oozie logs for the scheduled workflow? oozie job -oozie http://<oozie host>:<oozie port>/oozie -log <WF_ID>
06-21-2017
06:19 AM
@Peter Kim The issue is caused by a mismatch between the hostnames in Ambari and SmartSense. You can use the doc below to update the name in Ambari: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-administration/content/ch_changing_host_names.html You can also try to deregister and re-register the agent with the steps below. If the host is no longer accessible, use a curl command to unregister it from SmartSense.
For example: curl -u admin:admin -X PUT -d "{\"status\":\"UNREGISTERED\"}" http://HST_SERVER:9000/api/v1/agents/<AGENT_HOSTNAME>
If a node in the HDP cluster is not listed by the hst list-agents command, it can be added to SmartSense with the following command, or via the Add Service option in the Ambari Hosts UI: # hst setup-agent
Enter SmartSense Tool Server hostname: HST_SERVER
06-19-2017
02:35 PM
@Gaurav Vats Please run the query below, followed by an Ambari server restart. After that, log in using "admin" as the password. UPDATE ambari.users SET user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_name='admin';
06-13-2017
01:12 PM
@rakanchi Can you try to add this using the Falcon CLI with the command below, updating the complete mirror definition in the process XML file? falcon entity -submit -type process -file <process.xml> The UI validation is causing it to fail.