Member since: 12-15-2015
Posts: 16
Kudos Received: 9
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6582 | 08-11-2016 01:39 AM |
10-01-2016
02:34 AM
Hi there, you can get the hadoop-examples jar from here: http://repo.hortonworks.com/content/repositories/releases/org/apache/hadoop/hadoop-examples/ Cheers, Andrew
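A rough sketch of pulling the jar from that repository's standard Maven layout; <version> is a placeholder, so browse the listing above and substitute a real hadoop-examples release.
version=<version>   # placeholder -- whichever release matches your cluster
curl -L -O "http://repo.hortonworks.com/content/repositories/releases/org/apache/hadoop/hadoop-examples/${version}/hadoop-examples-${version}.jar"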
08-11-2016
01:39 AM
1 Kudo
CoarseGrainedExecutorBackend should be in the spark-assembly jar. These might be relevant to you: https://issues.apache.org/jira/browse/OOZIE-2482 https://community.hortonworks.com/articles/49479/how-to-use-oozie-shell-action-to-run-a-spark-job-i.html https://developer.ibm.com/hadoop/2015/11/05/run-spark-job-yarn-oozie/ Also try setting the SPARK_HOME variable in hadoop-env.sh. Cheers, Andrew
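A minimal sketch of that last suggestion, assuming an HDP-style layout where the Spark client lives under /usr/hdp/current/spark-client (adjust the path for your install):
# In hadoop-env.sh (via Ambari or directly on the node running the action)
export SPARK_HOME=/usr/hdp/current/spark-client
# Sanity check that the class is actually packaged in the assembly jar
jar tf $SPARK_HOME/lib/spark-assembly-*.jar | grep CoarseGrainedExecutorBackend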
07-21-2016
01:30 AM
You can see examples of the various alert definitions by getting them via the API; the JSON returned can be used as a template for creating your own definition. https://docs.daplab.ch/ambari_cheat_sheet/
eg.
curl -v -X GET -u ${ambari_credentials} -H 'X-Requested-By:ambari' "https://admin.daplab.ch/api/v1/clusters/DAPLAB02/alert_definitions?AlertDefinition/service_name=HIVE"
The five alert types are described here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_Ambari_Users_Guide/content/_alert_definitions_and_instances.html
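As a rough sketch of the follow-up step, assuming you have saved an edited copy of one of the returned definitions to a file named my_alert_definition.json (a made-up name), you can POST it back to the same cluster endpoint to create your new definition:
curl -v -X POST -u ${ambari_credentials} -H 'X-Requested-By:ambari' -d @my_alert_definition.json "https://admin.daplab.ch/api/v1/clusters/DAPLAB02/alert_definitions"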
05-13-2016
07:27 PM
If you are looking for a way to monitor the job and determine which nodes it ran on, how many executors it used, etc., you can see this in the Spark Web UI located at <sparkhost>:4040. http://spark.apache.org/docs/latest/monitoring.html http://stackoverflow.com/questions/35059608/pyspark-on-cluster-make-sure-all-nodes-are-used Cheers, Andrew
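If you would rather script it, the monitoring doc above also describes a REST API served on the same UI port; a rough sketch, with <sparkhost> and <app-id> as placeholders:
# List applications currently known to the running UI
curl "http://<sparkhost>:4040/api/v1/applications"
# Executors (and the hosts they run on) for one application
curl "http://<sparkhost>:4040/api/v1/applications/<app-id>/executors"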
04-22-2016
02:10 AM
The syntax above, FROM salary employees, aliases the salary table as employees. I guess your query needs to join to the employees table with a LEFT or INNER JOIN? Also, you probably don't want to overwrite the table you are selecting from. Something like: INSERT OVERWRITE TABLE employees SELECT employees.<all columns but salary_date>, salary.salary_date FROM salary INNER JOIN employees ON salary.employee_number = employees.employee_number;
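A rough sketch of the safer pattern, writing the joined result to a separate table instead of overwriting employees; the column names employee_number and name are made up for illustration, so substitute the real employees columns:
-- Hypothetical target table; list the real employees columns in place of employee_number, name
CREATE TABLE employees_with_salary_date AS
SELECT e.employee_number,
       e.name,
       s.salary_date
FROM employees e
INNER JOIN salary s
  ON s.employee_number = e.employee_number;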
04-22-2016
02:05 AM
You can use LATERAL VIEW with explode(), or the inline() function, to get at the data in the struct column. https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LateralView https://www.qubole.com/resources/cheatsheet/hive-function-cheat-sheet/ Cheers, Andrew
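A minimal sketch, assuming a hypothetical table events with columns id string and items array<struct<name:string,qty:int>>:
-- explode() gives one row per struct; dot notation then reads the fields
SELECT e.id, item.name, item.qty
FROM events e
LATERAL VIEW explode(e.items) t AS item;
-- inline() explodes and extracts the struct fields in one step
SELECT e.id, t.name, t.qty
FROM events e
LATERAL VIEW inline(e.items) t AS name, qty;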
03-24-2016
08:16 PM
1 Kudo
Further details on this: Configuring the Spark History Server to Use HDFS https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_installing_manually_book/content/config-shs-hdfs.html
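The gist, as a rough sketch (the hdfs:///spark-history path, ownership and permissions here are illustrative; follow the doc above for your version):
# Create a shared event-log directory in HDFS
hdfs dfs -mkdir -p /spark-history
hdfs dfs -chown -R spark:hadoop /spark-history
hdfs dfs -chmod 1777 /spark-history
# Then point both the driver and the history server at it in spark-defaults.conf:
# spark.eventLog.enabled true
# spark.eventLog.dir hdfs:///spark-history
# spark.history.fs.logDirectory hdfs:///spark-history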
03-16-2016
02:02 PM
2 Kudos
Does Atlas 0.5 support ALTER statements in the Hive Bridge, or is that added in a later version? These statements don't seem to be reflected in the schema for the table:
ALTER TABLE test SET TBLPROPERTIES ("comment"="test comment");
ALTER TABLE test CHANGE a a string COMMENT 'test column comment';
ALTER TABLE test ADD COLUMNS (b string COMMENT 'test column comment 2');
In addition, transient_lastDdlTime is set to Jan 17, 1970. This statement, by contrast, does show updated metadata:
CREATE TABLE test2 (b string COMMENT 'test column comment 2');
thanks, Andrew
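For reference, a quick way to confirm on the Hive side what the bridge should be picking up (using the test table from the statements above):
-- Check the column comments and table properties in the Hive metastore itself
DESCRIBE FORMATTED test;
SHOW TBLPROPERTIES test;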
Labels:
- Apache Atlas
03-15-2016
08:23 PM
1 Kudo
Updating the link above: https://github.com/apache/incubator-atlas/blob/master/repository/src/main/scala/org/apache/atlas/query/QueryParser.scala
03-06-2016
05:17 PM
1 Kudo
Explicit denial is mentioned for the next version of Ranger, 0.6: https://cwiki.apache.org/confluence/display/RANGER/Deny-conditions+and+excludes+in+Ranger+policies