Member since 07-04-2016
63 Posts
141 Kudos Received
10 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1444 | 01-23-2018 11:47 AM
 | 3527 | 01-02-2018 02:01 PM
 | 2261 | 01-02-2018 12:23 PM
 | 1246 | 12-26-2017 05:09 PM
 | 1035 | 06-23-2017 08:59 AM
06-21-2018
01:32 AM
@Venkat You should be getting both the header and the data with this command; setting "hive.cli.print.header=true" simply prints the header along with the data. hive -e 'set hive.cli.print.header=true; select * from your_Table' | sed 's/[\t]/,/g' > /home/yourfile.csv What result do you see if you just run "select * from your_Table"? Does the table contain any data?
06-20-2018
05:33 AM
1 Kudo
@Abhinav Kumar spark-submit is the right way to submit a Spark application, because spark-submit sets up the correct classpaths for you. If you run it as a plain Java program, you have to take care of all of that setup yourself, which quickly gets tricky, and I suspect the issue you are facing now is due to incorrect jars on the classpath. Also, please help me understand the use case: what is the purpose of launching the Spark job through the YARN REST API? A simple spark-submit will take care of negotiating resources from YARN and running the application.
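For reference, a typical YARN submission looks roughly like the sketch below. The class name, jar path, and resource sizes are placeholders I made up, not values from this thread; writing the command to a script file just makes it easy to review before running.

```shell
# Hypothetical values throughout: replace the class, jar path, and sizes
# with your application's own. Saved to a script for review before running.
cat > submit-myapp.sh <<'EOF'
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --num-executors 2 \
  --executor-memory 2g \
  /path/to/my-app.jar
EOF
```

Running `sh submit-myapp.sh` then lets spark-submit negotiate containers from YARN and assemble the classpath, which is exactly the work the REST API route forces you to do by hand.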
06-20-2018
02:32 AM
1 Kudo
@Abhinav Kumar How are you submitting the Spark job? Are you using "spark-submit"? Please provide the command used to submit the job.
06-18-2018
05:18 AM
2 Kudos
@Venkat Please try this: hive -e 'set hive.cli.print.header=true; select * from your_Table' | sed 's/[\t]/,/g' > /home/yourfile.csv Original answer: https://stackoverflow.com/questions/17086642/how-to-export-a-hive-table-into-a-csv-file
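To see what the sed step in that pipeline does, here is a minimal stand-in. The sample record is made up; in the real pipeline this input comes from the hive query. This assumes GNU sed, which accepts \t inside a bracket expression.

```shell
# Fake one row of tab-separated hive output and convert the tabs to commas,
# exactly as the sed stage of the export pipeline does (GNU sed assumed).
printf '1\talice\t2018\n' | sed 's/[\t]/,/g'
# -> 1,alice,2018
```

Note that this simple substitution does not quote fields, so values that themselves contain commas or tabs will produce a malformed CSV.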
01-31-2018
09:44 AM
1 Kudo
@Gerald BIDAULT
Is it feasible to install Python 2.7 on your CentOS 6 cluster? If you can install Python 2.7, modify spark-env.sh to use it by setting the properties below: export PYSPARK_PYTHON=<path to python 2.7>
export PYSPARK_DRIVER_PYTHON=python2.7
Steps for changing spark-env.sh: 1) Log in to Ambari 2) Navigate to the Spark service 3) Under 'Advanced spark2-env', modify 'content' to add the properties described above. Attaching a screenshot. spark-changes.png
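The resulting additions to the 'content' field would look roughly like this. The /usr/local/bin/python2.7 path is a placeholder for wherever Python 2.7 actually lands on your nodes:

```shell
# Appended to 'Advanced spark2-env' -> content in Ambari.
# /usr/local/bin/python2.7 is a placeholder; use the real install path.
export PYSPARK_PYTHON=/usr/local/bin/python2.7
export PYSPARK_DRIVER_PYTHON=python2.7
```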
01-31-2018
09:07 AM
1 Kudo
@Long M Can you try the Ambari quick link to access the Zeppelin UI? Attaching a screenshot so that it's more evident. zeppelin-quicklink.jpg
01-31-2018
08:41 AM
2 Kudos
@Gerald BIDAULT I don't think this is possible. If you have two different Python versions, the application will fail with the exception "Exception: Python in worker has different version 2.6 than that in driver 2.7, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set." You can also refer to this question: https://community.hortonworks.com/questions/101952/zeppelin-pyspark-cannot-run-with-different-minor-v.html
01-23-2018
12:35 PM
1 Kudo
@Ravikiran Dasari Please accept the answer if it addresses your query 🙂 or let me know if you need any further information.
01-23-2018
11:47 AM
3 Kudos
@Ravikiran Dasari Yes, Ambari 2.5.2 does have the Oozie view, which is also called Workflow Designer. Please follow these steps to enable the view: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_workflow-management/content/config_wfm_view.html Also use https://community.hortonworks.com/articles/82964/getting-started-with-apache-ambari-workflow-design.html to get started with Workflow Designer.
01-02-2018
02:01 PM
3 Kudos
@Michael Bronson I guess you would be using curl, so I am providing the examples in curl. This one is the PUT call:
[root@ctr-e136-1513029738776-28711-01-000002 ~]# curl -XPUT -u admin:admin --header X-Requested-By:ambari http://172.27.67.14:8080/api/v1/clusters/cl1/hosts/ctr-e136-1513029738776-28711-01-000002.hwx.site/host_components -d '{"RequestInfo":{"context":"Stop All Host Components","operation_level":{"level":"HOST","cluster_name":"cl1","host_names":"ctr-e136-1513029738776-28711-01-000002.hwx.site"},"query":"HostRoles/component_name.in(JOURNALNODE,SPARK_JOBHISTORYSERVER)"},"Body":{"HostRoles":{"state":"STARTED"}}}'
This one is the GET call, and it does not require any request body:
"http://<ambari_server_host>:8080/api/v1/clusters/cl1/hosts/<host_name>/host_components"
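Spelled out as a runnable sketch, the GET side looks like this. The Ambari host, cluster name, and node name below are placeholders, not the ones from this thread:

```shell
# Placeholder endpoint values; substitute your Ambari server, cluster, and host.
AMBARI=http://ambari.example.com:8080
URL="$AMBARI/api/v1/clusters/cl1/hosts/worker1.example.com/host_components"
# A GET needs only credentials: no request body, and no X-Requested-By
# header (Ambari requires that header only for modifying requests).
curl -u admin:admin "$URL"
```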