Member since: 04-11-2016
Posts: 471
Kudos Received: 325
Solutions: 118

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2685 | 03-09-2018 05:31 PM |
| | 3551 | 03-07-2018 09:45 AM |
| | 3242 | 03-07-2018 09:31 AM |
| | 5443 | 03-03-2018 01:37 PM |
| | 2952 | 10-17-2017 02:15 PM |
09-15-2016
09:16 AM
2 Kudos
Hi,

If you are talking about the API exposed by NiFi, NiFi exposes a REST API: https://nifi.apache.org/docs/nifi-docs/rest-api/index.html

If you want to develop your own processor in Java, this is straightforward, and you may find the developer guide interesting: https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html
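As a small sketch of calling that REST API from Python: the `/flow/status` endpoint is taken from the linked REST API docs, while the host and port below are placeholders for your own instance.

```python
# Sketch: query NiFi's REST API for the controller's flow status.
# Endpoint from the public REST API docs; base URL is a placeholder.
from urllib.request import Request, urlopen

def flow_status_request(base_url):
    # Build a GET request for the /flow/status endpoint.
    return Request(base_url.rstrip("/") + "/nifi-api/flow/status")

# Usage against a running NiFi instance (commented out here):
# with urlopen(flow_status_request("http://localhost:8080")) as resp:
#     print(resp.read())
```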
09-15-2016
09:14 AM
2 Kudos
Hi,

I'd recommend having a look at the NiFi documentation: https://nifi.apache.org/docs/nifi-docs/

And the HDF documentation as well: http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.0.0/index.html
09-15-2016
09:12 AM
4 Kudos
Hi, you may find the following posts interesting:

http://bryanbende.com/development/2016/08/22/apache-nifi-1.0.0-using-the-apache-ranger-authorizer
http://bryanbende.com/development/2016/08/17/apache-nifi-1-0-0-authorization-and-multi-tenancy
http://bryanbende.com/development/2016/08/31/apache-nifi-1.0.0-kerberos-authentication
09-14-2016
11:42 AM
"When Avro data is stored in a file, its schema is stored with it, so that files may be processed later by any program."

I believe the schema is required, so it is stored with the data you imported into HDFS. Could you run the following command to get more details about the error?

yarn logs -applicationId application_1473774257007_0002
09-14-2016
11:34 AM
2 Kudos
I'd recommend upgrading to a more recent version of HDP. In recent versions of Ambari you can install Grafana, which provides such graphs and much more: http://hortonworks.com/blog/advanced-metrics-visualization-dashboarding-apache-ambari/

Otherwise, you have a real-time view of your current queue usage in the YARN ResourceManager UI.

If you want to build something custom, you can have a look at the REST APIs exposed by YARN: https://hadoop.apache.org/docs/r2.7.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html

Hope this helps.
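As an illustration of the custom route, here is a small Python sketch that extracts per-queue usage from the ResourceManager's `/ws/v1/cluster/scheduler` endpoint. The response layout below follows the Capacity Scheduler case, and the field names are assumptions to verify against the linked REST docs.

```python
# Sketch: pull per-queue used capacity from the RM REST API
# (GET http://<rm-host>:8088/ws/v1/cluster/scheduler).
def queue_usage(scheduler_json):
    # Navigate the assumed Capacity Scheduler response structure.
    info = scheduler_json["scheduler"]["schedulerInfo"]
    queues = info.get("queues", {}).get("queue", [])
    return {q["queueName"]: q["usedCapacity"] for q in queues}

# Illustrative response fragment (not captured from a real cluster):
sample = {"scheduler": {"schedulerInfo": {
    "type": "capacityScheduler",
    "queues": {"queue": [{"queueName": "default", "usedCapacity": 25.0}]},
}}}
print(queue_usage(sample))  # {'default': 25.0}
```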
09-14-2016
11:29 AM
1 Kudo
You need to go to where your Spark client is installed. Depending on your install/OS, it may be: /usr/hdp/current/spark-client/sbin

Hope this helps.
09-14-2016
10:56 AM
Do you have a full stack trace that you could share? What is your schema? (Some types may not yet be supported by the Teradata connector, depending on the version.)
09-14-2016
09:27 AM
Could you try with lower case?

--compression-codec lzop

Based on the code (https://github.com/apache/sqoop/blob/trunk/src/java/org/apache/sqoop/io/CodecMap.java), it may be case sensitive.
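To illustrate the suspicion, here is a hypothetical Python sketch of a case-sensitive codec lookup in the spirit of Sqoop's CodecMap; the alias-to-class mapping below is illustrative, not Sqoop's actual table.

```python
# Hypothetical illustration of why '--compression-codec LZOP' can fail:
# if the alias table is keyed in lower case (as the linked CodecMap
# source suggests), an exact-match lookup is case sensitive.
CODEC_MAP = {
    "lzop": "com.hadoop.compression.lzo.LzopCodec",
    "lzo": "com.hadoop.compression.lzo.LzoCodec",
    "gzip": "org.apache.hadoop.io.compress.GzipCodec",
}

def resolve_codec(name):
    # Exact-case lookup: 'lzop' resolves, 'LZOP' does not.
    if name not in CODEC_MAP:
        raise ValueError("unknown codec: " + name)
    return CODEC_MAP[name]

print(resolve_codec("lzop"))  # com.hadoop.compression.lzo.LzopCodec
# resolve_codec("LZOP")       # would raise ValueError
```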
09-14-2016
08:30 AM
3 Kudos
Hi,

Could you try:

sqoop import --connect jdbc:oracle:thin:@//**.***.***.***:1521/*** --username ***** --password ******* --table COUNTRIES --target-dir /user/aps/test --compress --compression-codec LZOP -m 1

Or LZO if you want LZO instead of LZOP.
09-14-2016
08:18 AM
1 Kudo
Hi,

Based on this documentation: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_HortonworksConnectorForTeradata/content/ch_HortonworksConnectorForTeradata.html#ch_HortonworksConnectorForTeradata-Appendix-Options-Sqoop

I think you need to:

- "Note: If you will run Avro jobs, download avro-mapred-1.7.4-hadoop2.jar and place it under $SQOOP_HOME/lib."
- Pass the Avro schema of the data you want to import through the 'avroschemafile' option. This is a connector-specific argument, so you would need to do something like:

sqoop import --connection-manager org.apache.sqoop.teradata.TeradataConnManager --connect jdbc:teradata://**.***.***.**/DATABASE=***** --username ****** --password ***** --table employee --target-dir /home/****/tera_to_hdfs125 --as-avrodatafile -m 1 -- --avroschemafile <schema>

Hope this helps.
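For reference, a minimal Avro schema file to pass via the 'avroschemafile' option could look like the following; the record name and fields here are hypothetical and would need to match the actual columns of the 'employee' table.

```json
{
  "type": "record",
  "name": "employee",
  "fields": [
    {"name": "id",   "type": "int"},
    {"name": "name", "type": "string"}
  ]
}
```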