Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1788 | 09-11-2019 10:19 AM |
| | 9427 | 11-26-2018 07:04 PM |
| | 2560 | 11-14-2018 12:10 PM |
| | 5563 | 11-14-2018 12:09 PM |
| | 3244 | 11-12-2018 01:19 PM |
07-05-2018
12:53 PM
1 Kudo
@Manikandan Jeyabal Here is a good HCC post that goes over this setup: https://community.hortonworks.com/questions/192404/kerberos-cross-realm-hdfs-access-via-spark-applica.html HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
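The linked post covers the full setup; for orientation, here is a minimal sketch of what the cross-realm trust can look like on the krb5.conf side. The realm names and KDC hosts (CLUSTER.EXAMPLE.COM, CORP.EXAMPLE.COM) are hypothetical placeholders, and you would still need the matching cross-realm krbtgt principals in both KDCs plus appropriate auth_to_local rules on the Hadoop side:

```
[realms]
  CLUSTER.EXAMPLE.COM = {
    kdc = kdc.cluster.example.com
    admin_server = kdc.cluster.example.com
  }
  CORP.EXAMPLE.COM = {
    kdc = kdc.corp.example.com
    admin_server = kdc.corp.example.com
  }

[domain_realm]
  .cluster.example.com = CLUSTER.EXAMPLE.COM
  .corp.example.com = CORP.EXAMPLE.COM

[capaths]
  # Clients in CORP can reach services in CLUSTER directly ("." = no intermediate realm)
  CORP.EXAMPLE.COM = {
    CLUSTER.EXAMPLE.COM = .
  }
```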
07-05-2018
11:57 AM
@kanna k Multiple occurrences of GC (Allocation Failure) could lead to an OutOfMemory error, but if you only see a single one, that probably isn't the problem. If you still think this could be due to memory, please review the following HCC article on how to calculate and set appropriate memory for Knox: https://community.hortonworks.com/articles/105860/how-to-calculate-and-change-jvm-heap-memory-settin.html HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
07-04-2018
10:07 PM
1 Kudo
@Zeev Grinberg It seems that feature may not be supported on Mac, based on this: https://answers.microsoft.com/en-us/mac/forum/macoffice2016-macexcel/activating-powerpivot-for-excel-2016-on-mac/f4b9c970-bfd6-4c01-a5a4-c943c08b82d5 If your question relates only to the Excel product, it may be best to ask/search on the Microsoft forum. Otherwise, if there is something in the exercise related to HDP/HDF that you need help with, please provide the details, including a link to the exercise and the steps you need assistance with. HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
07-04-2018
08:10 PM
@Andrey Pelegrini Please send an email to certification@hortonworks.com, and the Certification team will get in touch with you. HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
07-04-2018
08:08 PM
1 Kudo
@Zeev Grinberg Here is the link for the latest ODBC driver for Mac: https://s3.amazonaws.com/public-repo-1.hortonworks.com/HDP/hive-odbc/2.1.12.1017/OSX/hive-odbc-native.dmg You can find all drivers under HDP Add-Ons at the following URL: https://hortonworks.com/downloads/ HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
07-04-2018
01:54 PM
1 Kudo
@vincentV Shuffle data is serialized over the network; when it is deserialized, it can spill to memory, and that spill is aggregated into the "shuffle spill (memory)" metric you see in the UI. Please review this: http://apache-spark-user-list.1001560.n3.nabble.com/What-is-shuffle-spill-to-memory-td10158.html "Shuffle spill (memory) is the size of the deserialized form of the data in memory at the time when we spill it, whereas shuffle spill (disk) is the size of the serialized form of the data on disk after we spill it. This is why the latter tends to be much smaller than the former. Note that both metrics are aggregated over the entire duration of the task (i.e. within each task you can spill multiple times)." HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
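The deserialized-vs-serialized size gap from the quote can be illustrated in plain Python (not Spark itself): the in-memory form carries per-object overhead, while the serialized form written to disk is compact, which is why spill (disk) tends to be much smaller than spill (memory).

```python
import pickle
import sys

# 10,000 integer records standing in for shuffle data.
records = list(range(10_000))

# Approximate deserialized (in-memory) size: the list object itself
# plus each element object it references.
deserialized_size = sys.getsizeof(records) + sum(sys.getsizeof(r) for r in records)

# Serialized size, as the records would be written to disk.
serialized_size = len(pickle.dumps(records))

print(deserialized_size, serialized_size)
```

On CPython the deserialized figure comes out roughly an order of magnitude larger than the serialized one, mirroring the relationship between the two Spark UI metrics.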
07-03-2018
10:17 PM
@vivek jain Good to hear that. If you think the answer and follow-ups have helped, please take a moment to log in and mark it as "Accepted".
07-03-2018
09:08 PM
@vivek jain Could you try running the steps in the following article, including the table creation, and see if that works: https://community.hortonworks.com/articles/147327/accessing-hbase-tables-and-querying-on-dataframes.html
07-03-2018
07:21 PM
@vivek jain Please run the following from the HBase shell: hbase> scan 'tableName', {LIMIT => 5} Also check what describe prints: hbase> describe 'tableName' Make sure you are using the case-sensitive table name when referencing the table from Spark code. HTH
07-03-2018
07:10 PM
@priyal patel You may want to look at creating a custom receiver for that REST endpoint: http://spark.apache.org/docs/latest/streaming-custom-receivers.html Another, less direct, option would be to use Apache NiFi (HDF) to pull the stock exchange data into Kafka, then use Spark Kafka streaming. HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
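The heart of either approach is the same polling loop: fetch from the REST endpoint on an interval and hand each result to a downstream sink. Here is a plain-Python sketch of that loop; the Spark custom-receiver API itself is JVM-side, so the function name, the stub fetch, and the deque sink below are all illustrative, not Spark or NiFi APIs. A real fetch might wrap urllib.request.urlopen against your stock-quote endpoint.

```python
import time
from collections import deque

def poll_rest_endpoint(fetch, sink, interval_s=1.0, max_polls=None):
    """Call fetch() at a fixed interval, appending each result to sink.

    This is the pattern a custom receiver's receive thread (or a NiFi
    InvokeHTTP -> Kafka flow) implements; names here are illustrative.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        sink.append(fetch())   # in Spark this would be store(record)
        polls += 1
        time.sleep(interval_s)

# Usage with a stub fetch standing in for the real REST call.
quotes = deque()
ticks = iter([{"sym": "AAA", "px": 1.0}, {"sym": "AAA", "px": 1.1}])
poll_rest_endpoint(lambda: next(ticks), quotes, interval_s=0.0, max_polls=2)
```

In a real receiver you would also handle request failures and backoff; the NiFi route gets retry and backpressure for free, which is one reason to prefer it despite the extra hop.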