Member since: 05-09-2016
Posts: 280
Kudos Received: 58
Solutions: 31
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 3688 | 03-28-2018 02:12 PM |
| 3007 | 01-09-2018 09:05 PM |
| 1605 | 12-13-2016 05:07 AM |
| 4947 | 12-12-2016 02:57 AM |
| 4242 | 12-08-2016 07:08 PM |
02-24-2021 06:50 AM
"If your data has a range of 0 to 100000 then RMSE value of 3000 is small, but if the range goes from 0 to 1." Range going from 0 to 1 means?
03-28-2018 04:22 AM
3 Kudos
@Mushtaq Rizvi Yes. Please follow the instructions on how to add HDF components to an existing HDP 2.6.1 cluster: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1/bk_installing-hdf-on-hdp/content/upgrading_ambari.html

This is not the latest HDF, but it is compatible with HDP 2.6.1, and I was quite happy with its stability and recommend it. You would be able to add not only Apache NiFi 1.5 but also Schema Registry. NiFi Registry is part of the latest HDF 3.1.x; however, you would have to install it in a separate cluster, and that is not worth the effort for what you are trying to achieve right now. I would proceed with an HDP upgrade when you are ready for HDF 3.2, which will probably launch in the next couple of months.

In case you can't add another node to your cluster for NiFi, try to use one of the nodes that has low CPU utilization and some disk available for NiFi lineage data storage. It depends on how much lineage you want to preserve, but you should probably be fine with several tens of GB for starters.

If this response helped, please vote and accept the answer.
03-28-2018 02:25 PM
1 Kudo
Hi @Mushtaq Rizvi, that sounds like a creative, good idea. I'm glad you are working out something that others can learn from. Thanks for posting!
01-09-2018 09:05 PM
Solved it. The RowKey values were missing, as pointed out by the error:

```
at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.initRowKey(HBaseTableCatalog.scala:141)
at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog.<init>(HBaseTableCatalog.scala:152)
at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:209)
```

I created the Row object including all DataFrame columns, and then it worked.
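For anyone hitting the same error, here is a rough PySpark sketch of what the fix looks like with the shc connector the stack trace points at (the table name, column family, and column names here are hypothetical): the catalog maps one DataFrame column to the row key, and every Row must carry a non-null value for it.

```python
import json
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName("hbase-write").getOrCreate()

# Hypothetical catalog: "id" is mapped to the HBase row key. If rows
# lack a value for it, HBaseTableCatalog.initRowKey fails as in the
# stack trace above.
catalog = json.dumps({
    "table": {"namespace": "default", "name": "my_table"},
    "rowkey": "key",
    "columns": {
        "id":   {"cf": "rowkey", "col": "key",  "type": "string"},
        "name": {"cf": "cf1",    "col": "name", "type": "string"}
    }
})

# Build Rows that include ALL catalog columns, row key included.
df = spark.createDataFrame([Row(id="1", name="a"), Row(id="2", name="b")])

df.write \
    .options(catalog=catalog, newtable="5") \
    .format("org.apache.spark.sql.execution.datasources.hbase") \
    .save()
```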
10-19-2017 08:40 PM
Thanks @Timothy Spann for your answer. These links are really helpful. I used Python for Spark MLlib, so I will use the same for H2O as well.
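In case it helps anyone searching later, the H2O Python workflow ends up quite close to MLlib's. A minimal sketch (the file path, feature names, and label column are placeholders):

```python
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()  # starts or attaches to a local H2O cluster

# Placeholder CSV and column names, just to show the shape of the API.
frame = h2o.import_file("data.csv")
train, valid = frame.split_frame(ratios=[0.8], seed=42)

model = H2OGradientBoostingEstimator(ntrees=50, seed=42)
model.train(x=["feature1", "feature2"], y="label",
            training_frame=train, validation_frame=valid)
print(model.rmse(valid=True))
```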
04-22-2017 10:25 PM
Got it right. Whenever I started my Hive shell, I was getting this warning: "Hive-on-MR is deprecated in Hive 2 and may not be available in future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases." So I installed Tez (version 0.8.5) and changed Hive's execution engine to Tez. Now all Hive queries that involve a MapReduce job are running. My Hive version is 2.1.1, which I guess does not work with MapReduce. As for the regexes, thanks a lot @Ed Berezitsky, they worked.
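On the engine switch, here is a hedged Python sketch using PyHive (the host, port, username, and table are placeholders for a default HiveServer2 setup); the same SET statement works directly in the Hive shell and applies only to the current session, unlike editing hive-site.xml:

```python
from pyhive import hive

# Placeholder connection details for a default HiveServer2.
conn = hive.Connection(host="localhost", port=10000, username="hive")
cursor = conn.cursor()

# Session-level override: avoids the deprecated Hive-on-MR engine.
cursor.execute("SET hive.execution.engine=tez")
cursor.execute("SELECT COUNT(*) FROM some_table")  # hypothetical table
print(cursor.fetchall())
```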
04-20-2017 06:11 AM
@gnovak
Thanks a lot, I guess I missed that point. That has to be the reason why there is nothing in the output.
12-11-2016 10:39 AM
@Dmitry Otblesk - Please turn off maintenance mode for HDFS to allow it to start with other services after reboot.
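For anyone who prefers scripting it, a hedged sketch of the same action through the Ambari REST API (the host, cluster name, and credentials are placeholders; the Ambari UI route via the HDFS service actions menu does the same thing):

```python
import requests

AMBARI = "http://ambari-host:8080"  # placeholder Ambari server
CLUSTER = "mycluster"               # placeholder cluster name

resp = requests.put(
    f"{AMBARI}/api/v1/clusters/{CLUSTER}/services/HDFS",
    auth=("admin", "admin"),               # placeholder credentials
    headers={"X-Requested-By": "ambari"},  # required by the Ambari API
    json={
        "RequestInfo": {"context": "Turn off maintenance mode for HDFS"},
        "Body": {"ServiceInfo": {"maintenance_state": "OFF"}},
    },
)
resp.raise_for_status()
```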
12-19-2016 05:06 AM
https://support.rstudio.com/hc/en-us/articles/200488548-Problem-with-Plots-or-Graphics-Device

Post your notebook. Also make sure you are running your code in a %spark.r paragraph; hist(ed) is the format.

Almost all issues with the R interpreter turn out to be caused by an incorrectly set SPARK_HOME. The R interpreter must load a version of the SparkR package that matches the running version of Spark, and it does this by searching SPARK_HOME. If Zeppelin isn't configured to interface with the Spark in SPARK_HOME, the R interpreter will not be able to connect to Spark. See: https://github.com/datalayer/zeppelin-datalayer/issues/2

Try to run this notebook: https://github.com/apache/zeppelin/blob/de4049725d2d9565f04a981c34c3dbe18e0ecd35/notebook/2BWJFTXKJ/note.json
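A quick way to check the SPARK_HOME part is a small Python sketch like this (it assumes the standard Spark distribution layout, where the SparkR package lives under R/lib):

```python
import os

spark_home = os.environ.get("SPARK_HOME")
if not spark_home:
    print("SPARK_HOME is not set")
else:
    # Standard distribution layout: $SPARK_HOME/R/lib/SparkR
    sparkr = os.path.join(spark_home, "R", "lib", "SparkR")
    print("SPARK_HOME =", spark_home)
    print("SparkR found" if os.path.isdir(sparkr) else "SparkR missing")
```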
12-07-2016 08:01 PM
That makes perfect sense, thanks a lot.