Member since: 03-16-2016
Posts: 707
Kudos Received: 1752
Solutions: 203

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3910 | 09-21-2018 09:54 PM
 | 4943 | 03-31-2018 03:59 AM
 | 1554 | 03-31-2018 03:55 AM
 | 1763 | 03-31-2018 03:31 AM
 | 3916 | 03-27-2018 03:46 PM
10-20-2016
10:36 PM
3 Kudos
@Hajime https://github.com/apache/ambari If it helped, please vote and accept as the best answer.
10-20-2016
09:00 PM
@Eyad Garelnabi The documentation may need to be updated, but with newer kernels swappiness does not need to be set to 0. Read this article from @emaxwell: https://community.hortonworks.com/articles/33522/swappiness-setting-recommendation.html
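For reference, a minimal sketch of checking and adjusting the setting on a RHEL/CentOS host; the value 1 follows the linked article's recommendation for newer kernels:

```
# Check the current swappiness value
cat /proc/sys/vm/swappiness

# Set a low-but-nonzero value at runtime
sudo sysctl -w vm.swappiness=1

# Persist the setting across reboots
echo "vm.swappiness=1" | sudo tee -a /etc/sysctl.conf
```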
10-19-2016
02:09 AM
4 Kudos
@Rogerio Biondi You added it only in the session context of the user you were using when you created the temporary function. If you want the function to be available to any user, create it as a permanent function (drop the "temporary" keyword): add jar hdfs:/user/myuser/Test-0.0.1-SNAPSHOT.jar; create function test as 'Test'; This function will be available to all users until you restart HiveServer2.
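A minimal sketch of the same thing through beeline (the JDBC URL and jar path are examples; Hive 0.13+ also supports the USING JAR clause shown here, which records the jar along with the function definition):

```
# Hypothetical beeline session; adjust the JDBC URL for your cluster
beeline -u "jdbc:hive2://localhost:10000" -e "
  CREATE FUNCTION test AS 'Test'
  USING JAR 'hdfs:///user/myuser/Test-0.0.1-SNAPSHOT.jar';
"
```

+++ Pls vote and accept best answer, if any.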
10-19-2016
01:53 AM
5 Kudos
@Cody Betsworth Ubuntu is not supported for deployments. Please read the last lines of this page: https://github.com/apache/incubator-metron/tree/master/metron-deployment. "Support Ubuntu deployments" is in the TODO list:

TODO
- migrate existing MySQL/GeoLite playbook
- Support Ubuntu deployments

+++ If the response was helpful, please vote and accept as the best answer.
10-18-2016
02:15 PM
4 Kudos
@Sankaraiah Narayanasamy To include the Spark-on-HBase connector as a standard Spark package, pass it on the command line to spark-shell, pyspark, or spark-submit: $SPARK_HOME/bin/spark-shell --packages zhzhan:shc:0.0.11-1.6.1-s_2.10 You can also include the package as a dependency in your SBT build file; the format is spark-package-name:version: spDependencies += "zhzhan/shc:0.0.11-1.6.1-s_2.10" You can also use it as a Maven dependency. All options are possible.
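A quick sketch of the launcher invocations (the class and jar names in the second command are hypothetical placeholders):

```
# Interactive shell with the connector resolved from spark-packages.org
$SPARK_HOME/bin/spark-shell --packages zhzhan:shc:0.0.11-1.6.1-s_2.10

# The same flag works for batch jobs
$SPARK_HOME/bin/spark-submit --packages zhzhan:shc:0.0.11-1.6.1-s_2.10 \
  --class com.example.MyApp myapp.jar
```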
10-18-2016
03:31 AM
4 Kudos
@Mourad Chahri Please check these articles: a) CentOS/RHEL: https://linuksovi.blogspot.com/2015/11/increase-size-of-root-partition-in.html b) Debian: https://devops.profitbricks.com/tutorials/increase-the-size-of-a-linux-root-partition-without-rebooting/
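If the root volume is LVM-backed (common on CentOS/RHEL), the core steps from those articles look roughly like this; the device name is an example and the volume group must already have free space:

```
# Grow the logical volume by 10 GB (example device name)
sudo lvextend -L +10G /dev/mapper/centos-root

# Grow the filesystem to match: xfs_growfs for XFS (CentOS 7 default),
# resize2fs for ext4
sudo xfs_growfs /
```

++++ If this helped, please vote/accept as best answer.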
10-18-2016
03:29 AM
5 Kudos
@Prakash Dev Kumar B It is not supported. Amazon (AWS) Linux is a derivative of CentOS, but presents a few differences. This page lists the operating systems supported by the latest Ambari, and the list is pretty much the same for older versions: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-installation/content/operating_systems_requirements.html +++ If the response clarified things, please vote/accept best answer.
10-18-2016
03:06 AM
5 Kudos
@Sankaraiah Narayanasamy That is supported. I am sure you have researched using this connector. This article points out that Spark 1.6.1 is supported, but in practice it works with any Spark version since 1.2: http://hortonworks.com/blog/spark-hbase-dataframe-based-hbase-connector/. The GitHub repo confirms the same. Look at the pom.xml: https://github.com/hortonworks-spark/shc/blob/master/pom.xml, properties section: <properties> <spark.version>1.6.1</spark.version> <hbase.version>1.1.2</hbase.version> </properties> Use the Spark-on-HBase connector as a standard Spark package.
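A quick way to confirm those versions yourself (assumes git and grep are available locally):

```
# Pull the connector source and check the build properties
git clone https://github.com/hortonworks-spark/shc.git
grep -E '<spark.version>|<hbase.version>' shc/pom.xml
```

+++ If the response was helpful, please vote and accept the best answer.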
10-15-2016
03:32 AM
4 Kudos
@SBandaru Yes. It is technically possible and already done in practice. HDP 2.5 includes two versions of Spark: 1.6.2 at production level and 2.0 as a technical preview. They co-exist, each with its own timeline server. You can add Spark 2.0 using the Ambari UI and "Add Service". In this case the reason is to provide a preview of Spark 2.0; whether it makes sense for you is a business decision. If any of the responses was helpful, don't forget to vote/accept the best answer.
10-11-2016
01:31 AM
6 Kudos
@Rajkumar Singh All broker configurations can be found in Kafka's conf folder. Broker configurations are stored in files named like server.properties; there is one such file per broker, usually named server1.properties, server2.properties, etc.
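On an HDP node the config directory is typically /etc/kafka/conf; adjust the path for your install. A quick way to inspect the key broker settings:

```
# List the Kafka config files (typical HDP location; adjust as needed)
ls /etc/kafka/conf/

# Show the core broker settings
grep -E '^(broker\.id|listeners|log\.dirs|zookeeper\.connect)=' /etc/kafka/conf/server.properties
```

+++++++++ If any of the responses was helpful, please don't forget to vote and accept the best answer to your question.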