Member since: 03-04-2015
Posts: 96
Kudos Received: 12
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7663 | 01-04-2017 02:33 PM
 | 14840 | 07-17-2015 03:11 PM
07-21-2017 01:29 PM
That's good news. But I think the requester would like to know when Cloudera plans to integrate Spark 2 into CDH proper, not as a separate install (the way Hortonworks does). Thanks, Miles
07-12-2017 06:35 PM
On HDP 2.6, appending $CLASSPATH seems to break the Spark2 interpreter with:

"org.apache.zeppelin.interpreter.InterpreterException: Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;"

Is the included Phoenix-Spark driver (phoenix-spark-4.7.0.2.6.1.0-129.jar) certified to work with Spark2? I thought it was the preferred route rather than going through JDBC. Thanks!
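For reference, the workaround we are experimenting with (an assumption on our side, not a certified recipe) is to hand the Phoenix-Spark jar to Spark2 explicitly via --jars instead of appending it to $CLASSPATH, which can drag in a mismatched Scala runtime. Sketched against a throwaway directory standing in for the real Phoenix client directory (actual path assumed):

```shell
# Sketch (assumption, not certified): pass the Phoenix-Spark jar via
# --jars rather than $CLASSPATH. A temp dir with a dummy jar stands in
# for the real Phoenix client directory on an HDP node.
PHOENIX_HOME=$(mktemp -d)   # stand-in for e.g. /usr/hdp/current/phoenix-client
touch "$PHOENIX_HOME/phoenix-spark-4.7.0.2.6.1.0-129.jar"

# Pick up whatever phoenix-spark jar the install shipped
PHOENIX_SPARK_JAR=$(ls "$PHOENIX_HOME"/phoenix-spark-*.jar | head -n 1)

# Hypothetical invocation -- app name and class are placeholders:
echo "would run: spark-submit --jars $PHOENIX_SPARK_JAR ..."
```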
07-07-2017 03:45 PM
I had the same problem with a valid Linux/HDFS user as Ambari ID, the solution worked - thanks!
05-22-2017 06:36 PM
Debian 8 (Jessie) has been the current stable version for about a year now. When do you plan to support it? Is there any known issue blocking its adoption?
05-22-2017 06:09 PM
We have HDP 2.4 on Debian 7. There is no /usr/lib/python2.6/site-packages/ambari_server/os_type_check.sh installed, only os_check_type.py, and all it checks is whether the current node's OS matches the cluster's, not whether the OS version is supported. /usr/lib/ambari-server/lib/ambari_commons/resources/os_family.json seems to list the supported OS versions (e.g. RedHat 6/7, Debian 7, Ubuntu 12/14), which matches the documentation.
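A quick way to dump what that JSON lists, for anyone checking their own install. The tiny sample below is generated inline so the sketch runs anywhere; point OS_FAMILY_JSON at the real file on an Ambari host, and note the "versions" field name is inferred from what we saw, not from documentation:

```shell
# Sketch: print the OS families and versions listed in os_family.json.
# OS_FAMILY_JSON defaults to a generated illustrative sample; on a real
# host set it to /usr/lib/ambari-server/lib/ambari_commons/resources/os_family.json
OS_FAMILY_JSON="${OS_FAMILY_JSON:-$(mktemp)}"
[ -s "$OS_FAMILY_JSON" ] || cat > "$OS_FAMILY_JSON" <<'EOF'
{"redhat": {"versions": [6, 7]}, "debian": {"versions": [7]}, "ubuntu": {"versions": [12, 14]}}
EOF

# Parse and print one family per line ("versions" key is an assumption)
python3 - "$OS_FAMILY_JSON" <<'EOF'
import json, sys
with open(sys.argv[1]) as f:
    families = json.load(f)
for name in sorted(families):
    print("%s: %s" % (name, families[name].get("versions", [])))
EOF
```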
03-16-2017 09:20 AM
2 Kudos
First, thanks for the helpful, detailed explanation. We face a similar issue migrating from the default embedded DB to a separate PostgreSQL instance. Some comments:

- The documentation needs to be clearer: the criteria you listed for determining "embeddedness" are not intuitive and could not have been inferred from the documentation. Your writeup should be included right there.
- The embeddedness criteria seem over-strict. Insisting the DB sit off-cluster rests on the old 3-tier architecture assumption, whereas the Hadoop architectural principle is to co-host data and software. On the practical side, basing such a central component off-cluster just seems needlessly inefficient and difficult to manage. Couldn't the best practice be to use one dedicated node for CM, CMS, and the DB? Can Cloudera provide some guidelines?
- For production use, the external DB option requires too many manual steps across multiple services. Can Cloudera Manager provide more central administration and integration, including transparent migration from the embedded DB? This again requires the DB node to be part of the cluster under CM management.

Thanks, Miles Yao
01-19-2017 09:33 PM
Can you elaborate a bit on how to set up the environment properly in the shell wrapper before calling spark-submit? Which login should the action run as (owner/yarn/spark/oozie)? We had a lot of problems getting the setup right when we implemented shell actions that wrap Hive queries (to post-process the query output). spark-submit is itself a shell wrapper that does a lot of environment initialization, so I imagine it won't be smooth. Thanks! Miles
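For context, our current wrapper attempt looks roughly like this; every path below is a guess for a typical HDP node, and the final spark-submit line is a hypothetical placeholder:

```shell
#!/bin/sh
# Sketch of a shell-action wrapper that sets up the environment before
# handing off to spark-submit. All default paths are assumptions for a
# typical HDP node -- adjust to your layout.
export SPARK_HOME="${SPARK_HOME:-/usr/hdp/current/spark2-client}"
export HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-/etc/hadoop/conf}"
export PATH="$SPARK_HOME/bin:$PATH"

# ...then hand off (hypothetical class and jar names):
# exec "$SPARK_HOME"/bin/spark-submit --class com.example.MyApp my-app.jar "$@"
```

The open question remains which effective user this runs as inside the Oozie launcher; the exports above only cover the path side of the setup.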
01-04-2017 02:33 PM
We were able to install the official parcel. The only problem we encountered was that all the signature files in the repository have the extension .sha1, while our CM (5.8.3) was expecting .sha. Manually renaming them allowed the install to complete.
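The rename itself is a one-liner loop; sketched here against a throwaway directory with a dummy file name (the real files live wherever you host the parcel repo):

```shell
# Sketch: rename .sha1 signature files to the .sha extension that
# CM 5.8.3 expects. Demonstrated on a temp dir; the parcel file name
# is a dummy stand-in.
repo=$(mktemp -d)
touch "$repo/SPARK2-dummy.parcel.sha1"

for f in "$repo"/*.sha1; do
    [ -e "$f" ] || continue          # no matches: skip the literal glob
    mv "$f" "${f%.sha1}.sha"         # strip .sha1 suffix, append .sha
done
ls "$repo"
```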
12-13-2016 01:18 PM
Hi Cloudera folks: The new official Spark2 release looks identical to the beta version released last month. Any difference to expect if we already have the beta installed? Should we re-install? Thanks, Miles
Labels:
- Apache Spark
11-08-2016 10:03 AM
Yes, that works. "CSD file" sounds like a text config file; adding a brief note to the instruction page that it is actually a JAR would have clarified things. Thanks again. Miles