Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 27047 | 03-03-2020 08:12 AM |
| | 17068 | 02-28-2020 10:43 AM |
| | 4957 | 12-16-2019 12:59 PM |
| | 4717 | 11-12-2019 03:28 PM |
| | 7051 | 11-01-2019 09:01 AM |
06-13-2018
09:15 AM
@don1123 API calls cannot use SAML for authentication, so a "local" database login is required for authentication to succeed. This means you need a user/password created in Cloudera Manager in order to use the API. You can test your "local" (non-SAML) authentication in Cloudera Manager by navigating to the following URL:

http://cm_host:cm_port/cmf/localLogin

This will bypass SAML authentication and allow you to log in as a user who is in the Cloudera Manager database.
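A quick way to confirm that such a local account also works for API calls is a basic-auth request from the command line. This is a minimal sketch; the host, port, API version, and the "apiuser" account are placeholders, not values from this thread:

```bash
# Hypothetical local CM account and endpoint; substitute your own host, port, and API version.
# The CM API authenticates against the local user database, not SAML.
curl -u apiuser:password "http://cm_host:7180/api/v19/clusters"
```

If the credentials are valid, you should get a JSON listing of clusters back instead of a 401.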
06-12-2018
11:33 PM
Hi, I am facing the same problem. Can you show the steps to solve it?
05-31-2018
08:15 AM
Thanks for your reply. The following link leads me to believe that what I'm seeing is expected: https://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/Spark-gateway-not-starting/m-p/49021
05-30-2018
09:59 AM
@jquevedo, no problem. The only answer we can give without knowing the details of the proposed use cases is "it depends": it depends on how the Hadoop cluster realm and the user realms are configured, and on which client OS they are using to access Hadoop resources.
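As a concrete illustration of what "depends on the realms" means in practice, here is a minimal sketch of how a client in one realm would test access to a cluster in another realm. The realm names and principal are made up for this example; whether it works hinges on cross-realm trust and the cluster's auth_to_local rules:

```bash
# Hypothetical realms: USERS.EXAMPLE.COM (user accounts) and HADOOP.EXAMPLE.COM (cluster).
# Obtain a ticket in the user realm:
kinit alice@USERS.EXAMPLE.COM
# Inspect the ticket cache to confirm which realm the ticket belongs to:
klist
# Then try a simple cluster operation; it only succeeds if the cluster trusts
# USERS.EXAMPLE.COM and maps the principal to a local user:
hdfs dfs -ls /
```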
05-11-2018
10:08 AM
Hello @bgooley, thanks for the reply. I have tried different browsers (Chrome and Safari), but no luck. I followed the steps in the @manuroman link, changed the configuration, and restarted the browser, but I am still getting the error:

HTTP ERROR 401
Problem accessing /index.html.
Reason:
Powered by Jetty://

Please let me know if you need any more details. One quick question: is there any chance of solving the issue by restarting the NameNode or Cloudera Manager (or the whole cluster)? Would that work? Thanks & Regards, Krishna
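One way to separate browser configuration problems from server-side problems is to request the same URL with curl and a Kerberos ticket. This is a sketch, assuming the 401 comes from a SPNEGO-protected NameNode web UI; the principal, host, and port are placeholders:

```bash
# Get a Kerberos ticket first (hypothetical principal).
kinit your_user@YOUR.REALM
# --negotiate makes curl perform SPNEGO; "-u :" tells it to use the ticket
# cache rather than prompting for a password.
curl --negotiate -u : -v "http://namenode_host:50070/index.html"
```

If curl gets a 200 while the browser still gets a 401, the server side is fine and the remaining work is most likely the browser's trusted-URI configuration; restarting the NameNode or the cluster would not change that.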
05-09-2018
07:55 AM
Could you please mark it as answered so the community will benefit?
05-05-2018
01:59 PM
I got the same problem when I put the Java files in my home directory. It only worked after I put the Java files in the /usr/java directory.
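For anyone hitting the same thing, here is a minimal sketch of placing the JDK under /usr/java, which is one of the default locations Cloudera Manager searches for a JDK. The archive name and version below are placeholders for whichever JDK you downloaded:

```bash
# Hypothetical JDK tarball; substitute the file you actually downloaded.
sudo mkdir -p /usr/java
sudo tar -xzf jdk-8uXXX-linux-x64.tar.gz -C /usr/java
# Optionally make JAVA_HOME explicit for shell sessions:
export JAVA_HOME=/usr/java/jdk1.8.0_XXX
```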
04-30-2018
07:15 AM
It works for me! Thanks.
04-23-2018
09:23 AM
Sadly this doesn't work for me: my Navigator DB doesn't have this table. But I'm not upgrading; I'm using Oracle in the first place. Maybe it's another kind of installation error? Or do I need a software upgrade?

$ rpm -qa | grep cloudera
cloudera-manager-agent-5.12.1-1.cm5121.p0.6.el6.x86_64
cloudera-manager-daemons-5.12.1-1.cm5121.p0.6.el6.x86_64
cloudera-manager-server-5.12.1-1.cm5121.p0.6.el6.x86_64

3:33:19.872 PM ERROR SolrCore [qtp1384454980-63]: org.apache.solr.common.SolrException: Cursor functionality requires a sort containing a uniqueKey field tie breaker
at org.apache.solr.search.CursorMark.<init>(CursorMark.java:104)
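For context, that Solr exception is about deep-paging requests: when cursorMark is used, the sort parameter must end with the collection's uniqueKey field as a tie breaker. Below is a sketch of a well-formed cursor query; the host, collection, and field names are placeholders, not anything from Navigator's own schema:

```bash
# Hypothetical collection and fields; the key point is that the sort ends with
# the uniqueKey field ("id" here) so the cursor has a deterministic order.
curl "http://solr_host:8983/solr/my_collection/select?q=*:*&rows=100&sort=timestamp+asc,id+asc&cursorMark=*"
```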
04-13-2018
05:01 PM
It depends on which CDH version you are upgrading from. You need to look at the services your cluster includes and whether their versions change, for example Spark, HDFS, YARN, and so on. For example, when I upgraded my cluster from 5.5.4 to 5.13.0, I mainly cared about the Spark jobs, since the Spark version changed and the job dependencies had to be updated; we also made minor changes to refresh the Hive tables. I would recommend going to the latest major version minus one, so 5.13, and using its latest minor release, so I recommend 5.13.3.