Member since: 09-09-2014
Posts: 17
Kudos Received: 1
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 41614 | 01-05-2015 01:24 PM
05-06-2019 05:56 AM
Since the question was asked, the situation has changed. When Hortonworks and Cloudera merged, NiFi became supported by Cloudera. Shortly afterwards the integrations with CDH were also completed, so NiFi is now a fully supported and integrated component. Hence the existing answer already points you in the right direction: NiFi is likely the best fit for solving these use cases.
05-02-2018 03:51 PM
Where is this Kerberos Credentials tab in 5.14.3? I have a message that says to click the same "Generate Credentials" button but can't find it. Edit: Found it under Administration > Security > Kerberos Credentials
03-22-2018 05:59 AM
Looks like you already have another thread opened: http://community.cloudera.com/t5/Batch-SQL-Apache-Hive/Hive-Safety-Valve-configuration-is-not-applied-HiveConf-of-name/td-p/64037 Will follow up there.
10-10-2016 05:51 AM
What you'll usually find is that a Flume agent (not to be confused with Flume itself) will be set up through Cloudera Manager, which executes it and runs it as a service there.
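As a rough illustration, an agent run this way is driven by a properties file like the sketch below. All the names (agent1, src1, ch1, sink1) and paths are placeholders; in CDH the equivalent text is normally entered in the Flume service's configuration field in Cloudera Manager rather than edited on disk:

```properties
# Hypothetical minimal Flume agent configuration: tail a log file into HDFS
# through an in-memory channel. Names and paths are illustrative only.
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Source: run a command and ingest its output line by line
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/app/app.log
agent1.sources.src1.channels = ch1

# Channel: buffer events in memory between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000

# Sink: write events to HDFS
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs:///flume/events
agent1.sinks.sink1.channel = ch1
```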
10-15-2015 06:14 PM
Dear Friends
We need your help. We have recently updated our domain name/IP in the Kerberos Active Directory authentication settings in CM. Now we have the following health issues and cannot start our cluster or the CM service. Any help is much appreciated!
We have two NameNodes, one of which was added for High Availability.
Thanks much in advance and please let me know if you have any question.
Kind regards
Andy
For YARN's Job History Server, here is the error:
This role's process exited. This role is supposed to be started.
Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/1/dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
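Since the stack trace complains that the storage directory does not exist or is not accessible, a quick first sanity check is to test the path on the NameNode host. This is only a sketch: the helper name is made up, and the path is the one from the error message above; on a real cluster you would also confirm it is owned by the hdfs user.

```shell
# check_dir is a hypothetical helper; /data/1/dfs/nn is the directory named
# in the InconsistentFSStateException. It only reports whether the path is
# a readable directory.
check_dir() {
  if [ -d "$1" ] && [ -r "$1" ]; then
    echo "ok: $1"
  else
    echo "missing or unreadable: $1"
  fi
}

check_dir /data/1/dfs/nn
```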
Here is the list of health issues:
cluster
HBase
HBase Master Health
catalogserver (name_node)
StateStore Connectivity
impalad (first_data_node)
StateStore Connectivity, Impala Daemon Ready Check, Web Server Status
impalad (2nd_data_node)
StateStore Connectivity, Impala Daemon Ready Check, Unexpected Exits, Web Server Status
impalad (3rd_data_node)
StateStore Connectivity, Impala Daemon Ready Check, Unexpected Exits, Web Server Status
jobhistory (name_node)
Process Status
master (name_node)
Process Status
oozie_server (name_node)
Web Server Status
----
So, here are the errors for the other services in detail:
A) Hdfs also has these two health issues:
1) NameNode summary: <name_node_name> (Availability: Standby, Health: Good), <2nd name node> (Availability: Stopped, Health: Bad).
This health test is bad because the Service Monitor did not find an active NameNode.
2) Details: Canary test failed to create parent directory for /tmp/.cloudera_health_monitoring_canary_files.
B) Oozie error:
The Cloudera Manager Agent is not able to communicate with this role's web server.
log entry:
ERROR org.apache.oozie.servlet.V0AdminServlet
SERVER[<name_node>] USER[hue] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] URL[GET http://<name_node>:11000/oozie/v0/admin/instrumentation] error, null java.lang.UnsupportedOperationException
C) HBase has two errors:
1) HBase Master Health
Master summary: <name_node> (Availability: Unknown, Health: Bad). This health test is bad because the Service Monitor did not find an active Master.
2) master (<short_name_node>)
Process Status
This role's process exited. This role is supposed to be started.
ERROR org.apache.hadoop.hbase.master.HMasterCommandLine
Master exiting
java.lang.RuntimeException: HMaster Aborted
------------
And here is the CM health issue with the error details, thank you.
The Reports Manager is not running.
This role's status is as expected. The role is stopped.
WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper
SQL Error: 0, SQLState: null
3:56:25.884 PM ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper
Connections could not be acquired from the underlying database!
3:56:25.884 PM WARN com.mchange.v2.resourcepool.BasicResourcePool
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@10ABCD -- Acquisition Attempt Failed!!! Clearing pending acquires.
While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (5). Last acquisition attempt exception:
ERROR com.cloudera.headlamp.HeadlampServer
Unable to upgrade schema to latest version.
org.hibernate.exception.GenericJDBCException: Could not open connection
01-25-2015 03:29 AM
You would need to add the machine to the cluster (so it has a working agent and the parcels) and then enable HA using CM (HDFS service -> Enable High Availability). If the node is healthy and all the prerequisites are met, HA should be enabled very easily. It does require a full cluster restart though, so try to perform this outside of business hours.
01-05-2015 12:54 PM
Thanks very much, Romain, for your time and attention. A couple of quick questions: can you help me with the steps I need to take from here? Last week I created a new post at http://community.cloudera.com/t5/Data-Ingestion-Integration/KMS-AuthenticationToken-ignored-Invalid-signature-A009-HTTP/m-p/23238

I guess our issue is related to the bug https://issues.apache.org/jira/browse/HADOOP-11151. Is there a way you can help me find out whether this issue is going to be fixed in 5.3.1, as it seems it is not fixed in 5.3? If my assumption about the existing bug is correct, I hope I can find a safe workaround for it.

P.S. I can run the Pig job using grunt with no problem, so maybe it is something related to the way Hue deals with the delegation token (owner=my_active_dir_user, realuser=oozie/ourserver@our_realm), as you can see from the error message in the new post above.

Appreciate your professional support.
Kind regards
Andy
01-01-2015 05:38 PM
Dear Friends

Based on my original post at http://community.cloudera.com/t5/Web-UI-Hue-Beeswax/Pig-script-in-Hue-START-RETRY-status/m-p/23161 and the last response ("This is probably related to KMS; as stated somewhere else, I hope they will be able to help about that!"), I think the issue (Start_Retry halt status in the Pig Editor in Hue) is due to a recent bug described at https://issues.apache.org/jira/browse/HADOOP-11151

We have recently upgraded both CM and CDH from 5.2 to 5.3, but the bug still exists, so it seems it has not been completely fixed: the error messages mentioned in the URL above are exactly the same as the error messages we got, shown below. Would you please let us know whether my understanding is correct, and if not, how you suggest fixing the issue? If yes, does anybody know whether it is going to be fixed in a future release, i.e. 5.3.1? Thanks very much for your attention.

P.S. More on KMS: http://hadoop.apache.org/docs/current/hadoop-kms/index.html

Kind regards
Andy

----------
From /var/log/hadoop-kms/kms.log:
2015-01-01 16:25:46,866 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: AuthenticationToken ignored: org.apache.hadoop.security.authentication.util.SignerException: Invalid signature

---------
From the Pig job's log in Hue (Oozie dashboard/workflows -> log tab):
2015-01-01 16:25:46,875 WARN org.apache.oozie.command.wf.ActionStartXCommand: SERVER[our_name_node.com] USER[my_active_dir_user_name] GROUP[-] TOKEN[] APP[pig-app-hue-script] JOB[0000012-141228125521164-oozie-oozi-W] ACTION[0000012-141228125521164-oozie-oozi-W@pig] Error starting action [pig]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: HTTP status [403], message [Forbidden]]
org.apache.oozie.action.ActionExecutorException: JA009: HTTP status [403], message [Forbidden] ...
Caused by: java.io.IOException: HTTP status [403], message [Forbidden]
    at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:223)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:145)
    at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:346)
    at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:799)
    at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)