Member since: 07-30-2013
Posts: 509
Kudos Received: 113
Solutions: 123
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2975 | 07-09-2018 11:54 AM
 | 2482 | 05-03-2017 11:03 AM
 | 6004 | 03-28-2017 02:27 PM
 | 2309 | 03-27-2017 03:17 PM
 | 2028 | 03-13-2017 04:30 PM
03-02-2017
01:25 PM
Hi, It's best for systems (especially distributed systems) not to require careful ordering at startup. Instead, each process should wait a bit for any dependency process (like the master) to come up. If possible, I also suggest that this wait period be configurable, and at least 2 minutes in duration by default. There's no way for CSDs to control the ordering of start commands, since we prefer robustness to ordering. Thanks, Darren
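To make that concrete, here is a minimal sketch of the wait-for-dependency pattern in a start script (the MASTER_HOST/MASTER_PORT/WAIT_SECS names and defaults are illustrative assumptions for the example, not CSD-provided variables):

#!/bin/bash
# Wait for a dependency (e.g. the master) to accept connections before starting.
# Uses bash's built-in /dev/tcp; the wait period is configurable via WAIT_SECS.
MASTER_HOST="${MASTER_HOST:-master.example.com}"  # illustrative placeholder
MASTER_PORT="${MASTER_PORT:-7180}"                # illustrative placeholder
WAIT_SECS="${WAIT_SECS:-120}"                     # configurable; 2 minutes by default
deadline=$(( $(date +%s) + WAIT_SECS ))
until (exec 3<>"/dev/tcp/${MASTER_HOST}/${MASTER_PORT}") 2>/dev/null; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "Timed out after ${WAIT_SECS}s waiting for ${MASTER_HOST}:${MASTER_PORT}" >&2
    exit 1
  fi
  sleep 5
done
echo "Dependency is up; proceeding with startup."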
12-29-2016
10:22 AM
@zhuw.bigdata, I opened two internal Cloudera Jiras to make sure we specify that the fully-qualified domain name be used if Kerberos is enabled in the cluster. One Jira targeted the description in the HA wizard, the other Jira focused on the steps listed in our documentation. Thanks for bringing this up! Cheers, Ben
12-22-2016
02:25 PM
1 Kudo
All, The resolution to this error is to enable HDFS HA. Thanks to everyone for helping. 1- Pay attention to any Failover Controller (FC) roles that already exist on the nodes you assign as active and standby for HDFS HA; remove the FC from these nodes before enabling HDFS HA. 2- Have your JournalNode Edits Directory set up; usually it is /var/lib/jn. Once HDFS HA is enabled, you can verify it from Cloudera Manager: HDFS > Instances > Federation and High Availability (click it to see the setup), or HDFS > Configuration > search for "nameservice". In the NameNodes Nameservice field, you should see all the nodes that you assigned in HDFS HA. A command-line check is sketched below.
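As a quick sanity check from a terminal (assuming standard HDFS client tools are installed; nn1/nn2 below are example NameNode IDs, so substitute your own):

# Show the configured nameservice(s)
hdfs getconf -confKey dfs.nameservices
# Show which NameNode is active and which is standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2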
12-13-2016
04:59 PM
Hi, Venkat, maybe this will help you. https://community.cloudera.com/t5/Cloudera-Manager-Installation/Disabling-Kerberos/td-p/19654
10-17-2016
01:40 AM
Hi, my issue was solved by updating SUSE 11 SP4. I installed the updates, as the OS was in its initial state; the error was gone after that.
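For anyone else on SLES, applying the pending updates is typically along these lines (a hedged sketch; the exact repositories and patch channels depend on your environment):

# Refresh configured repositories, then apply all pending updates
zypper refresh
zypper update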
09-29-2016
11:32 PM
I wanted to share my experience in case it helps others in the community. In my case I didn't have a Spark gateway on the same node as HS2 (a mis-click on my part). The issue was that the installer would not retry in a proper way (by my standards), even after I went back and fixed the role assignment. All attempts to make the change resulted in the same error on the same step while trying to finish my cluster install. I haven't dissected the installation, but at some point prior to the 'final' steps it has already created the cluster layout and configuration in CM, and this cannot be changed through the install wizard. So I opened CM in another tab, where I could see my brand-new cluster with all services and a big error icon. I corrected my issue there, making sure the assignment matched the original intent, with the Spark gateway on the HS2 node as well. Then I went back to the other tab with the install wizard and hit 'Retry', and I was able to proceed and finish my install. This is how you need to handle any configuration errors that come up once the configuration has already been committed to the CM database.
09-27-2016
01:49 PM
NP, and thanks! I was also confused at first: when adding a new host in the CM UI, there is a step in which parcels are downloaded, distributed, and activated on all hosts. That made me think managing parcels was a required step when adding new hosts via the API.
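For reference, a hedged sketch of adding a host purely through the API with curl (host names, credentials, cluster name, and the API version are placeholders; required fields can vary by CM release, so check the API docs for your version):

# 1. Register the new host with Cloudera Manager
curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d '{"items": [{"hostId": "new-host.example.com", "hostname": "new-host.example.com", "ipAddress": "10.0.0.42"}]}' \
  "http://cm-host.example.com:7180/api/v13/hosts"

# 2. Attach it to an existing cluster; note there is no explicit parcel step here,
#    since parcels are distributed/activated at the cluster level
curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d '{"items": [{"hostId": "new-host.example.com"}]}' \
  "http://cm-host.example.com:7180/api/v13/clusters/Cluster%201/hosts"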
09-07-2016
03:35 PM
1 Kudo
Hi, CM will continually re-try client config deployment. This is helpful in particular if the host is temporarily not available and comes online later. It makes it easier for the administrator to reason about the state of client configs, so you don't have to worry about re-executing the command on a few random hosts that weren't operational at the time of deploy. So the retries are intended. Ideally, the deploy scripts should be so simple that they can't really fail. If you're just debugging your changes, then you can stop the CM agent (service cloudera-scm-agent stop) on that host to stop the wiping / retry logic and make it easier to debug. Thanks, Darren
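For example, while debugging a deploy script on a single host:

# Pause the agent so CM stops wiping/retrying while you debug
service cloudera-scm-agent stop
# ... inspect or tweak the generated client configs here ...
# Resume normal operation when finished
service cloudera-scm-agent start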
08-18-2016
05:42 PM
1 Kudo
I see. Single User Mode is a CM server setting. There's no way for a CSD or parcel to change a CM server setting, but it can be set via the REST API. If you are wrapping the CM installation and want to turn on Single User Mode, you can make that REST call, but keep in mind there are quite a few other things you'd need to handle, as discussed in the documentation you already linked (http://www.cloudera.com/documentation/enterprise/latest/topics/install_singleuser_reqts.html).
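A hedged sketch of such a REST call (the property name below is a deliberate placeholder, not the actual setting, and the CM host, credentials, and API version are also assumptions; consult the API docs for the real config name):

# Update a CM server setting via PUT /cm/config
curl -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"items": [{"name": "<single_user_mode_property>", "value": "true"}]}' \
  "http://cm-host.example.com:7180/api/v13/cm/config"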
08-10-2016
02:48 PM
Hello Audi et al, I have seen the error below when the lock files in the Derby database directory are owned by a user other than sqoop2. This can happen if you manually connect to the database using the Derby "ij" tool.

Server startup failure
org.apache.sqoop.common.SqoopException: JDBCREPO_0007:Unable to lease link
 at org.apache.sqoop.repository.JdbcRepositoryTransaction.begin(JdbcRepositoryTransaction.java:64)
 <SNIP>
 at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
 at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:1
Caused by: java.util.NoSuchElementException: Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.

# ls -l
total 24
-rw-r--r-- 1 root   root      4 Aug 10 11:28 dbex.lck  <======
-rw-r--r-- 1 root   root     38 Aug 10 11:28 db.lck    <======
drwxr-xr-x 2 sqoop2 sqoop2 4096 Aug 10 10:15 log
drwxr-xr-x 2 sqoop2 sqoop2 4096 Aug 10 10:15 seg0
-rw-r--r-- 1 sqoop2 sqoop2  853 Aug 10 10:15 service.properties
drwxr-xr-x 2 sqoop2 sqoop2 4096 Aug 10 13:49 tmp

To get past this issue, do the following:
1. Ensure that there are no connections to the database.
2. Address the lock files:
   Option 1: chown sqoop2:sqoop2 *.lck
   Option 2: rm -rf *.lck
3. Start Sqoop2.

Hope this helps, Markus Kemper - Cloudera Support