Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 1987 | 07-09-2019 12:53 AM |
| | 11943 | 06-23-2019 08:37 PM |
| | 9197 | 06-18-2019 11:28 PM |
| | 10190 | 05-23-2019 08:46 PM |
| | 4611 | 05-20-2019 01:14 AM |
02-17-2016
06:36 PM
Parcel deployment on an existing cluster does not rely on SSH logins; only installation of new agent hosts does, during which a form is presented to collect those credentials. The way CM deploys parcels is by having the CM server download the parcel to its configured local parcel repository directory, and this file is then sent to the agents via the HTTP service that CM runs (agents are instructed to download it from the CM server, which now acts as the repository). The entire procedure relies only on connected agents and involves no extra login requirements. It would help if you could share a snippet of the message you see, or explain your workflow in more detail (especially if the failure you observe is consistent).
02-17-2016
06:33 PM
Spark is a built-in component of CDH and moves with the CDH version releases. There is no way to downgrade just a single component of CDH, as the components are built to work together in the versions carried. Do the extensions not work with 1.5? Are there incompatible changes you're observing (between the claimed support for 1.4.1 vs. 1.5.0) that are causing it to fail?
02-17-2016
06:31 PM
CM 5.5.3 is out now with a fix, and CDH 5.5.2 has been re-released. Read more at http://community.cloudera.com/t5/Community-News-Release/ANNOUNCE-Cloudera-Enterprise-5-5-2-and-Cloudera-Manager-5-5-3/m-p/37580#M107
02-15-2016
11:48 PM
CDH's base release versions are just that: base. The fix for the harmless log print caused by HDFS-7931 is present in all CDH5 releases since CDH 5.4.1. If you see that error in the context of a configured KMS, it's worth investigating; if you do not use a KMS or encryption zones, the error may be ignored. Alternatively, upgrade to the latest CDH5 (5.4.x or 5.5.x) release to receive the bug fix, which makes the error appear only when a KMS is configured over an encrypted path. Per your log snippet, I don't see a problem (the canary does not appear to be failing?). If you're trying to report a failure, please send us more characteristics of the failure, as HDFS-7931 is a minor issue with an unnecessary log print.
02-15-2016
09:03 PM
Please use "sqoop" and not "sqoop2". The latter launches the Sqoop2 shell, but the command you've written uses Sqoop 1.x features.
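For example, a typical Sqoop 1.x import is invoked as follows (the JDBC URL, user, and table here are hypothetical):

```bash
# Sqoop 1.x client invocation; note the command is "sqoop", not "sqoop2"
sqoop import \
  --connect jdbc:mysql://db.example.com/salesdb \
  --username dbuser \
  -P \
  --table orders
```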
02-14-2016
04:43 AM
1 Kudo
Setting the "oozie.action.launcher.mapreduce.job.ubertask.enable" property to "false" via the "oozie-site.xml" Service safety valve in CM serves as a workaround for this bug. The fix is otherwise included in CDH 5.5.2 and higher (where the workaround is no longer required).
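For reference, the entry placed in the oozie-site.xml safety valve would look like this:

```xml
<property>
  <name>oozie.action.launcher.mapreduce.job.ubertask.enable</name>
  <value>false</value>
</property>
```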
02-11-2016
10:20 PM
The client message of "Running job" only indicates that the RM has accepted the job and is trying to run it. A transition from NEW -> ACCEPTED in the RM means the RM has found the application fit to run (all rules comply, parameters are OK, etc.) and is now awaiting scheduling of resources on the cluster. When resources become available, the application transitions from ACCEPTED -> RUNNING. Hanging in the ACCEPTED state is a common problem on small (especially pseudo-distributed) clusters, where yarn.nodemanager.resource.memory-mb and/or yarn.nodemanager.resource.cpu-vcores may be set too low to satisfy a job's resource requests. For most purposes, increasing these values on the NodeManagers (and restarting the NodeManagers to apply them) resolves such an infinite wait.
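If you manage yarn-site.xml directly rather than through CM, a minimal sketch of the two properties follows; the values are illustrative and should be sized to your host's actual memory and cores:

```xml
<!-- yarn-site.xml on each NodeManager; example values only -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
```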
02-11-2016
09:29 PM
1 Kudo
I prefer using the simpler bash syntax of special escaped characters, if it helps. We know that ^M is the same as \r, which makes sense if you used Windows Notepad to write the commands but forgot to convert the file via dos2unix:

```bash
~> echo $'\x0d' | cat -v
^M
~> echo -n $'\x0d' | od -c
0000000  \r
0000001
```

(The \x0D or \x0d is the hex equivalent of \r, per http://www.asciitable.com/ (carriage return).)

Therefore, you can use the $'' syntax to write a string that includes the escape:

```bash
~> hadoop fs -ls $'/a/b/c/d/20160206\r'
```

Or:

```bash
~> hadoop fs -ls $'/a/b/c/d/20160206\x0d'
```

This works well regardless of the terminal emulator you are using, because we're escaping based on representation rather than relying on the emulator understanding the characters via input.
02-04-2016
01:30 AM
1 Kudo
It's easy to get carried away by the words "namespace separation" and what they truly mean in a federation setup; it's not as simple as it sounds. If you have 3 NNs, you have 3 NNs with a "/" each. It is not 3 NNs sharing "/hbase" with files distributed automatically among them. ViewFS is a client-side FS that can talk to the 3 NNs and rewrite paths locally. That is, you can choose to mount / of NN1 as /hbase1, / of NN2 as /hbase2, and / of NN3 as /hbase3, but you cannot bind all of them to a common root (/). When you then ask for viewfs:///hbase1/foo, the request internally goes to NN1 looking for just /foo. NN1 does not know what "/hbase1" is; only the client (ViewFS) does, and it translates/brokers the request for you automatically. Simply put, no: such a setup with a single HBase cluster over 3x NNs is not possible. Even if you did provide some form of viewfs:// as the root directory in the HBase config, it would fail because of ViewFS restrictions around moving paths between mount-point paths.
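For illustration, a client-side mount table of the kind described above is configured in core-site.xml roughly as follows (the mount-table name and NameNode hosts are hypothetical):

```xml
<property>
  <name>fs.defaultFS</name>
  <value>viewfs://clusterX</value>
</property>
<!-- Each link maps a client-visible path to the root of one NameNode -->
<property>
  <name>fs.viewfs.mounttable.clusterX.link./hbase1</name>
  <value>hdfs://nn1.example.com:8020/</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./hbase2</name>
  <value>hdfs://nn2.example.com:8020/</value>
</property>
<property>
  <name>fs.viewfs.mounttable.clusterX.link./hbase3</name>
  <value>hdfs://nn3.example.com:8020/</value>
</property>
```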
01-28-2016
12:40 AM
1 Kudo
You are looking for HBase's request-throttling features: http://blog.cloudera.com/blog/2015/05/new-in-cdh-5-4-apache-hbase-request-throttling/
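As a quick taste of the feature (the user, table, and limits below are hypothetical), throttles are set from the HBase shell:

```
# Limit a user's request rate, and a table's write bandwidth
hbase> set_quota TYPE => THROTTLE, USER => 'appuser', LIMIT => '100req/sec'
hbase> set_quota TYPE => THROTTLE, TABLE => 't1', LIMIT => '10M/sec'
```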