Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1969 | 07-09-2019 12:53 AM |
| | 11877 | 06-23-2019 08:37 PM |
| | 9141 | 06-18-2019 11:28 PM |
| | 10126 | 05-23-2019 08:46 PM |
| | 4576 | 05-20-2019 01:14 AM |
06-23-2015
10:58 AM
1 Kudo
There are two factors to consider here: authentication and authorisation. You've enabled both for HBase. You can disable the latter if you do not need it. If you do need authorisation, then you need to configure it, as out of the box there are no rules except 'administrative' rights for the 'hbase' login user. To read more on configuring your authorisation rules via the grant/revoke commands, see http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_sg_hbase_authorization.html
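As a minimal sketch (the user and table names here are hypothetical), grant and revoke rules are managed from the HBase shell roughly like this:

```shell
# From the hbase shell, as a user with admin ('A') rights.
# Permission letters: R=read, W=write, X=execute, C=create, A=admin.
grant 'alice', 'RW', 'my_table'      # allow alice to read/write my_table
user_permission 'my_table'           # list the current ACLs on the table
revoke 'alice', 'my_table'           # remove alice's access again
```

These commands only take effect when HBase authorisation is enabled as described in the linked documentation.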
06-22-2015
04:14 PM
CM separates server-side configs from client-side configs. The specific property "yarn.log-aggregation-enable" is used only by NodeManagers, as a toggle. Clients do not use it, so it is not found in the regular /etc/hadoop/conf/*.xml configs (these are the gateway configs, or client-configs).

When your Java class runs from within a YARN container, it also inherits the parent environment (the parent being the NodeManager here). It therefore sees the same config paths the NM service does, which explains your output of true when that happens.

YARN, in its role as an application framework, provides no notion of 'application configuration' and expects custom applications to roll their own solution. For example, the MR2 app uses the concept of a 'job.xml' written and sent from clients, which is then utilised as the single source of configuration truth (rather than sourced from the environment of whichever random NM the AM/Tasks may run on).

Does this help resolve the confusion?
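A quick way to see this split on a CM-managed cluster (the process-directory path below is an assumption and varies per deployment and process instance) is to grep both config locations for the property:

```shell
# Client (gateway) configs: the toggle is typically absent here
grep -r "yarn.log-aggregation-enable" /etc/hadoop/conf/ \
  || echo "not present in client configs"

# The NodeManager's effective config lives under the CM agent's
# per-process directory, where the toggle does appear
grep -rl "yarn.log-aggregation-enable" \
  /var/run/cloudera-scm-agent/process/*NODEMANAGER*/
```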
06-21-2015
05:38 AM
You can use the CM -> HBase -> Configuration -> RegionServer Safety Valve (for hbase-site.xml) field to reorder the coprocessors via a manual XML override.
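As an illustrative sketch (the coprocessor class names are placeholders), the safety-valve override would look roughly like the following; coprocessors are loaded in the order they are listed in the value:

```xml
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.example.FirstCoprocessor,com.example.SecondCoprocessor</value>
</property>
```

A RegionServer restart is required for the reordering to take effect.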
06-21-2015
05:36 AM
1 Kudo
You (or the software you use) appear to be performing appends on files that are being modified in parallel by other concurrent jobs/workflows. HDFS uses a single-writer model for its files, so observing this error is normal if your software lacks logic to handle it and wait for a proper writer lease before performing its work. Without audit logs of the filenames involved, there's little more we can tell. We also advise against using appends unless your use case absolutely requires them.
05-27-2015
09:36 PM
Yes, it is an administrative limit, so it cannot be changed per job.
05-27-2015
09:20 PM
If this is an MR1 question, then the config mentioned is a server-side limit and cannot be overridden at a per-job level. You will need to raise it in the JobTracker config and restart the JobTracker for it to take effect.
05-19-2015
03:19 AM
1 Kudo
You do not need a SecondaryNameNode in HA. Please delete the role to resolve your issue. You can read about the HA architecture at http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Hardware_resources to further understand how HA works.
05-11-2015
08:44 PM
If you cannot get the admins to provide your app with a near-infinite ticket lifetime, you need to instead loop the login process via a daemon thread, such as in this snippet as an example:

```java
private void runLoginAndRenewalThread(final Configuration conf) throws IOException {
  // Do a blocking login first
  SecurityUtil.login(conf, KEYTAB_CONF, KEYTAB_USER);
  // Spawn the relogin thread next
  Thread reloginThread = new Thread() {
    @Override
    public void run() {
      while (true) {
        try {
          SecurityUtil.login(conf, KEYTAB_CONF, KEYTAB_USER);
          Thread.sleep(ELEVEN_MINUTES);
        } catch (IOException e) {
          e.printStackTrace();
          interrupt();
        } catch (InterruptedException e) {
          e.printStackTrace();
          interrupt();
        }
      }
    }
  };
  reloginThread.setDaemon(true);
  reloginThread.start();
}
```
05-06-2015
04:11 AM
The last specified -Xmx normally takes precedence in Oracle/Sun JREs. What evidence in the failure logs suggests that the mapper starts with only a 200m heap instead?
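A quick local check of this behaviour (assuming a HotSpot JVM on the PATH) is to pass two -Xmx flags and print the effective heap limit:

```shell
# The later -Xmx1g should win, so MaxHeapSize prints as roughly 1 GB
java -Xmx200m -Xmx1g -XX:+PrintFlagsFinal -version | grep -i MaxHeapSize
```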
05-04-2015
06:28 AM
The block ID <=> DN location mapping is not stored in the HDFS fsimage; it is kept only in the running NameNode's memory. The block report's primary work is to report the availability status of each block replica on the DNs, while the checkpoint's work is to persist the namespace and namespace-associated state (such as snapshots, etc.). The checkpoint can therefore run in parallel without problems, as the two information structures do not conflict with one another.