Member since
02-16-2017
20
Posts
1
Kudos Received
0
Solutions
02-19-2021
05:50 AM
hi, when i upgraded to CDH 6.2.0, the namenodes stopped exposing metrics on port 9070. does anyone know where the metrics are exposed now?
02-11-2021
10:54 PM
Yes, you can set up a local mirror for offline installation. It is possible with the CDP versions as well as with the older CDH 5 and 6 versions. But even these legacy versions are behind a paywall now.
02-10-2021
08:48 AM
Hi @sbn, I think you will find a relevant, previously-posted answer to a question very similar to yours in this thread: "https://archive.cloudera.com/cm6/6.2.0/ubuntu1604/ displaying 404 error". Hope this helps.
12-08-2020
06:54 AM
i have a long-running job that has survived some upgrades (OS and older CDH versions).
the command executed:
/usr/bin/hadoop jar /opt/cloudera/parcels/CDH/lib/hbase/hbase-server.jar importtsv
the exception this produces:
Exception in thread "main" java.lang.ClassNotFoundException: importtsv
it seems that importtsv is not there? however, i have not been able to locate the class responsible for this.
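For what it's worth: the short name "importtsv" is resolved by a Driver class bundled with HBase's MapReduce tools, and in newer HBase releases (2.x, as shipped in CDH 6) those tools moved out of hbase-server.jar into hbase-mapreduce.jar, so the old invocation can stop resolving after an upgrade. A sketch of two alternative invocations, assuming a standard parcel layout (the commands are built as strings and echoed rather than executed, since they need a live cluster):

```shell
# Calling the tool by its fully-qualified class avoids the short-name lookup
# in the jar's Driver class entirely.
IMPORTTSV_CLASS="org.apache.hadoop.hbase.mapreduce.ImportTsv"

# Option 1: let the `hbase` wrapper set up the classpath
# (table name, column mapping, and input path are hypothetical).
OPT1="hbase $IMPORTTSV_CLASS -Dimporttsv.columns=HBASE_ROW_KEY,cf:val mytable /input"

# Option 2: point `hadoop jar` at the mapreduce jar instead of hbase-server.jar
# (the parcel path is an assumption; adjust to your CDH layout).
OPT2="hadoop jar /opt/cloudera/parcels/CDH/lib/hbase/hbase-mapreduce.jar importtsv"

echo "$OPT1"
echo "$OPT2"
```

If option 2's jar exists on your nodes, that is the strongest hint the tools were relocated rather than removed.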
04-29-2020
05:09 AM
@sbn No, there is no public-facing doc that shows the future roadmap. The only matrix you can see about those versions is what is available publicly: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_os_requirements.html#c63_supported_os https://docs.cloudera.com/cdpdc/7.0/release-guide/topics/cdpdc-os-requirements.html
03-11-2020
05:44 AM
CDH: 5.13.0
HIVE version:
VER_ID | SCHEMA_VERSION | VERSION_COMMENT            | SCHEMA_VERSION_V2
-------+----------------+----------------------------+------------------
     1 | 1.1.0          | Hive release version 1.1.0 | 1.1.0-cdh5.12.0
when trying to run the 'Upgrade Hive Metastore Database Schema' command
through the GUI (CDH) it reports the following:
(STDOUT)
Starting upgrade metastore schema from version 1.1.0-cdh5.12.0 to 1.1.0-cdh5.13.0 schemaTool completed Exit code: 0
However (STDERR) reports sort of the opposite:
INFO metastore.CDHMetaStoreSchemaInfo: Current version is higher than or equal to 1.1.0-cdh5.12.0 Skipping file 1.1.0-to-1.1.0-cdh5.12.0
i want to be on cdh5.13.0, but hive thinks it is on an earlier version.
did i miss a migration/upgrade step somewhere to cause this?
does anybody know how to force an upgrade to 5.13.0?
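If it helps, Hive's schemaTool can also be run by hand to inspect the version the metastore DB actually reports and to force an upgrade from an explicit starting version; `-info` and `-upgradeSchemaFrom` are upstream schematool flags, but the parcel path and the `mysql` dbType below are assumptions. The commands are echoed rather than executed, since they need the metastore database:

```shell
# Path is an assumption; adjust to your parcel layout.
SCHEMATOOL="/opt/cloudera/parcels/CDH/lib/hive/bin/schematool"

# Show the schema version the metastore DB currently reports:
CHECK="$SCHEMATOOL -dbType mysql -info"

# Force an upgrade starting from an explicitly stated version:
UPGRADE="$SCHEMATOOL -dbType mysql -upgradeSchemaFrom 1.1.0"

echo "$CHECK"
echo "$UPGRADE"
```

Comparing the `-info` output against the VERSION table row above should show whether the GUI command and the database disagree before you force anything.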
- Tags:
- CDH 5.13
- schematool
10-24-2019
12:08 AM
thanks, i suppose one could just move that to wherever you want the home to be. this was part of a hack deployment, so in the end it was not needed.
08-14-2019
03:44 AM
on this guide: https://www.cloudera.com/documentation/enterprise/5-3-x/topics/search_hdfsfindtool.html
there is a reference in the -mtime option:
Evaluates as true if the file modification time subtracted
from the start time is n days
what does "start time" refer to here?
it does not seem to behave like normal bash find.
i want functionality like:
if mtime is more than 30 days ago (from current time), print the folder/file.
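For comparison: in GNU find (and find-style tools generally), the "start time" is the moment the command began running, and `-mtime +n` matches files modified strictly more than n days before that — which is exactly the "more than 30 days ago" behavior described. A sketch of both forms, with the HdfsFindTool jar path as an assumption (commands echoed, not executed, since the second needs a cluster):

```shell
# Local bash equivalent of the desired behavior:
# +30 = modification time strictly more than 30 days before find started.
LOCAL_CMD="find /data -mtime +30 -print"

# HdfsFindTool form from Cloudera Search (jar path is an assumption):
HDFS_CMD="hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-job.jar org.apache.solr.hadoop.HdfsFindTool -find /data -mtime +30"

echo "$LOCAL_CMD"
echo "$HDFS_CMD"
```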
06-20-2019
01:37 AM
thanks for your answer, do i need the .meta and .meta.tmp files?
11-09-2018
02:32 AM
It is a fluentd issue; have you checked the fluentd documentation? Take a look at https://github.com/fluent/fluent-plugin-webhdfs : <match access.**>
@type webhdfs
namenode master1.your.cluster.local:50070
standby_namenode master2.your.cluster.local:50070
path /path/on/hdfs/access.log.%Y%m%d_%H.log
</match>
06-28-2018
07:45 AM
@sbn yes, it requires a cluster restart. And not just this option: whatever option you try requires a cluster restart, so that the namenode picks up the recent configuration change for the journal node. Regarding copying the journal node edits dir: was it mentioned anywhere in the link that I provided? If not, you can ignore it. If so, you can follow the link below (but this is just for reference; I think you don't need to do this): https://www.cloudera.com/documentation/enterprise/5-13-x/topics/cm_mc_jn.html#concept_cqn_dxp_rs
06-28-2018
04:06 AM
thanks, this is what worked in the end, coupled with kill -9 for the really resilient procs.
04-25-2018
06:18 AM
UPDATE: did some experimenting on my own, and i deleted pkg_resources in /usr/lib/python2.7/dist-packages. this apparently fixed the issue, as hue will now run and allow me to log in. i am not sure if an update would also work...
04-05-2017
04:12 AM
1 Kudo
i want to put a 1:1 elastic cluster inside my hadoop cluster: 1 elastic node on each hadoop datanode. to not interfere too much with the hadoop cluster, i would like to run the elastic nodes on a disk of their own.
the setup: CDH 5.6.0
datanode disk layout (in mounted dirs):
/data/disk1
.
.
.
/data/disk10
20+ data nodes
say i wish to remove disk10 from each datanode: how do i do that without data loss? removing the disk on a decommissioned datanode, and later recommissioning it, takes too long.
- any hint on making this process faster? can i use the rebalancer? (i saw there is an internal datanode balancer in CDH 5.8)
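One avenue worth checking: Hadoop 2.6 (which CDH 5.6 is based on) added DataNode hot-swap of data directories, where you drop the disk from `dfs.datanode.data.dir` and ask the running DataNode to reconfigure itself, so only the blocks that lived on that disk need re-replication instead of a full decommission/recommission. A sketch of the reconfig step, with the hostname and IPC port as assumptions to verify against your version (commands echoed, not executed, since they need a cluster):

```shell
# Step 1 (not shown): edit dfs.datanode.data.dir for the node,
# removing /data/disk10 from the comma-separated list.

# Step 2: ask the DataNode to apply the change in place.
# Host and IPC port (50020) are assumptions for your environment.
RECONFIG_START="hdfs dfsadmin -reconfig datanode dn1.example.com:50020 start"

# Step 3: poll until the reconfiguration reports completion.
RECONFIG_STATUS="hdfs dfsadmin -reconfig datanode dn1.example.com:50020 status"

echo "$RECONFIG_START"
echo "$RECONFIG_STATUS"
```

After the disk is removed on every node, a balancer pass can even out the remaining disks, but the re-replication itself is triggered by the reconfig, not the balancer.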