Member since
10-01-2018
802
Posts
143
Kudos Received
130
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3064 | 04-15-2022 09:39 AM |
| | 2471 | 03-16-2022 06:22 AM |
| | 6539 | 03-02-2022 09:44 PM |
| | 2904 | 03-02-2022 08:40 PM |
| | 1910 | 01-05-2022 07:01 AM |
10-20-2020
12:34 PM
@saudcibc There is no separate documentation per maintenance release; the 6.2 documentation is valid for all 6.2.x versions: https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/administration.html Per-release documentation, in the form you are looking for, is only available as release notes: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cm_6_release_notes.html#cm6_release_notes
10-20-2020
12:17 PM
@manojdara Not via the API, but for reference you can find this information in the packaging doc: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_63_packaging.html#cdh_630_packaging

CDH 6.3.3 Packaging

| Component | Component Version |
|---|---|
| Apache Avro | 1.8.2 |
| Apache Flume | 1.9.0 |
| Apache Hadoop | 3.0.0 |
| Apache HBase | 2.1.4 |
| HBase Indexer | 1.5 |
| Apache Hive | 2.1.1 |
| Hue | 4.4.0 |
| Apache Impala | 3.2.0 |
| Apache Kafka | 2.2.1 |
| Kite SDK | 1.0.0 |
| Apache Kudu | 1.10.0 |
| Apache Solr | 7.4.0 |
| Apache Oozie | 5.1.0 |
| Apache Parquet | 1.9.0 |
| Parquet-format | 2.4.0 |
| Apache Pig | 0.17.0 |
| Apache Sentry | 2.1.0 |
| Apache Spark | 2.4.0 |
| Apache Sqoop | 1.4.7 |
| Apache ZooKeeper | 3.4.5 |
10-20-2020
12:04 PM
@emeric Looking at your logs, this appears to be a port issue; you should check whether the port is actually listening on the host.
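One way to do that check from the client side is a simple TCP connection test; this is a minimal sketch (the hostname below is a placeholder, and 7180 is only used as an example since it is Cloudera Manager's default web UI port):

```python
import socket

def port_is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

# Example (hypothetical host):
# port_is_listening("cm-host.example.com", 7180)
```

If this returns False from the same machine that runs the service, check that the process is up and bound to the expected interface (e.g. not only 127.0.0.1).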
10-16-2020
11:53 AM
@Yuriy_but Please follow the doc below for NameNode heap calculations: https://docs.cloudera.com/documentation/enterprise/latest/topics/admin_nn_memory_config.html There is also another thread that explains the same thing nicely: https://community.cloudera.com/t5/Support-Questions/How-a-NameNode-Heap-size-is-calculated/td-p/227052
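As a rough illustration of the common rule of thumb (about 1 GB of NameNode heap per million blocks; treat the exact ratio as an assumption and consult the linked doc for authoritative sizing):

```python
import math

def recommended_nn_heap_gb(num_blocks: int,
                           gb_per_million_blocks: float = 1.0) -> int:
    """Estimate NameNode heap (GB) from the cluster's block count,
    rounding up and never recommending less than 1 GB."""
    return max(1, math.ceil(num_blocks / 1_000_000 * gb_per_million_blocks))

# e.g. a cluster with ~9.5 million blocks:
# recommended_nn_heap_gb(9_500_000)  # -> 10
```

This is only a starting point; real sizing should also account for files, directories, and headroom as described in the doc.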
10-12-2020
08:19 AM
Yes, the job failed: ERROR tool.ImportTool: Error during import: Import job failed!
10-07-2020
12:11 AM
@GangWar Thank you for your reply. I confirmed that I can manually upgrade the agent on each host, which is also described in the installation manual as option 2, "Upgrade the Agents using the Command Line." I am still a little curious why the CM-distributed method did not work, but in any case I am moving forward. Best regards, Tak568
10-06-2020
11:42 PM
@atl-techdev Cloudera does not officially support Oracle Database for NiFi Registry. Please check the support matrix here: https://docs.cloudera.com/cfm/2.0.4/support-matrix/topics/cfm-supported-databases.html
10-06-2020
05:05 AM
1 Kudo
@Ananya_Misra The NameNode restart failed with the exception: 'org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/hdfs/namenode is in an inconsistent state: previous fs state should not exist during upgrade. Finalize or rollback first.'

This occurs because, during an upgrade, the NameNode creates a different directory layout under its data directory; if the NameNode is stopped before the upgrade is finalized, the directory is left in an inconsistent state and the NameNode cannot pick up the new layout.

To resolve this, start the NameNode manually from the command line with the -rollingUpgrade started option and proceed with the upgrade:

# su -l hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode -rollingUpgrade started'
10-06-2020
04:22 AM
@Mondi As you know, Atlas is not packaged with Cloudera 6.x, so installing it there is very tedious. Even if you manage to install Atlas, there is no guarantee it will work with Cloudera 6.x, since there are many dependencies and built-in configurations that need to work within CM. In any case, you can take a look at this doc to give it a try: http://atlas.apache.org/2.0.0/InstallationSteps.html
10-04-2020
11:57 PM
'abort_procedure' is deprecated and no longer available; it seems you now need to use HBCK2's 'bypass' command to abort a procedure. We did not apply this because, fortunately, an upgrade to CDH 6.3.3 on one cluster and a YARN patch on the other made an HBase restart necessary, which removed the peers on both clusters.