Member since
10-01-2018
802
Posts
143
Kudos Received
130
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3072 | 04-15-2022 09:39 AM |
|  | 2474 | 03-16-2022 06:22 AM |
|  | 6552 | 03-02-2022 09:44 PM |
|  | 2907 | 03-02-2022 08:40 PM |
|  | 1914 | 01-05-2022 07:01 AM |
10-06-2020
05:05 AM
1 Kudo
@Ananya_Misra The NameNode restart failed with the exception:

'org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/hdfs/namenode is in an inconsistent state: previous fs state should not exist during upgrade. Finalize or rollback first.'

This happens because, during an upgrade, the NameNode creates a different directory layout under its name directory. If the NameNode is stopped before the upgrade is finalized, the directory is left in an inconsistent state, and on a normal restart the NameNode cannot pick up the directory with the different layout. To resolve this, start the NameNode manually from the command line with the -rollingUpgrade started option and proceed with the upgrade:

# su -l hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode -rollingUpgrade started'
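Before and after restarting, it can help to confirm the rolling-upgrade state from the command line. A minimal sketch, assuming an HDFS client is on the PATH (the guard is only so the snippet degrades gracefully on a host without one):

```shell
# Query the current rolling-upgrade status before restarting the NameNode.
# Guarded so the snippet is safe to run on a host without the HDFS client.
if command -v hdfs >/dev/null 2>&1; then
  hdfs_client=yes
  hdfs dfsadmin -rollingUpgrade query
else
  hdfs_client=no
  echo "hdfs client not on PATH; run this on a NameNode host"
fi
```

Once the upgrade is verified, `hdfs dfsadmin -rollingUpgrade finalize` finalizes it so that normal restarts work again.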
10-06-2020
04:22 AM
@Mondi As you know, Atlas is not packaged with the Cloudera 6.x version, so installing it there is very tedious. Even if you manage to install Atlas, it is not guaranteed to work with Cloudera 6.x, as there are many dependencies and built-in configurations that need to work within Cloudera Manager. Anyway, you can take a look at this doc to give it a try: http://atlas.apache.org/2.0.0/InstallationSteps.html
10-06-2020
01:00 AM
1 Kudo
@Ananya_Misra It seems your curl command is not working. One possibility is that the curl installed on the system does not support the --negotiate option; in that case, the curl command at the OS prompt returns output like the following:

[hdfs@namenode_hostname ~]$ curl --negotiate -u : -s "http://namenode_hostname:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem"
curl: option --negotiate: the installed libcurl version doesn't support this
curl: try 'curl --help' or 'curl --manual' for more information

You can confirm which curl binary is being picked up with:

[hdfs@namenode_hostname data]# which curl
[hdfs@namenode_hostname data]# curl --version

I would suggest running the above curl command manually to see whether it works and what output it produces. If the installed curl lacks this support, you might need to uninstall that curl package and install the one from the default OS packages, or change $PATH so that the OS default curl is used; either way, verify with the commands above first. Also, if you are an entitled Cloudera customer, feel free to open a case with Cloudera so that you can get prompt assistance.
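To check locally whether the installed curl was built with SPNEGO/Kerberos support, you can inspect the Features line of `curl --version`. A quick sketch, not an exhaustive test (the feature names matched below are the ones libcurl commonly reports):

```shell
# Inspect curl's compiled-in features for SPNEGO/Kerberos (--negotiate) support.
features=$(curl --version 2>/dev/null | grep -i '^Features' || true)
case "$features" in
  *GSS*|*SPNEGO*|*Kerberos*|*Negotiate*) negotiate_ok=yes ;;
  *) negotiate_ok=no ;;
esac
echo "curl --negotiate supported: $negotiate_ok"
```

If this prints `no`, `--negotiate` will fail regardless of Kerberos configuration, and swapping in the OS-default curl build is the fix.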
09-30-2020
07:54 AM
@farouk You have to set those in hdfs-site.xml. For example, in Cloudera Manager use the HDFS Service Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml, scoped to HDFS (Service-Wide):

Name: dfs.client.datanode-restart.timeout
Value: 30

Or, switching the safety valve to View as XML, something like below:

<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30</value>
</property>
09-30-2020
03:47 AM
2 Kudos
@Cluster-CDP You can follow the Cloudera recommendation below on role distribution by number of nodes in the cluster; in your case, this seems the best fit to me.
https://docs.cloudera.com/cloudera-manager/7.0.3/installation/topics/cdpdc-runtime-cluster-hosts-role-assignments.html

Master Host 1: NameNode, YARN ResourceManager, JobHistory Server, ZooKeeper, Kudu master, Spark History Server

One host for all Utility and Gateway roles: Secondary NameNode, Cloudera Manager, Cloudera Manager Management Service, Hive Metastore, HiveServer2, Impala Catalog Server, Impala StateStore, Hue, Oozie, Flume, Gateway configuration, HBase backup master, Ranger Admin/Tagsync/Usersync servers, Atlas server, Solr server (CDP-INFRA-SOLR instance to support Atlas), Streams Messaging Manager, Streams Replication Manager Service, ZooKeeper

3 - 10 Worker Hosts: DataNode, NodeManager, Impalad, Kudu tablet server
09-29-2020
06:21 AM
@dallanic Sure. Keep us posted here, and don't forget to mark this post as the solution so that it helps other members as well.
09-29-2020
04:52 AM
1 Kudo
@dallanic It indicates that the procedure that was run to remove the entry got hung for some reason, so there might be an issue with the hosts, disk, memory, or the table itself. I would suggest a clean retry: kill the hung process first, and then perform the removal again.
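To identify the stuck procedure before killing anything, recent HBase shells expose list_procedures. A hedged sketch (the -n non-interactive flag is available in newer HBase shell versions; the guard is only so the snippet is safe on a host without an HBase client):

```shell
# List running/hung master procedures via the HBase shell (list_procedures),
# so the stuck one can be identified before any cleanup is attempted.
# Guarded so the snippet is safe to run on a host without an HBase client.
if command -v hbase >/dev/null 2>&1; then
  hbase_client=yes
  echo "list_procedures" | hbase shell -n
else
  hbase_client=no
  echo "hbase client not on PATH; run this on a cluster host"
fi
```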
09-29-2020
03:21 AM
2 Kudos
@Pavel_kostyukov You have to install them on an as-needed basis; if you need them, you can install them by following the document below.
https://docs.cloudera.com/documentation/enterprise/latest/topics/impala_udf.html#udf_demo_env
09-29-2020
01:34 AM
@dallanic Are you getting any error message? You might want to look at the various commands and try again; the doc below can be useful.
https://learnhbase.net/2013/03/02/hbase-shell-commands/
09-29-2020
01:18 AM
1 Kudo
@b1995assuncao I don't think it's possible. The range is dynamic and will, by default, always be the latest time range.