Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1025 | 06-04-2025 11:36 PM |
| | 1582 | 03-23-2025 05:23 AM |
| | 798 | 03-17-2025 10:18 AM |
| | 2863 | 03-05-2025 01:34 PM |
| | 1874 | 03-03-2025 01:09 PM |
07-24-2019
11:00 PM
1 Kudo
@jessica moore This API call should do the magic; remember to substitute the placeholders with your actual cluster values:
curl -u {ambari-username}:{ambari-password} -H "X-Requested-By: ambari" -X GET http://{ambari-host}:{ambari-port}/api/v1/clusters/{clustername}/services
Hope that helps. Please revert.
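A minimal sketch of the same call with illustrative values filled in; the host name, cluster name "MyCluster", the admin credentials, and port 8080 are assumptions, and the optional fields filter is only there to trim the response down to service names and states:

```bash
# List services in the cluster (placeholder values; adjust to your environment).
# The fields parameter asks Ambari for a partial response: just the service state.
curl -u admin:admin \
     -H "X-Requested-By: ambari" \
     -X GET "http://ambari-host.example.com:8080/api/v1/clusters/MyCluster/services?fields=ServiceInfo/state"
```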
07-12-2019
11:46 PM
@Habtamu Wubneh Can you check your Java environment?
$ which -a java
Are you executing the java command as a non-root user? The error
'/var/log/hadoop/hdfs/jsvc.out' for reading: Permission denied
points to a permissions problem: what are the permissions and ownership on that file? Then set the JAVA_HOME value in the hadoop-env.sh file:
export JAVA_HOME=/usr/java/default
and start the DataNode:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
HTH
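A quick sketch of the permission check described above; the hdfs:hadoop ownership is the usual default on an HDP install and is an assumption here:

```bash
# Check who owns the log file the DataNode cannot read (path taken from the error).
ls -l /var/log/hadoop/hdfs/jsvc.out

# On a typical HDP install the HDFS log directory is owned by hdfs:hadoop;
# if ownership drifted (e.g. after starting the daemon as root), reset it.
chown -R hdfs:hadoop /var/log/hadoop/hdfs
```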
07-01-2019
07:01 AM
@Spandan Mohanty Both log files indicate that DNS is the problem; could you verify that the DNS server is running?
start-timeline-service-v2-0-reader.txt: The DNS server may be temporarily unavailable, or there could be a network problem.
resource-manager-yarn.txt: Your request could not be processed because an error occurred contacting the DNS server.
Please revert.
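A minimal sketch of how you might verify name resolution from the affected node; "nodetwo" is used only as a placeholder hostname:

```bash
# Confirm which resolver the node is configured to use.
cat /etc/resolv.conf

# Check that the cluster hostname resolves (replace nodetwo with your actual host).
getent hosts nodetwo
nslookup nodetwo

# As a fallback, make sure the entry exists in /etc/hosts on every node.
grep nodetwo /etc/hosts
```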
07-01-2019
06:46 AM
1 Kudo
@Michael Bronson The most probable explanation is that one of your Kafka brokers is down. Could you please check the active Kafka brokers?
$ ./zookeeper-shell.sh localhost:2181 <<< "ls /brokers/ids"
The output should be similar to the one below:
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /brokers/ids
[0, 1, 2, 3, 4]
[zk: localhost:2181(CONNECTED) 1]
If you see fewer broker IDs than expected, investigate why that particular broker is down. What is your current replication factor? You should also check the offline-partitions metric to confirm this, and verify that all brokers are functioning normally. I would also suggest increasing the replication factor since you have a multi-broker configuration.
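A short sketch of the partition checks mentioned above, run from the Kafka bin directory; the ZooKeeper address is a placeholder, and on newer Kafka versions you would pass --bootstrap-server <broker>:9092 instead of --zookeeper:

```bash
# Partitions whose leader is currently unavailable (offline partitions).
./kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions

# Partitions with fewer in-sync replicas than the configured replication factor.
./kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions
```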
06-29-2019
06:48 PM
@Hamilton Castro The simple and clear answer is "YES"!! HDFS snapshots are read-only, point-in-time copies of the file system, and they can be taken at any level of the directory tree. A snapshot is valuable as a backup and as a disaster-recovery option in business continuity plans.

It is tempting to think of a snapshot as a full point-in-time [PIT] backup, but that is not what it is: if you snapshot 5 TB of data, the snapshot will not be 5 TB in size, because an HDFS snapshot is not a full copy of the data but a copy of the metadata at that point in time. Blocks in the DataNodes are not copied; the snapshot records only the block list and the file size, so there is no data copying (more accurately, just a new record in the inode). Only subsequent modifications (appends and truncates, for HDFS) cause any data to be recorded, and the snapshot view is computed by subtracting those modifications from the current data. The modifications are recorded in reverse chronological order so that the current data can still be accessed directly.

To take snapshots, the HDFS directory has to be made snapshottable, and while a snapshottable directory contains snapshots it cannot be deleted or renamed. So when you first take a snapshot, your HDFS storage usage stays the same; data is only copied/written when you modify it. When copying data between clusters or storage systems, copying a snapshotted file is no different from copying a regular file: both copy the same way, with bytes and with metadata. There is no "copy only metadata" operation.
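A minimal sketch of the snapshot workflow; the directory path and snapshot name are illustrative:

```bash
# Mark a directory as snapshottable (requires HDFS superuser).
hdfs dfsadmin -allowSnapshot /data/warehouse

# Take a snapshot; storage usage does not grow until files under /data/warehouse change.
hdfs dfs -createSnapshot /data/warehouse before-upgrade

# Snapshots are exposed under the hidden .snapshot directory.
hdfs dfs -ls /data/warehouse/.snapshot/before-upgrade

# Compare the snapshot against the current state of the directory ("." = current).
hdfs snapshotDiff /data/warehouse before-upgrade .
```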
06-28-2019
11:40 AM
@Spandan Mohanty These are the errors you are encountering while starting HDFS/YARN:
2019-06-28 14:58:44,564 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://nodetwo:8020 -safemode get | grep 'Safe mode is OFF'' returned 1. safemode: Call From nodetwo/172.16.217.206 to nodetwo:8020 failed on connection exception: java.net.ConnectException: Connection refused;
Network Error (dns_server_failure): Your request could not be processed because an error occurred contacting the DNS server. The DNS server may be temporarily unavailable, or there could be a network problem.
Please do the following while logged on as hdfs (assuming you are the root user):
# su - hdfs
$ hdfs dfsadmin -safemode get
The above should confirm the NameNode is in safe mode.
$ hdfs dfsadmin -safemode leave
Validate that safe mode is off:
$ hdfs dfsadmin -safemode get
Then restart HDFS/YARN from Ambari; that should resolve the issue.
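A quick sanity check you may want before forcing safe mode off, assuming the NameNode is already reachable; the commands are standard HDFS admin tools and only the output trimming is illustrative:

```bash
# See whether the NameNode is just waiting on block reports
# (missing/under-replicated blocks keep it in safe mode).
hdfs dfsadmin -report | head -n 20

# Look for missing or corrupt blocks across the namespace; the summary is at the end.
hdfs fsck / | tail -n 30
```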
06-28-2019
06:26 AM
1 Kudo
@Spandan Mohanty Can you share the Timeline Service V2.0 Reader / YARN logs?
06-26-2019
04:49 PM
@Michael Bronson Here is an article by HWX support, "unable to read additional data from client session id"; you could try checking your log4j.properties to adjust those warnings.
06-26-2019
08:55 AM
With the merger, we expected that sooner or later the HCC would be more vibrant than the Cloudera community. How will solutions, KB notes, and even the leaderboard scores be merged into the new platform? Can we also have a preview of the new CDP, and access, maybe, to a restricted release of the new platform? Many questions but few answers; it would be lovely to have a new track for CDP, as we are oblivious of the release date!
06-26-2019
01:58 AM
2 Kudos
@Manish thakur AFAIK, data is not removed from a DataNode when you decommission it; further writes to that DataNode will no longer be possible, though. When you decommission a DataNode, the replicas it holds are marked as "decommissioned" replicas, which are still eligible for read access. Generally speaking, decommissioning stops the NameNode from placing new blocks on the DataNode being decommissioned and re-replicates its existing blocks to other nodes, so you are never under-replicated. Once decommissioning is done, all copies of the blocks on the decommissioned DataNode have been replicated to other nodes. Remember the data is not deleted, only replicated, so you will need to wipe the decommissioned node's data directories (or reformat its filesystem) to completely clean it.
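A minimal sketch of the usual decommission flow on a plain HDFS setup; the exclude-file path and hostnames are assumptions, and Ambari-managed clusters handle the same steps through the UI:

```bash
# Add the DataNode's hostname to the exclude file referenced by dfs.hosts.exclude
# in hdfs-site.xml (path is illustrative).
echo "datanode04.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the include/exclude files and start decommissioning.
hdfs dfsadmin -refreshNodes

# Watch progress; the node moves from "Decommission in progress" to "Decommissioned".
hdfs dfsadmin -report | grep -A 2 "datanode04"

# After decommissioning completes, the data is still on the node's disks;
# wipe the DataNode data directories yourself if you need the space reclaimed.
```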