Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 878 | 06-04-2025 11:36 PM |
| | 1450 | 03-23-2025 05:23 AM |
| | 728 | 03-17-2025 10:18 AM |
| | 2612 | 03-05-2025 01:34 PM |
| | 1733 | 03-03-2025 01:09 PM |
05-05-2018
11:51 AM
1 Kudo
@Raj ji I think your symlink is broken; please recreate the symlink and re-test. It should look like this:
lrwxrwxrwx 1 root root 23 Oct 19 2017 /usr/hdp/2.6.3.0/hive/conf -> /etc/hive/2.6.3.0/0
To create a new symlink (fails if the symlink already exists):
ln -s /path/to/file /path/to/symlink
To create or update a symlink:
ln -sf /path/to/file /path/to/symlink
Hope that helps.
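If you want to recreate that specific link in one step, here is a minimal sketch assuming the HDP 2.6.3.0 paths shown above (adjust the version to match your install):
# verify what the link currently points to
ls -l /usr/hdp/2.6.3.0/hive/conf
# recreate it, replacing any existing (broken) link
ln -sfn /etc/hive/2.6.3.0/0 /usr/hdp/2.6.3.0/hive/conf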
05-05-2018
11:37 AM
@Sim kaur Everything was certainly working when you had 3 datanodes! The default replication factor is 3, so if you delete 2 out of 3 HDFS datanodes, that literally means you have only ONE copy of your file. With 6 nodes you could have a setup like this:
- 2 master nodes
- 3 datanodes (every datanode should also run a NodeManager by default)
- 1 edge node (a low-end node)
You should have at least 3 ZooKeeper servers running and a ZooKeeper client on each node. When you are not running NameNode HA you will see the NN and SNN running on the same node; the SNN daemon is only an NN helper that merges the edits and fsimage, offloading the merging task from the NN. If you plan to have High Availability, you should configure real NameNode HA, and the primary and standby NameNodes MUST run on 2 different nodes! There is no better document than the HWX multi-homed cluster guide, but from your setup you are running CDH, and I don't think there is a big difference. Please add the custom hdfs-site property (check the HDFS configuration parameters in CDH):
dfs.client.use.datanode.hostname=true
In your previous post above, can you explain points 3 and 4? Please revert.
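As a quick way to confirm how many replicas a file actually has after losing datanodes, here is a sketch with a placeholder path:
# print the replication factor recorded for the file
hdfs dfs -stat %r /path/to/your/file
# once enough datanodes are back, restore replication to 3 and wait for it to complete
hdfs dfs -setrep -w 3 /path/to/your/file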
05-05-2018
08:57 AM
@Subramanian Govindasamy Can you check the /etc/hosts entries on all the nodes? Then do the following with the ambari-agent on the affected node: move /var/lib/ambari-agent/data/structured-out-status.json to /tmp and restart the ambari-agent.
# ambari-agent restart
Do you see any error/exception in /var/log/ambari-server/ambari-server.log?
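A minimal sketch of those two steps on the affected node, using the paths mentioned above:
# move the cached status file out of the way
mv /var/lib/ambari-agent/data/structured-out-status.json /tmp/
# restart the agent so it reports a fresh status
ambari-agent restart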
05-05-2018
08:17 AM
@Sim kaur
Problem
The application log file shows:
74865 millis timeout while waiting for channel to be ready for connecting: java.nio.channels.SocketChannel[connected local=/172.31.4.192:42632 remote=/172.31.4.192:50010]. 74865 millis timeout left.
All nodes are connected to each other via an internal switch on the 172.31.4.x subnet. This network is not open to public access.
Cause
Each node in the Hadoop cluster has an internal IP address (through the internal switch) and an external IP address used to communicate with clients and external apps, while the Hadoop cluster itself was set up using the internal IP addresses. According to the description, this is caused by the multi-homed cluster.
Solution
The property dfs.client.use.datanode.hostname needs to be set in the hdfs-site.xml file. This parameter forces a client to retrieve a hostname instead of an IP address and perform its own lookup of that hostname to get a routable path to the host. To solve this, add the following line to the custom hdfs-site properties:
dfs.client.use.datanode.hostname=true
Hope that helps; please revert.
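If you maintain hdfs-site.xml by hand rather than through the management UI, the same setting would look like this (a sketch; the property name and value are exactly those given above):
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>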
05-04-2018
08:59 PM
@Subramanian Govindasamy That means HDFS is down. Can you start it from the Ambari UI or the CLI?
05-04-2018
12:54 PM
@Subramanian Govindasamy It seems you have a problem with your auth-to-local rules; please validate them.
"message": "Invalid value for webhdfs parameter"
The conclusion is: the username used with the query is checked against a regular expression and, if it does not match, the above exception is returned. The default regular expression is:
^[A-Za-z_][A-Za-z0-9._-]*[$]?$
Can you start the namenode manually?
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
Please revert.
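As a quick sanity check of the username against that default pattern, a sketch with a placeholder username:
# prints "valid" if the username matches the default regular expression, "rejected" otherwise
echo "your_username" | grep -qE '^[A-Za-z_][A-Za-z0-9._-]*[$]?$' && echo valid || echo rejected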
05-04-2018
07:36 AM
@harsha vardhan bandaru Apache Kafka is an open-source stream-processing software platform developed by the Apache Software Foundation and written in Scala and Java. Confluent Platform includes Apache Kafka, but also bundles a few additional components that can make Apache Kafka easier to use.
05-01-2018
07:39 PM
2 Kudos
@Michael Bronson You can safely delete them.
05-01-2018
11:07 AM
@Michael Bronson Yes, that should delete the corrupt blocks; notice the space between the / and -delete. Alternatively, simply use the -rm option, see below:
hdfs dfs -rm /path/to/file/with/permanently/missing/blocks
To delete the first missing block in the case of your output above (the cluster will rebalance with time, or you can run the balancer manually), i.e.:
hdfs dfs -rm /localF/STRZONEZone/intercept_by_country/2018/4/10/16/2018_4_10_16_45.parquet/part-00003-8600d0e2-c6b6-49b7-89cd-ef2a2bc1dc5e.snappy.parquet
Hope that clarifies.
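If you want to see which files are affected before removing anything, a sketch using standard fsck options:
# list the files that have corrupt or missing blocks
hdfs fsck / -list-corruptfileblocks
# inspect block details under a specific path (placeholder path)
hdfs fsck /localF/STRZONEZone -files -blocks -locations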