Member since: 01-19-2017
Posts: 3678
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 664 | 06-04-2025 11:36 PM |
| | 1240 | 03-23-2025 05:23 AM |
| | 612 | 03-17-2025 10:18 AM |
| | 2260 | 03-05-2025 01:34 PM |
| | 1461 | 03-03-2025 01:09 PM |
04-15-2023
12:37 PM
@harry_12 Can you share the download link for the sandbox? I want to try it out.
04-14-2023
10:39 AM
@harry_12 Can you share the configs, i.e. the memory/cores allocated to your sandbox, along with the download link? I will test that and document my process.
04-14-2023
03:34 AM
@harry_12 Sounds familiar. If this is the first time running VirtualBox, is virtualization enabled on the host? Or have you simply tried re-installing it? If you are the type who loves to deep dive, there is good documentation on result code E_FAIL (0x80004005); I am sure that should help out.
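If the host runs Linux, a minimal sketch for confirming that hardware virtualization is exposed to the OS (assuming an Intel VT-x or AMD-V capable CPU):

```bash
# Counts the vmx (Intel) / svm (AMD) CPU flags; a result of 0 means
# hardware virtualization is unsupported or disabled in the BIOS/UEFI.
egrep -c '(vmx|svm)' /proc/cpuinfo
```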
04-09-2023
09:06 AM
@YasBHK The error reads: File /user/hdfs/data/file.xlsx could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation. That means your datanode is down. Can you restart the HDFS service and retry?
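Before retrying, it can help to confirm the datanode actually came back; a quick hedged check, run as the hdfs user:

```bash
# Summarizes cluster health; the live/dead counts should match
# the number of datanodes you expect.
hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'
```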
04-09-2023
03:26 AM
@YasBHK Please ensure both datanodes (2) are running. You definitely have an issue with one of the datanodes, and because of your replication factor, which I guess is 2 from the output, the file /user/hdfs/data/file.xlsx can't be persisted if it can't meet the min replication of 2. First, understand why the second datanode has been excluded: either it's a space-related issue or it just isn't started. Check the dfs.hosts.exclude location (usually /etc/hadoop/conf/dfs.exclude in HDP), remove the host from that file, and run the below:

```bash
hdfs dfsadmin -refreshNodes
```

or just run Refresh Nodes from the Ambari UI. That should resolve the issue. Restart the faulty datanode and your HDFS put command will succeed.
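A hedged follow-up to verify the node rejoined before retrying the put (paths assume the HDP layout mentioned above):

```bash
# The exclude file should no longer list the host
cat /etc/hadoop/conf/dfs.exclude
# Every datanode should report a normal decommission status
hdfs dfsadmin -report | grep 'Decommission Status'
```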
04-08-2023
03:21 PM
1 Kudo
@AbuSaiyeda Can you do the following and revert if you still get issues?

```bash
# Back up the Ambari server properties file
cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.ORIG

# Raise the Ambari server startup timeout and tune the JDBC connection pool
echo 'server.startup.web.timeout=120' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.acquisition-size=5' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.max-age=0' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.max-idle-time=14400' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.max-idle-time-excess=0' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.idle-test-interval=7200' >> /etc/ambari-server/conf/ambari.properties
```

Restart Ambari and monitor. Please let me know if you need further help.
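To apply and sanity-check the change, a hedged sketch (ambari-server is the standard service command on an HDP host):

```bash
# Restart so the new properties take effect
ambari-server restart
# Confirm the six properties were appended at the end of the file
tail -6 /etc/ambari-server/conf/ambari.properties
```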
04-07-2023
04:15 PM
@hassenseoud The log shows:

```
Triggering log roll on remote NameNode hdpmaster2/192.168.1.162:8020
2016-10-24 11:25:52,108 WARN ha.EditLogTailer (EditLogTailer.java:triggerActiveLogRoll(276)) - Unable to trigger a roll of the active NN
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby
```

Action 1: Check which NameNode is active. The service IDs come from dfs.ha.namenodes.mycluster in hdfs-site.xml:

```bash
$ hdfs haadmin -getServiceState namenode1
standby
$ hdfs haadmin -getServiceState namenode2
active
```

Action 2: Shut down whichever of the above was/is the standby; from Ambari, ensure it is stopped.

Action 3: From Ambari, do a rolling restart of the ZooKeeper quorum; wait until all 3 (or however many you have) are restarted.

Action 4: Execute the below commands sequentially:

```bash
$ hdfs dfsadmin -safemode enter
$ hdfs dfsadmin -saveNamespace
$ hdfs dfsadmin -safemode leave
```

Restart the JournalNode and the above NameNode service; when all is green, you can safely start the standby.
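If you are unsure which service IDs to pass to haadmin, a hedged way to list them ('mycluster' below is a placeholder for your nameservice name):

```bash
# Prints the configured NameNode IDs for the HA nameservice
hdfs getconf -confKey dfs.ha.namenodes.mycluster
```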
04-07-2023
12:47 PM
@Sanchari I suspect the java.io.IOException: Mkdirs failed to create is due to permissions on the edge node. I am assuming the copy is being run as the hdfs user while your edge-node directory belongs to a different user/group. Just for test purposes, can you do the following on the edge node:

```bash
# mkdir -p /some_specific/path/in_edge_server
```

Then run chmod on the destination path:

```bash
# chmod 777 /some_specific/path/in_edge_server
```

Finally, rerun your spark-submit and let me know.
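Before opening the mode up to 777, a hedged check of who owns the directory versus who runs the job:

```bash
# Compare the owner/group of the destination with the submitting user
ls -ld /some_specific/path/in_edge_server
whoami
```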
04-07-2023
01:14 AM
@Sanchari It would be good to share a snippet of your code. Logically, I think you copy FROM --> TO. Below is the function being used:

fs.copyFromLocalFile(new Path(src_HDFSPath), new Path(dest_edgePath))

Disclaimer: I am not a Spark/Python developer.
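One thing worth flagging (hedged, since the full code isn't shown): in the Hadoop FileSystem API, copyFromLocalFile copies local --> HDFS, while copyToLocalFile copies HDFS --> local, so the names src_HDFSPath and dest_edgePath suggest copyToLocalFile may be the intended call. A quick shell sanity check of the direction, reusing the illustrative paths from the earlier posts:

```bash
# HDFS -> edge node (local)
hdfs dfs -get /user/hdfs/data/file.xlsx /some_specific/path/in_edge_server/
# local -> HDFS would be the reverse:
# hdfs dfs -put /local/file.xlsx /user/hdfs/data/
```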