Access the Spark UI: Open the Spark UI for the application in your web browser. A running application serves the UI from the driver; completed applications are available through the Spark History Server.
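If you are unsure of the UI's address, a running session can report it. A minimal PySpark sketch, assuming a live SparkSession named spark in the current process:

```python
from pyspark.sql import SparkSession

# Assumes a SparkSession is already running in this process.
spark = SparkSession.builder.getOrCreate()

# uiWebUrl reports the address of this application's live Spark UI,
# typically http://<driver-host>:4040. Finished applications are served
# by the History Server instead, commonly on port 18080.
print(spark.sparkContext.uiWebUrl)
```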
Identify Nodes: Open the Executors tab to see the driver and every executor, along with the host each one runs on; these hosts are where the corresponding logs live.
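The same information is exposed programmatically through Spark's monitoring REST API. A sketch in which the host and application ID are placeholders to substitute with your own values:

```python
import requests

# Placeholder values; substitute your driver host and application ID.
ui_base = "http://driver-host:4040"
app_id = "application_1700000000000_0001"

# The executors endpoint lists the driver and all executors, including
# the host:port each one runs on, which tells you where to look for logs.
resp = requests.get(f"{ui_base}/api/v1/applications/{app_id}/executors")
resp.raise_for_status()
for executor in resp.json():
    print(executor["id"], executor["hostPort"])
```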
Determine Log Directory: Within the Spark UI, open the Environment tab and locate the value of the yarn.nodemanager.log-dirs property among the Hadoop properties (or check yarn-site.xml on the cluster). This property specifies the base directory on each node where YARN writes container logs, including those of the Spark containers.
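The value can also be read from a running session via the driver's Hadoop configuration. This sketch relies on _jsc, an internal PySpark handle, and reflects the configuration as the driver sees it; individual NodeManagers may be configured differently, so treat the result as a starting point rather than ground truth:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# _jsc is an internal handle to the JVM SparkContext; hadoopConfiguration()
# returns the Hadoop Configuration object visible to the driver.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
print(hadoop_conf.get("yarn.nodemanager.log-dirs"))
```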
Access Log Location: Using a terminal or SSH, log in to the relevant node (driver or executor) where the logs you need are located.
Navigate to Application Log Directory: Within the yarn.nodemanager.log-dirs directory, access the subdirectory for the specific application using the pattern application_${appid}, where ${appid} is the unique application ID of the Spark job.
Find Container Logs: Within the application directory, locate the individual container log directories named container_${contid}, where ${contid} is the container ID.
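Putting the two path components together, a small sketch that lists the container log directories for an application on the node you are logged in to (the log directory and application ID below are placeholders):

```python
import glob
import os

# Placeholder values; substitute the yarn.nodemanager.log-dirs value and
# the application ID found in the earlier steps.
log_dir = "/var/log/hadoop-yarn/containers"
app_id = "application_1700000000000_0001"

# Each container that ran on this node has its own log subdirectory.
pattern = os.path.join(log_dir, app_id, "container_*")
for container_dir in sorted(glob.glob(pattern)):
    print(container_dir)
```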
Review Log Files: Each container directory contains the log files generated by that container, typically stdout and stderr (and, depending on the logging configuration, additional files such as syslog).
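If YARN log aggregation is enabled (yarn.log-aggregation-enable), you can skip the per-node navigation entirely and fetch all container logs for a finished application with the yarn CLI. A sketch with a placeholder application ID:

```python
import subprocess

# Placeholder application ID; substitute your own. With log aggregation
# enabled, the yarn CLI collects every container's logs in one call,
# avoiding a manual login to each node.
app_id = "application_1700000000000_0001"
subprocess.run(["yarn", "logs", "-applicationId", app_id], check=True)
```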