10-05-2020
11:27 PM
In a CDP DC cluster, once you use the AM2CM migration script to migrate all configurations, you will not be able to fetch YARN application logs from the command line using the following command:
$ yarn logs -applicationId application_1601206427676_0011
WARNING: YARN_OPTS has been replaced by HADOOP_OPTS. Using value of YARN_OPTS.
No class configured for IndexedFormat
Can not find any log file matching the pattern: [ALL] for the application: application_1601206427676_0011
Can not find the logs for the application: application_1601206427676_0011 with the appOwner: hive
This issue occurs because the yarn.log-aggregation.file-formats property gets updated to IndexedFormat during the AM2CM migration.
To resolve this issue, change the value to IFile,TFile.
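For reference, the corrected values look roughly like this (a sketch: the two file-controller class properties are the stock Hadoop ones and are listed here as assumptions to show the pairing, so verify them against your CDP version):

yarn.log-aggregation.file-formats=IFile,TFile
# assumed stock Hadoop controller classes; confirm for your CDP release
yarn.log-aggregation.file-controller.IFile.class=org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController
yarn.log-aggregation.file-controller.TFile.class=org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController

The "No class configured for IndexedFormat" line in the output above is YARN failing to find a file controller registered for the migrated IndexedFormat name, which is why switching the format names back to IFile,TFile fixes the lookup.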
Once you change the configuration as shown above, save it and restart the YARN service; you will then be able to fetch the YARN logs from the command line.
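After the restart, the same command should succeed. If the aggregated logs belong to a different user, pass the owner explicitly with the yarn logs CLI's -appOwner option (the application in the example above was run by hive):

$ yarn logs -applicationId application_1601206427676_0011 -appOwner hive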
Reference
Log aggregation properties
Log Aggregation File Controllers
07-26-2018
08:37 PM
The default location for the Spark event log history is hdfs:///spark-history (Spark) and hdfs:///spark2-history/ (Spark2).
Changing this location helps when debugging Spark History Server page-load issues; also, if you have a huge number of event log files, you can archive them by creating a new active location.
Following are the steps to change this default location.
1. Create a new directory on HDFS, for example:
$ hdfs dfs -mkdir /spark2-history_new
$ hdfs dfs -chown spark:hadoop /spark2-history_new
2. Log in to Ambari ==> Spark2 ==> Configs.
3. Update the following parameters with the new path "hdfs:///spark2-history_new/": spark.eventLog.dir and spark.history.fs.logDirectory.
4. Save the configuration.
5. Restart the Spark service to apply the changes.
6. Run a Spark job; the new event log file will be saved in the new location, and you can view it in the Spark History Server UI (see the verification sketch below).
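A quick way to confirm the new location (a sketch; the SparkPi example jar path is the usual HDP one but may differ on your cluster, so treat it as an assumption):

# submit a small test job, then list the new event log directory
$ spark-submit --master yarn --class org.apache.spark.examples.SparkPi \
    /usr/hdp/current/spark2-client/examples/jars/spark-examples*.jar 10
$ hdfs dfs -ls /spark2-history_new/

The listing should show an event log file named after the test job's application id.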
07-05-2018
09:36 PM
After an HDP upgrade, you will observe that all the previous HDP version directories still exist under /usr/hdp/ (including the currently active one). Once the upgrade is finalized, you can clear those older version directories through Ambari's REST API with a curl command, as shown below.
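You can see the accumulated version directories on any cluster node before cleaning up (an illustrative listing; the version numbers are examples only):

$ ls /usr/hdp/
2.6.0.3-8  2.6.1.0-129  current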
1. SSH to the Ambari server and run the following command. (You can use any Linux box on the network that has curl installed and can reach the Ambari console URL.)

curl 'http://<$AMBARI_SERVER>:8080/api/v1/clusters/<$MyClusterName>/requests' \
  -u admin:<Password> \
  -H "X-Requested-By: ambari" \
  -X POST \
  -d '{"RequestInfo": {"context": "remove_previous_stacks", "action": "remove_previous_stacks", "parameters": {"version": "<$Current_Active_version>"}}, "Requests/resource_filters": [{"hosts": "<$ClusterNode1>,<$ClusterNode2>"}]}'
$AMBARI_SERVER => Ambari server name and port. If you are using SSL, use the SSL port number.
$MyClusterName => Enter the name of the cluster whose directories you want to remove.
$Current_Active_version => Enter the current active version. All directories for versions older than this one will be removed.
$ClusterNode1 => Enter your cluster node hostname/IP. (Using the hostname is recommended.)

For example:

curl 'http://myambariserver.test.com:8080/api/v1/clusters/myprodcluster1/requests' \
  -u admin:xxxxxx \
  -H "X-Requested-By: ambari" \
  -X POST \
  -d '{"RequestInfo": {"context": "remove_previous_stacks", "action": "remove_previous_stacks", "parameters": {"version": "2.6.1.0-129"}}, "Requests/resource_filters": [{"hosts": "datanode1.test.com,datanode2.test.com,mstr1.test.com"}]}'

Output:

{
  "href" : "http://myambariserver.test.com:8080/api/v1/clusters/myprodcluster1/requests/830",
  "Requests" : {
    "id" : 830,
    "status" : "Accepted"
  }
}

2. Track the progress for each cluster node from the running operations: log in to the Ambari console and click ops (top corner, near the cluster name). You will see the running operation "remove_previous_stacks". The same request can also be polled from the command line, as shown below.
NOTE: If you have a long list of cluster nodes, you can put the node list in a file and provide the full path of the file to the hosts directive, e.g. "Requests/resource_filters": [{"hosts":"/var/tmp/clusternodes1.txt"}]
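If you prefer to track the request from the command line, you can poll it over Ambari's REST API (a sketch; 830 is the request id returned in the output above, and the host/credentials match the earlier example):

# poll the request created by the remove_previous_stacks call
curl 'http://myambariserver.test.com:8080/api/v1/clusters/myprodcluster1/requests/830' \
  -u admin:xxxxxx -H "X-Requested-By: ambari"

The response reports the request's status and progress as it moves from accepted to completed.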