Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2729 | 04-27-2020 03:48 AM |
| | 5287 | 04-26-2020 06:18 PM |
| | 4455 | 04-26-2020 06:05 PM |
| | 3580 | 04-13-2020 08:53 PM |
| | 5381 | 03-31-2020 02:10 AM |
08-31-2017
08:25 AM
@Anurag Mishra What is the output of the following command?

# file cluster_configs.tar.gz
cluster_configs.tar.gz: gzip compressed data, from FAT filesystem (MS-DOS, OS/2, NT)

If your "file" command output is different from this (for example, it just says "data"), then your binary was not downloaded properly. In that case, try the same link from a browser where you are already logged in to Ambari, or double-check your curl command (delete the previously downloaded file first): http://localhost:8080/api/v1/clusters/Sandbox/components?format=client_config_tar
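A quick way to script that check (a self-contained sketch: a stub archive is generated in place of the real download, so the file name matches the thread but the contents are illustrative):

```shell
# Create a stub archive standing in for the real curl download.
printf 'stub\n' | gzip -c > cluster_configs.tar.gz

# `file` should report "gzip compressed data" for a good download.
if command -v file >/dev/null; then
    file cluster_configs.tar.gz
fi

# `gzip -t` tests integrity without extracting; a non-zero exit means the
# download is corrupt (e.g. an HTML login page or truncated data).
gzip -t cluster_configs.tar.gz && echo "valid gzip archive"
```

If `gzip -t` fails on the real download, delete the file and fetch it again as described above.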
08-31-2017
06:59 AM
1 Kudo
@Anurag Mishra Please do not use the "-iv" option with the curl command, as it causes additional junk data (the HTTP response headers) to be written into the tar.gz file. Please try the following command instead:

# curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/Sandbox/components?format=client_config_tar -o /tmp/All_Configs/cluster_configs.tar.gz

Output:

[root@sandbox All_Configs]# curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/Sandbox/components?format=client_config_tar -o /tmp/All_Configs/cluster_configs.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 48887 0 48887 0 0 21626 0 --:--:-- 0:00:02 --:--:-- 21631
[root@sandbox All_Configs]# ls
cluster_configs.tar.gz
[root@sandbox All_Configs]# tar -xvf cluster_configs.tar.gz
SPARK2_CLIENT/./
SPARK2_CLIENT/spark-defaults.conf
SPARK2_CLIENT/spark-env.sh
SPARK2_CLIENT/spark-log4j.properties
SPARK2_CLIENT/spark-metrics.properties
TEZ_CLIENT/./
TEZ_CLIENT/tez-site.xml
TEZ_CLIENT/tez-env.sh
SLIDER/./
SLIDER/slider-client.xml
SLIDER/hdfs-site.xml
SLIDER/yarn-site.xml
SLIDER/core-site.xml
SLIDER/slider-env.sh
SLIDER/log4j.properties
OOZIE_CLIENT/./
OOZIE_CLIENT/oozie-site.xml
OOZIE_CLIENT/oozie-log4j.properties
OOZIE_CLIENT/oozie-env.sh
SPARK_CLIENT/./
SPARK_CLIENT/spark-defaults.conf
SPARK_CLIENT/spark-env.sh
SPARK_CLIENT/spark-log4j.properties
SPARK_CLIENT/spark-metrics.properties
HDFS_CLIENT/./
HDFS_CLIENT/hdfs-site.xml
HDFS_CLIENT/core-site.xml
HDFS_CLIENT/log4j.properties
HDFS_CLIENT/hadoop-env.sh
FALCON_CLIENT/./
FALCON_CLIENT/falcon-env.sh
FALCON_CLIENT/runtime.properties
FALCON_CLIENT/startup.properties
HBASE_CLIENT/./
HBASE_CLIENT/hbase-policy.xml
HBASE_CLIENT/log4j.properties
HBASE_CLIENT/hbase-site.xml
HBASE_CLIENT/hbase-env.sh
INFRA_SOLR_CLIENT/./
INFRA_SOLR_CLIENT/log4j.properties
ZOOKEEPER_CLIENT/./
ZOOKEEPER_CLIENT/zookeeper-env.sh
ZOOKEEPER_CLIENT/log4j.properties
YARN_CLIENT/./
YARN_CLIENT/yarn-site.xml
YARN_CLIENT/yarn-env.sh
YARN_CLIENT/core-site.xml
YARN_CLIENT/log4j.properties
YARN_CLIENT/capacity-scheduler.xml
SQOOP/./
SQOOP/sqoop-env.sh
SQOOP/sqoop-site.xml
PIG/./
PIG/pig.properties
PIG/pig-env.sh
PIG/log4j.properties
MAPREDUCE2_CLIENT/./
MAPREDUCE2_CLIENT/core-site.xml
MAPREDUCE2_CLIENT/mapred-env.sh
MAPREDUCE2_CLIENT/mapred-site.xml
ATLAS_CLIENT/./
ATLAS_CLIENT/application.properties
ATLAS_CLIENT/atlas-log4j.xml
ATLAS_CLIENT/atlas-solrconfig.xml
ATLAS_CLIENT/atlas-env.sh
HIVE_CLIENT/./
HIVE_CLIENT/hive-site.xml
HIVE_CLIENT/hive-log4j.properties
HIVE_CLIENT/hive-exec-log4j.properties
HIVE_CLIENT/hive-env.sh
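To see why the "-i" flag ruins the archive, here is a small simulation (file names are made up): prepending HTTP response headers to otherwise-valid gzip data breaks the gzip magic bytes at the start of the file.

```shell
# A valid gzip file passes `gzip -t`.
printf 'hello\n' | gzip -c > clean.tar.gz
gzip -t clean.tar.gz && echo "clean archive: OK"

# `curl -i` writes the response headers into the output file ahead of the
# body, which is what we simulate here.
{ printf 'HTTP/1.1 200 OK\r\nContent-Type: application/x-ustar\r\n\r\n'
  cat clean.tar.gz; } > corrupt.tar.gz
gzip -t corrupt.tar.gz 2>/dev/null || echo "archive with headers prepended: BROKEN"
```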
08-30-2017
04:36 PM
@Anurag Mishra You can try the following API call. I tested it on Ambari 2.5.1, which provides this option for downloading configs.

# mkdir /tmp/All_Configs
# cd /tmp/All_Configs/
# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/Sandbox/components?format=client_config_tar -o /tmp/All_Configs/cluster_configs.tar.gz
Now you can extract the "/tmp/All_Configs/cluster_configs.tar.gz" file and find most of the configs inside. I found the following configs inside the mentioned gz file (listing of Downloads/Sandbox\(CLUSTER\)-configs):
| | | |____Sandbox(CLUSTER)-configs
| | | | |____.DS_Store
| | | | |____ATLAS_CLIENT
| | | | | |____application.properties
| | | | | |____atlas-env.sh
| | | | | |____atlas-log4j.xml
| | | | | |____atlas-solrconfig.xml
| | | | |____FALCON_CLIENT
| | | | | |____falcon-env.sh
| | | | | |____runtime.properties
| | | | | |____startup.properties
| | | | |____HBASE_CLIENT
| | | | | |____hbase-env.sh
| | | | | |____hbase-policy.xml
| | | | | |____hbase-site.xml
| | | | | |____log4j.properties
| | | | |____HDFS_CLIENT
| | | | | |____core-site.xml
| | | | | |____hadoop-env.sh
| | | | | |____hdfs-site.xml
| | | | | |____log4j.properties
| | | | |____HIVE_CLIENT
| | | | | |____hive-env.sh
| | | | | |____hive-exec-log4j.properties
| | | | | |____hive-log4j.properties
| | | | | |____hive-site.xml
| | | | |____INFRA_SOLR_CLIENT
| | | | | |____log4j.properties
| | | | |____MAPREDUCE2_CLIENT
| | | | | |____core-site.xml
| | | | | |____mapred-env.sh
| | | | | |____mapred-site.xml
| | | | |____OOZIE_CLIENT
| | | | | |____oozie-env.sh
| | | | | |____oozie-log4j.properties
| | | | | |____oozie-site.xml
| | | | |____PIG
| | | | | |____log4j.properties
| | | | | |____pig-env.sh
| | | | | |____pig.properties
| | | | |____SLIDER
| | | | | |____core-site.xml
| | | | | |____hdfs-site.xml
| | | | | |____log4j.properties
| | | | | |____slider-client.xml
| | | | | |____slider-env.sh
| | | | | |____yarn-site.xml
| | | | |____SPARK2_CLIENT
| | | | | |____spark-defaults.conf
| | | | | |____spark-env.sh
| | | | | |____spark-log4j.properties
| | | | | |____spark-metrics.properties
| | | | |____SPARK_CLIENT
| | | | | |____spark-defaults.conf
| | | | | |____spark-env.sh
| | | | | |____spark-log4j.properties
| | | | | |____spark-metrics.properties
| | | | |____SQOOP
| | | | | |____sqoop-env.sh
| | | | | |____sqoop-site.xml
| | | | |____TEZ_CLIENT
| | | | | |____tez-env.sh
| | | | | |____tez-site.xml
| | | | |____YARN_CLIENT
| | | | | |____capacity-scheduler.xml
| | | | | |____core-site.xml
| | | | | |____log4j.properties
| | | | | |____yarn-env.sh
| | | | | |____yarn-site.xml
| | | | |____ZOOKEEPER_CLIENT
| | | | | |____log4j.properties
| | | | | |____zookeeper-env.sh
08-30-2017
03:56 PM
@Anurag Mishra You should add an extra pair of single quotes around each variable, i.e. write "'${password}'" and "'${username}'". Example:

# export username=test112233
# export password=test112233_password
# export Ambariuser=admin
# export ambaripass=admin
# curl -iv -u $Ambariuser:$ambaripass -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"'${username}'","Users/password":"'${password}'","Users/active":"true","Users/admin":"false"}' http://localhost:8080/api/v1/users
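A minimal sketch of why the extra quotes matter: the first ' ends the single-quoted string, the shell expands ${username} outside of it, and the next ' resumes the quoted string.

```shell
username=test112233
password=test112233_password

# Inside single quotes nothing is expanded -- $username stays literal:
echo '{"Users/user_name":"$username"}'
# -> {"Users/user_name":"$username"}

# Quote-splicing: '...'${username}'...' expands the variable between the
# two single-quoted pieces:
echo '{"Users/user_name":"'${username}'","Users/password":"'${password}'"}'
# -> {"Users/user_name":"test112233","Users/password":"test112233_password"}
```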
08-30-2017
11:34 AM
@Anurag Mishra Now I see what you mean by "$username" not being replaced. This happens because the shell does not substitute environment variables inside a raw string placed in single quotes; everything there is treated literally as part of the data string: -d '{"Users/user_name":"$username","Users/password":"$password","Users/active":"true","Users/admin":"false"}' Please see the difference (the single quotes are removed in the second command):
[root@sandbox ~]# echo '{"Users/user_name":"$username","Users/password":"$password","Users/active":"true","Users/admin":"false"}'
{"Users/user_name":"$username","Users/password":"$password","Users/active":"true","Users/admin":"false"}
[root@sandbox ~]# echo {"Users/user_name":"$username","Users/password":"$password","Users/active":"true","Users/admin":"false"}
Users/user_name:test112233 Users/password:test112233_password Users/active:true Users/admin:false .
08-30-2017
11:28 AM
@Anurag Mishra Please export the variables properly as follows; it is working fine at my end:

[root@sandbox ~]# export username=test112233
[root@sandbox ~]# export password=test112233_password
[root@sandbox ~]# export Ambariuser=admin
[root@sandbox ~]# export ambaripass=admin
[root@sandbox ~]#
[root@sandbox ~]# echo "curl -iv -u $Ambariuser:$ambaripass -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"$username","Users/password":"$password","Users/active":"true","Users/admin":"false"}' http://localhost:8080/api/v1/users"
Example output:

[root@sandbox ~]# curl -iv -u $Ambariuser:$ambaripass -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"$username","Users/password":"$password","Users/active":"true","Users/admin":"false"}' http://localhost:8080/api/v1/users
* About to connect() to localhost port 8080 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8080 (#0)
* Server auth using Basic with user 'admin'
> POST /api/v1/users HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 104
> Content-Type: application/x-www-form-urlencoded
>
< HTTP/1.1 201 Created
HTTP/1.1 201 Created
< X-Frame-Options: DENY
X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
X-Content-Type-Options: nosniff
< Cache-Control: no-store
Cache-Control: no-store
< Pragma: no-cache
Pragma: no-cache
< Set-Cookie: AMBARISESSIONID=prd2vhf93mq517gq7ashfb9zw;Path=/;HttpOnly
Set-Cookie: AMBARISESSIONID=prd2vhf93mq517gq7ashfb9zw;Path=/;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
Expires: Thu, 01 Jan 1970 00:00:00 GMT
< User: admin
User: admin
< Content-Type: text/plain
Content-Type: text/plain
< Content-Length: 0
Content-Length: 0
< Server: Jetty(8.1.19.v20160209)
Server: Jetty(8.1.19.v20160209)
<
* Connection #0 to host localhost left intact
* Closing connection #0
08-30-2017
11:25 AM
@Anurag Mishra Before running the actual curl command, first check whether the values are being substituted properly. You can do so by putting an "echo" before the curl command, as follows:

# echo "curl -iv -u $Ambariuser:$ambaripass -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"$username","Users/password":"$password","Users/active":"true","Users/admin":"false"}' http://localhost:8080/api/v1/users"
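One common slip this dry-run catches is an unset variable, for example accidentally writing http://$localhost:8080 instead of http://localhost:8080: the shell silently expands it to an empty string. A hedged sketch (variable names mirror the thread) using `set -u` to make that loud:

```shell
Ambariuser=admin
ambaripass=admin

# The echo dry-run shows exactly what curl would receive:
echo "curl -iv -u $Ambariuser:$ambaripass -X POST http://localhost:8080/api/v1/users"

# Under `set -u` (run in a subshell so the option does not leak out),
# referencing the unset $localhost aborts instead of expanding to "":
( set -u; echo "http://${localhost}:8080/api/v1/users" ) 2>/dev/null \
  || echo "caught unset variable: localhost"
```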
08-29-2017
04:07 PM
@Dominik Ludwig You can extract more information from the host if you do not apply the fields filter: # curl -sH "X-Requested-By: ambari" -u admin:admin http://$AMBARI_SERVER:8080/api/v1/hosts/$HOST_TO_MONITOR Please replace "$HOST_TO_MONITOR" and "$AMBARI_SERVER" with the appropriate FQDNs (hostnames).
08-29-2017
03:58 PM
@Dominik Ludwig If the ambari-agent is also running on the ambari-server host, then you can extract the information using the following kind of API call for any host in the cluster (including the ambari server host): # curl -sH "X-Requested-By: ambari" -u admin:admin http://localhost:8080/api/v1/hosts/sandbox.hortonworks.com?fields=Hosts/disk_info,Hosts/rack_info,Hosts/host_name,Hosts/cpu_count,Hosts/ph_cpu_count,Hosts/ip,Hosts/total_mem,Hosts/os_arch,Hosts/os_type Example output: # curl -sH "X-Requested-By: ambari" -u admin:admin http://localhost:8080/api/v1/hosts/sandbox.hortonworks.com?fields=Hosts/disk_info,Hosts/rack_info,Hosts/host_name,Hosts/cpu_count,Hosts/ph_cpu_count,Hosts/ip,Hosts/total_mem,Hosts/os_arch,Hosts/os_type
{
"href" : "http://localhost:8080/api/v1/hosts/sandbox.hortonworks.com?fields=Hosts/disk_info,Hosts/rack_info,Hosts/host_name,Hosts/cpu_count,Hosts/ph_cpu_count,Hosts/ip,Hosts/total_mem,Hosts/os_arch,Hosts/os_type",
"Hosts" : {
"cluster_name" : "Sandbox",
"cpu_count" : 4,
"disk_info" : [
{
"available" : "12288100",
"device" : "overlay",
"used" : "30125548",
"percent" : "72%",
"size" : "44707764",
"type" : "overlay",
"mountpoint" : "/"
},
{
"available" : "12288100",
"device" : "/dev/sda3",
"used" : "30125548",
"percent" : "72%",
"size" : "44707764",
"type" : "ext4",
"mountpoint" : "/hadoop"
}
],
"host_name" : "sandbox.hortonworks.com",
"ip" : "172.17.0.2",
"os_arch" : "x86_64",
"os_type" : "centos6",
"ph_cpu_count" : 4,
"rack_info" : "/default-rack",
"total_mem" : 8174640
}
}

Please replace the "localhost" in "http://localhost:8080/api/v1/hosts/sandbox.hortonworks.com" with your Ambari server hostname, and replace "sandbox.hortonworks.com" with any host of your Ambari cluster that has the ambari-agent installed.
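To pull a single field out of that JSON response in a script, `jq` is the usual tool; if it is not available, a grep/sed one-liner works for simple flat fields. A self-contained sketch (response.json is a stub of the output above, not a real API call):

```shell
# Stub of the hosts API response shown above.
cat > response.json <<'EOF'
{
  "Hosts" : {
    "cpu_count" : 4,
    "os_type" : "centos6",
    "total_mem" : 8174640
  }
}
EOF

# Extract the numeric value of total_mem; the pattern matches the
# `"key" : value` layout Ambari uses in its pretty-printed JSON.
grep -o '"total_mem" : [0-9]*' response.json | sed 's/.*: //'
# -> 8174640
```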
08-29-2017
09:29 AM
1 Kudo
Many times we see repeated logging inside our log files. For example, in ambari-server.log we may see the following kind of message logged again and again:

WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.ambari.server.api.services.StacksService.getStackArtifacts(java.lang.String,javax.ws.rs.core.HttpHeaders,javax.ws.rs.core.UriInfo,java.lang.String,java.lang.String), should not consume any entity.

# grep -c 'public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RequestService.getRequests' /var/log/ambari-server/ambari-server.log
150

These are actually harmless WARNING messages, but it is often desirable to keep them out of the log; that saves disk space and gives a clean log. Changing the rootLogger to "ERROR" as below is not always an option, because it would also suppress other useful INFO/WARNING messages:

log4j.rootLogger=ERROR,file

To avoid logging a few specific entries based on a string match, irrespective of the logging level (INFO/WARNING/ERROR/DEBUG) they come from (for example, any line containing "public javax.ws.rs.core.Response"), we can use the StringMatchFilter feature of log4j as follows:

Step-1). Edit "/etc/ambari-server/conf/log4j.properties" and add the following 3 lines just below the "file" log appender:

log4j.appender.file.filter.01=org.apache.log4j.varia.StringMatchFilter
log4j.appender.file.filter.01.StringToMatch=public javax.ws.rs.core.Response
log4j.appender.file.filter.01.AcceptOnMatch=false

Now the log4j.properties "file" log appender will look like the following:

# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ambari.log.dir}/${ambari.log.file}
log4j.appender.file.MaxFileSize=80MB
log4j.appender.file.MaxBackupIndex=60
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %5p [%t] %c{1}:%L - %m%n
log4j.appender.file.filter.01=org.apache.log4j.varia.StringMatchFilter
log4j.appender.file.filter.01.StringToMatch=public javax.ws.rs.core.Response
log4j.appender.file.filter.01.AcceptOnMatch=false

NOTE: We can use as many filters as we want; we only need to change the filter number ("log4j.appender.file.filter.01", "log4j.appender.file.filter.02", "log4j.appender.file.filter.03", ...) and give each one a different "StringToMatch" value.

Step-2). Move the old ambari-server logs and restart the ambari-server:

# mv /var/log/ambari-server /var/log/ambari-server_OLD
# ambari-server restart

Step-3). Tail the ambari-server.log after the restart and verify that the matching entries are gone; the following grep should now return nothing:

# grep 'public javax.ws.rs.core.Response org.apache.ambari.server.api.services.RequestService.getRequests' /var/log/ambari-server/ambari-server.log
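A quick way to sanity-check Step-1 is to count the filter keys after editing. The sketch below writes a trimmed stand-in for the real /etc/ambari-server/conf/log4j.properties so it can run anywhere:

```shell
# Trimmed stand-in for the edited "file" appender configuration.
cat > log4j.properties <<'EOF'
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.filter.01=org.apache.log4j.varia.StringMatchFilter
log4j.appender.file.filter.01.StringToMatch=public javax.ws.rs.core.Response
log4j.appender.file.filter.01.AcceptOnMatch=false
EOF

# All three filter lines must be present for the filter to take effect.
grep -c '^log4j\.appender\.file\.filter\.01' log4j.properties
# -> 3
```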