Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2287 | 12-06-2018 12:25 PM |
| 2341 | 11-27-2018 06:00 PM |
| 1814 | 11-22-2018 03:42 PM |
| 2881 | 11-20-2018 02:00 PM |
| 5230 | 11-19-2018 03:24 PM |
03-27-2018
04:14 PM
@Michael Bronson, My bad, a space was missing between the header and '-X PUT'. Use this:
curl -u $USER:$PASSWD -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"_PARSE_.STOP.AMBARI_METRICS","operation_level":{"level":"SERVICE","cluster_name":"hdp","service_name":"AMBARI_METRICS"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/hdp/services/AMBARI_METRICS
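If you would rather verify the result without opening the Ambari UI, you can poll the Requests resource for the operation's status. A minimal sketch using Python's requests library, assuming the cluster name 'hdp' from the call above and admin credentials (adjust both for your environment):
import requests

# Assumed values matching the curl call above; adjust for your cluster.
BASE = "http://localhost:8080/api/v1/clusters/hdp"
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}

# List recent requests with their status (COMPLETED, FAILED, IN_PROGRESS, ...)
resp = requests.get(
    BASE + "/requests?fields=Requests/id,Requests/request_context,Requests/request_status",
    auth=AUTH, headers=HEADERS)
resp.raise_for_status()
for item in resp.json().get("items", []):
    info = item["Requests"]
    print(info["id"], info["request_status"], info["request_context"])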
03-27-2018
03:46 PM
1 Kudo
@Michael Bronson, Try this:
curl -u $USER:$PASSWD -i -H 'X-Requested-By: ambari'-X PUT -d '{"RequestInfo":{"context":"_PARSE_.STOP.AMBARI_METRICS","operation_level":{"level":"SERVICE","cluster_name":"hdp","service_name":"AMBARI_METRICS"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/hdp/services/AMBARI_METRICS
Make sure the stop operation has not failed. You can check the operations in the Ambari UI.
Thanks,
Aditya
03-27-2018
02:05 PM
@Michael Bronson, This is not supported by the script by default, but you can add one line to the script to make it work. Assume you want to store the output in /tmp/myconfigs:
mkdir /tmp/myconfigs
chmod 777 /tmp/myconfigs
Then modify the script: find the function 'output_to_file' and add the line below.
filename = os.path.join('/tmp','myconfigs',filename)
The function should look like the following. Make sure the indentation is correct; Python is strict about indentation.
def output_to_file(filename):
    filename = os.path.join('/tmp', 'myconfigs', filename)
    def output(config):
        with open(filename, 'w') as out_file:
            json.dump(config, out_file, indent=2)
    return output
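As a side note on why this works: output_to_file returns a closure, so the joined path is captured once and reused whenever the inner function is called. A standalone illustration (the file name and config dict here are hypothetical, purely to show the shape):
import json
import os

def output_to_file(filename):
    filename = os.path.join('/tmp', 'myconfigs', filename)
    def output(config):
        with open(filename, 'w') as out_file:
            json.dump(config, out_file, indent=2)
    return output

# Hypothetical usage: writes /tmp/myconfigs/sample-config.json
writer = output_to_file('sample-config.json')
writer({'dfs.replication': '3'})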
03-27-2018
01:22 PM
@Michael Bronson, There is no flag with which the file creation can be disabled. If you want to write to an existing valid file, you can pass the file name using the -f option.
-Aditya
03-27-2018
08:38 AM
5 Kudos
In this article we will see how to produce messages using a simple Python script, consume them with the ConsumeMQTT processor, and put them into HDFS using PutHDFS.
Note: I'm using CentOS 7 and HDP 2.6.3 for this article.

1) Install MQTT
sudo yum -y install epel-release
sudo yum -y install mosquitto

2) Start MQTT
sudo systemctl start mosquitto
sudo systemctl enable mosquitto

3) Install the paho-mqtt Python library
yum install python-pip
pip install paho-mqtt

4) Configure an MQTT password for the user. I have created a sample user 'aditya' and set the password to 'test'.
[root@test-instance-4 ~]# useradd aditya
[root@test-instance-4 ~]# sudo mosquitto_passwd -c /etc/mosquitto/passwd aditya
Password:
Reenter password:

5) Disable anonymous login to MQTT. Open the file /etc/mosquitto/mosquitto.conf, add the entries below, and restart mosquitto.
allow_anonymous false
password_file /etc/mosquitto/passwd
sudo systemctl restart mosquitto

6) Design the NiFi flow to consume messages and put them into HDFS.
Configure the ConsumeMQTT processor: Right click on ConsumeMQTT -> Configure -> Properties. Set Broker URI, Client ID, username, password, Topic Filter and Max Queue Size.
Configure the PutHDFS processor: Set Hadoop Configuration Resources and Directory (where the messages will be stored).

7) Create a sample Python script to publish messages. Use the attached mqttpublish.txt and rename it to MQTTPublish.py. (A sketch of such a script appears at the end of this post.)

8) Run the NiFi flow.

9) Run the attached Python script.
python MQTTPublish.py

10) Check the directory to verify that the messages were put into HDFS.
hdfs dfs -ls /user/aditya/
hdfs dfs -cat /user/aditya/*

Hope this helps 🙂
mqttpublish.txt
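Since the attachment itself is not reproduced inline, here is a minimal publisher along the same lines using paho-mqtt. The broker host, port, and topic name ('test/topic') are assumptions on my part; match them to your ConsumeMQTT settings:
import time
import paho.mqtt.client as mqtt

# Assumed connection details; the attached MQTTPublish.py may differ.
client = mqtt.Client()
client.username_pw_set('aditya', 'test')   # user created in step 4
client.connect('localhost', 1883, 60)
client.loop_start()

# Publish a few sample messages to a hypothetical topic
for i in range(10):
    client.publish('test/topic', 'sample message %d' % i, qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()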
03-27-2018
07:36 AM
@Anurag Mishra, 1) You can run the below command:
oozie job -oozie http://<host>:11000/oozie/ -log {oozieJobId}
2) Yes. The logs will also be saved under the /var/log/oozie directory.
-Aditya
03-22-2018
01:46 PM
@heta desai, 1) You can appear for this exam through your company ID. Once you clear the exam you will get an email at your company ID containing a digital badge, which you can add to your LinkedIn profile. You can save this link; it will remain valid even if you leave the company.
2) There is no expiration for this certificate.
3) HCA is not a prerequisite for HDPCD.
Please accept the answer if this helps 🙂
-Aditya
03-21-2018
05:37 PM
@Michael Bronson, I think it is a good idea to check that no component is in maintenance mode. Ambari recommends that all services be up and running, and that all service checks pass, before the upgrade.
03-21-2018
05:09 PM
1 Kudo
@Michael Bronson, You can use this API to check the maintenance mode of components on a particular host.
To get the maintenance_state of all components:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/sys76/hosts/{hostname}/host_components?fields=HostRoles/maintenance_state
To get components which have maintenance mode ON:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/sys76/hosts/{hostname}/host_components?HostRoles/maintenance_state=ON
To get components which have maintenance mode OFF:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://localhost:8080/api/v1/clusters/sys76/hosts/{hostname}/host_components?HostRoles/maintenance_state=OFF
Note: Replace {hostname} with the actual hostname in the curl calls.
-Aditya
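If you want to script this check rather than eyeball the JSON, the same filtered call works from Python's requests library. A small sketch, assuming the cluster name 'sys76' and admin credentials from the curl calls above (the hostname is a placeholder):
import requests

# Placeholder hostname; use a real host from your cluster.
URL = ("http://localhost:8080/api/v1/clusters/sys76/hosts/"
       "host1.example.com/host_components"
       "?HostRoles/maintenance_state=ON")

resp = requests.get(URL, auth=("admin", "admin"),
                    headers={"X-Requested-By": "ambari"})
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["HostRoles"]["component_name"])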
03-20-2018
05:25 PM
@Sedat Kestepe, You can add it from Ambari. Go to Ambari -> HDFS -> Configs -> Advanced -> Custom hdfs-site and add the key dfs.namenode.fs-limits.max-directory-items. If you set it to 0, the check will be disabled. Ambari will take care of pushing the config to all the nodes on restart.
-Aditya