Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2225 | 12-06-2018 12:25 PM
 | 2281 | 11-27-2018 06:00 PM
 | 1777 | 11-22-2018 03:42 PM
 | 2837 | 11-20-2018 02:00 PM
 | 5142 | 11-19-2018 03:24 PM
01-10-2018
09:35 AM
@Amogh Suman Can you please share a screenshot of "Hive View 2.0" where it is hanging? Also please share the "HiveServer2.log" and the View logs, which you can find here:
# less /var/log/hive/hiveserver2.log
# less /var/log/ambari-server/hive-next-view/hive-view.log
# less /var/log/ambari-server/hive20-view/hive20-view.log
Also please run the "Hive Service Check" once to see if everything is OK from the Hive side. If needed, please restart the "Hive" service.
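If it helps, the same "Hive Service Check" can also be triggered from the command line; here is a minimal sketch via the Ambari REST API (the host, port, cluster name and admin:admin credentials below are placeholders):
# Sketch: start the Hive service check through the Ambari API instead of the UI.
curl -ivk -H "X-Requested-By: ambari" -u admin:admin -X POST \
  -d '{"RequestInfo":{"context":"Hive Service Check (API)","command":"HIVE_SERVICE_CHECK"},"Requests/resource_filters":[{"service_name":"HIVE"}]}' \
  http://ambari.example.com:8080/api/v1/clusters/MyCluster/requests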
01-05-2018
03:11 PM
@Jay Kumar SenSharma Regarding reinstalling the HBase service: how can I remove the HBase service from the Ambari UI, and how can I reinstall it cleanly? Thanks
01-04-2018
06:31 PM
1 Kudo
@Michael Bronson, You can use these curl calls to run all the service checks and check their status.
To run the service checks:
curl -ivk -H "X-Requested-By: ambari" -u {ambari-username}:{ambari-password} -X POST -d @payload.txt http://{ambari-server}:{ambari-port}/api/v1/clusters/{cluster-name}/request_schedules
Sample response:
{
"resources": [
{
"href": "http://<ambari-server>:8080/api/v1/clusters/<clustername>/request_schedules/68",
"RequestSchedule": {
"id": 68 // This is the request-schedule-id to be used for second call
}
}
]
}
Note: Download the attached payload.txt to some folder and run the above command from the same folder.
To get the status of the service checks:
curl -ivk -H "X-Requested-By: ambari" -u {ambari-username}:{ambari-password} -X GET http://{ambari-server}:{ambari-port}/api/v1/clusters/{cluster-name}/request_schedules/{request-schedule-id}
To get the status of each service, iterate through the batch_requests array in the response and look for 'request_status' inside each object: COMPLETED means the check passed, FAILED means it failed, and ABORTED means the service check was aborted. (Attachment: payload.txt)
Note: The request-schedule-id for the second curl call is obtained from the response of the first call (see the end-to-end sketch below). Thanks, Aditya
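For reference, here is a minimal end-to-end sketch of the two calls above (host, port, cluster name and credentials are placeholders; grep is only used to pull the id and statuses out of the JSON, and the exact response layout can vary between Ambari versions):
#!/usr/bin/env bash
# Sketch: submit payload.txt, capture the request-schedule-id, then poll the statuses.
BASE="http://ambari.example.com:8080/api/v1/clusters/MyCluster/request_schedules"
# 1. Submit the bundled service checks (run from the folder containing payload.txt).
ID=$(curl -sk -H "X-Requested-By: ambari" -u admin:admin -X POST -d @payload.txt "$BASE" \
  | grep -oE '"id" *: *[0-9]+' | head -1 | grep -oE '[0-9]+')
echo "request-schedule-id: $ID"
# 2. Check the status of each batch request (COMPLETED / FAILED / ABORTED).
curl -sk -H "X-Requested-By: ambari" -u admin:admin -X GET "$BASE/$ID" \
  | grep -oE '"request_status" *: *"[A-Z]+"'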
05-16-2018
06:48 PM
This can happen if spark1 and spark2 are both running on the same node. Try to kill the process, then delete the service and add it to a separate node. That should work.
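As a quick sketch, you can first confirm which process is holding the conflicting port before killing it (18081 is only an assumed example port; use the one from the bind error in your Spark logs):
# Find the process bound to the conflicting port, then stop it.
netstat -tnlp | grep 18081     # or: lsof -i :18081
ps -ef | grep -i spark         # confirm it is the leftover spark1/spark2 daemon
kill <pid>                     # <pid> is the process id reported above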
06-24-2018
07:56 PM
@Dassi Jean Fongang Unfortunately there is no FORCE command for decommissioning in Hadoop. Once you have the host in the excludes file, running the yarn rmadmin -refreshNodes command should trigger the decommissioning. It isn't recommended or good architecture to have a NameNode and a DataNode on the same host (master and slave/worker respectively); with over 24 nodes you should have planned 3 to 5 master nodes and kept strictly the DataNode, NodeManager and e.g. the ZK client on the slave (worker) nodes. Moving the NameNode to a new node and then running the decommissioning will make your work easier and isolate your master processes from the slaves; this is the ONLY solution I see left for you. HTH
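For reference, a sketch of the standard (non-forced) decommission flow; the exclude file paths below come from yarn.resourcemanager.nodes.exclude-path and dfs.hosts.exclude and the hostname is a placeholder, so adjust both for your cluster:
echo "worker05.example.com" >> /etc/hadoop/conf/yarn.exclude
yarn rmadmin -refreshNodes     # NodeManager goes to DECOMMISSIONING / DECOMMISSIONED
echo "worker05.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes    # DataNode starts replicating its blocks to other hosts
yarn node -list -all           # verify the NodeManager state
hdfs dfsadmin -report          # verify DataNode decommission progress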
12-15-2017
06:01 PM
I tried your suggestion and it seems to work when the transport mode is binary, but it does not when the transport mode is http. Is this a bug that needs to be reported, or is there a different config for the http transport mode?
12-14-2017
12:36 PM
2 Kudos
I tried this and it returns only the queried table:
curl -X GET \
'http://sandbox.hortonworks.com:21000/api/atlas/v2/search/dsl?typeName=hive_table&query=where%20name%3D%22asteroids%22' \
-H 'authorization: Basic YWRtaW46YWRtaW4='
Since the thread was long, I put the correct answer separately. You will find an "Accept" button beside this answer; please click it to mark this as the best answer. Thanks a lot.
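As a side note, here is a sketch of the same DSL search without encoding the query by hand, letting curl do the URL-encoding (same sandbox host, port and admin:admin credentials as above):
curl -G -u admin:admin \
'http://sandbox.hortonworks.com:21000/api/atlas/v2/search/dsl' \
--data-urlencode 'typeName=hive_table' \
--data-urlencode 'query=where name="asteroids"'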
12-05-2017
08:21 PM
1 Kudo
@Michael Bronson You can try the following approach to completely remove the unwanted host from your Ambari database.
0. Stop ambari-server:
# ambari-server stop
1. Take a fresh Ambari DB dump for safety and backup.
2. Now run the following SQL queries inside your Ambari DB to delete the unwanted host. Replace "unwanted1.host.com" with your unwanted hostname and, similarly, "351" with the host_id that you want to remove from your database.
delete from execution_command where task_id in (select task_id from host_role_command where host_id in (351));
delete from host_version where host_id in (351);
delete from host_role_command where host_id in (351);
delete from serviceconfighosts where host_id in (351);
delete from hoststate where host_id in (351);
delete from kerberos_principal_host where host_id in (351); -- For kerberized Env
delete from hosts where host_name in ('unwanted1.host.com');
delete from alert_current where history_id in (select alert_id from alert_history where host_name in ('unwanted1.host.com'));
3. Now restart ambari-server:
# ambari-server start
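If you are not sure which host_id to use in step 2 above, here is a quick sketch to look it up first (psql shown assuming a PostgreSQL-backed Ambari DB with default database and user names; use the equivalent MySQL/MariaDB client otherwise):
# List host_id / host_name pairs so you can pick the row to delete.
psql -U ambari -d ambari -c "select host_id, host_name from hosts;"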
12-05-2017
02:32 PM
1 Kudo
@Aditya Sirna I found the error... By default "hbase_regionserver_heapsize" was set to 4096m, which is more memory than my server has, so the regionservers were not able to start. I changed that value to 1024 and everything went OK!
Before: "hbase_regionserver_heapsize" : "4096m",
After: "hbase_regionserver_heapsize" : "1024",
11-16-2018
06:36 AM
Use vi /etc/ambari-server/conf/ambari.properties, add the entry, then press Esc and type :wq to save and quit, or :q! to exit without saving.
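A non-interactive alternative sketch, in case editing with vi is inconvenient (the property name below is only a placeholder for whatever entry you need to add):
cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.bak   # backup first
echo 'some.property=value' >> /etc/ambari-server/conf/ambari.properties
ambari-server restart    # most ambari.properties changes only take effect after a restart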