Member since: 10-04-2016
Posts: 22
Kudos Received: 2
Solutions: 0
08-07-2018
08:57 PM
1 Kudo
While upgrading the Ambari server from 2.6.2 to Ambari 2.7.0.0-897, the ambari-server upgrade command fails because of a missing Python dependency:

[root@c2140-node4 yum.repos.d]# ambari-server upgrade
Using python  /usr/bin/python
Upgrading ambari-server
Traceback (most recent call last):
  File "/usr/sbin/ambari-server.py", line 35, in <module>
    from ambari_commons.os_utils import remove_file
  File "/usr/lib/ambari-server/lib/ambari_commons/os_utils.py", line 39, in <module>
    from ambari_commons.os_linux import os_change_owner, os_getpass, os_is_root, os_run_os_command, \
  File "/usr/lib/ambari-server/lib/ambari_commons/os_linux.py", line 25, in <module>
    from ambari_commons import subprocess32
  File "/usr/lib/ambari-server/lib/ambari_commons/subprocess32.py", line 146, in <module>
    import importlib
ImportError: No module named importlib

Solution

The simple solution is to install the missing dependency, python-importlib.noarch:

[root@c2140-node4 yum.repos.d]# yum install python-importlib.noarch -y
Loaded plugins: fastestmirror, ovl
Setting up Install Process
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package python-importlib.noarch 0:1.0.4-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================================
Installing:
python-importlib noarch 1.0.4-1.el6 epel 11 k
Transaction Summary
=================================================================================================================================================================================================
Install 1 Package(s)
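After installing the package and before re-running the upgrade, a quick sanity check is to confirm that the interpreter Ambari uses can actually import importlib. A minimal sketch of that check (my own addition, not part of the original post):

```python
# Mirror the failing import from the traceback: if this import succeeds,
# the ambari-server upgrade should get past the ImportError shown above.
try:
    import importlib
    status = "importlib available - safe to re-run ambari-server upgrade"
except ImportError:
    status = "importlib missing - run: yum install python-importlib.noarch -y"
print(status)
```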
04-17-2018
09:35 PM
For security reasons, we sometimes need to deactivate Ambari users rather than delete them. In those scenarios you can make a user inactive from the Ambari UI; however, if there are many users, you may want to automate it through the REST API as described below. The objective here is to deactivate the user "ritesh".

Method 1: From the UI

Go to Admin --> Manage Users --> Users --> click on the user name. You can see the user is in the ACTIVE state; use the toggle button to change the value.

Method 2: From the REST API

1. If you curl the REST endpoint http://172.26.113.155:8080/api/v1/users, it lists all the users in Ambari. To get the properties of a particular user, append the user name to the end of the API path.

2. In that output you will see an element "active": "true", which controls the status of the user (active/inactive).

3. To disable the user, run the command below. Note that the PUT body ("Users/active":"false") was taken from the output above; you can change this body to update other values, such as admin.

Syntax:
curl -iv -u username:password -H "X-Requested-By: ambari" -X PUT -d '{"Users/active":"new_value"}' http://ambariserver:8080/api/v1/users/user_name

For example:
curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"Users/active":"false"}' http://172.26.113.155:8080/api/v1/users/ritesh

Once this command is executed, you can see the updated value in both the CLI and the UI.
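For scripting this against many users, the same PUT request can be built in Python. A minimal sketch using only the standard library; the host name and user name below are placeholders, not real endpoints:

```python
import json
import urllib.request

def build_deactivate_request(ambari_host, user_name):
    """Build (but do not send) the same PUT request as the curl command:
    sets Users/active to "false" for the given Ambari user."""
    url = "http://%s:8080/api/v1/users/%s" % (ambari_host, user_name)
    body = json.dumps({"Users/active": "false"}).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("X-Requested-By", "ambari")
    return req

req = build_deactivate_request("ambariserver.example.com", "ritesh")
print(req.get_method())  # PUT
```

Sending it would additionally need basic-auth credentials (the -u admin:admin part of the curl command), for example via urllib.request.HTTPBasicAuthHandler.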
09-26-2017
10:20 PM
Hive usually treats \n as a newline character, so every time we ingest text containing \n, a single record is split into multiple lines, one per \n.

For example:

Step 1: Create a table and ingest the data using an INSERT INTO TABLE command.
Step 2: List the contents of the table. We will observe that a single line with '\n' in the middle has been split into two lines.

Workaround

There are two ways to ingest data containing the newline character:
1. Using an escape character
2. Using the LOAD DATA INPATH command

Solution 1: When we use the escape character "\" with "\n", Hive no longer treats \n as a newline but as a literal string (abcd\ndefg), and querying the table returns the text on a single line.

Solution 2: Using the LOAD DATA INPATH command.
Step 1: Upload the file to HDFS.
Step 2: Load the data from the file into the table.
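The escaping idea in Solution 1 can be illustrated outside Hive as well. A small Python sketch (not HiveQL) showing the difference between a raw \n and an escaped \\n:

```python
# A raw \n is a real newline character, while an escaped \\n stays a literal
# backslash followed by 'n' - which is what lets Hive keep the text on one row.
raw = "abcd\ndefg"       # contains an actual newline
escaped = "abcd\\ndefg"  # contains the two characters '\' and 'n'
print(raw.count("\n"))      # 1 - would split the record into two lines
print(escaped.count("\n"))  # 0 - stays a single line
```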
09-26-2017
04:50 PM
Just checking - were you able to find the RCA? Thanks.
09-25-2017
11:14 PM
When running an HCat query, hive-env.sh is not sourced; hcat-env.sh is exported instead. Add the parameter below to hcat-env.sh from Ambari and restart the Hive services:

HIVE_AUX_JARS_PATH=/tmp/testjar

where /tmp/testjar is the directory containing all custom jars.
09-21-2017
12:01 AM
1 Kudo
This article focuses on setting up a custom Ambari alert that monitors YARN memory utilization by fetching it from the YARN ResourceManager JMX. Prerequisites: a JSON file, access to the node CLI, and cluster credentials.

Step 1: Create an alert.json file and save it in any location on the node. Two files are attached (one with comments and a template). Note: in the line "Hadoop:service=ResourceManager,name=QueueMetrics,q0=root/AllocatedMB", I am fetching AllocatedMB from the root queue; it is visible in the RM JMX.

Step 2: From the same node, run the command below. It will create a new alert in Ambari.

curl -u user:password -i -H 'X-Requested-By:ambari' -X POST -d @alert.json http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/alert_definitions

Step 3: Log in to Ambari and check the alerts page; the new alert will start showing.

Attachments: alertjson.txt, alertjson-with-comments.txt
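To see what the alert script would extract, here is a sketch of pulling AllocatedMB out of a QueueMetrics bean from a ResourceManager JMX payload. The sample JSON below is hypothetical (the real data comes from the RM's /jmx endpoint), and the metric values are made up for illustration:

```python
import json

# Hypothetical, trimmed-down sample of a ResourceManager /jmx response.
sample = json.dumps({
    "beans": [
        {"name": "Hadoop:service=ResourceManager,name=QueueMetrics,q0=root",
         "AllocatedMB": 4096, "AvailableMB": 12288}
    ]
})

def allocated_mb(jmx_json):
    # Find the root-queue QueueMetrics bean and return its AllocatedMB value.
    for bean in json.loads(jmx_json)["beans"]:
        if "QueueMetrics,q0=root" in bean.get("name", ""):
            return bean["AllocatedMB"]
    return None

print(allocated_mb(sample))  # 4096
```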
08-08-2017
06:13 PM
Many customers are looking to use local indexes with HDP 2.6.1 to avoid some of the issues with global secondary indexes, such as latency spikes in the secondary-index management codepath that have a way of cascading to several region servers and grinding the entire cluster to a halt. HDP uses the new local index implementation starting from 2.5, and HDP 2.6.x includes the majority of the local-index fixes from Apache Phoenix 4.8, 4.9, and 4.10. In other words, local indexes are ready and can be used in production.
07-07-2017
11:46 PM
It worked for me when used with the "identified by 'password'" clause:

grant all privileges on *.* to 'root'@'oozie-test2' identified by 'password' with grant option;
05-09-2017
10:41 PM
This happens when the master times out while initializing the namespace table (as far as I know). Increase the value of hbase.master.namespace.init.timeout in hbase-site.xml.
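For reference, the property is set as a standard hbase-site.xml entry. The value below is only an illustrative placeholder, not a recommendation from this post:

```xml
<!-- Hypothetical hbase-site.xml entry; 300000 ms (5 minutes) is an
     example value - tune it for your cluster. -->
<property>
  <name>hbase.master.namespace.init.timeout</name>
  <value>300000</value>
</property>
```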
02-17-2017
08:04 PM
Check whether you can listen on the MySQL port:

[root@node1 ~]# lsof -i :3306    # replace the port number as applicable

If there is no output, MySQL is running in "skip_networking" mode. Disable the mode by commenting out the line below in /etc/my.cnf and restarting the DB:

# Don't listen on a TCP/IP port at all.
#skip-networking

Now check the port again. It should show output similar to:

COMMAND   PID  USER  FD   TYPE  DEVICE     SIZE/OFF  NODE  NAME
mysqld  18730 mysql  10u  IPv4  120579417  0t0       TCP   *:mysql (LISTEN)
mysqld  18730 mysql  29u  IPv4  120579424  0t0       TCP   localhost:mysql->localhost:41107 (ESTABLISHED)

Now try to run the sqoop job.
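The same reachability check can be scripted if lsof is not available. A minimal sketch that simply attempts a TCP connection to the MySQL port; the host and port below are placeholders:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds -
    the same condition the lsof check above is probing for."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With skip-networking enabled, nothing listens on 3306 and this returns False.
print(port_open("127.0.0.1", 3306))
```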