Member since 05-20-2016

155 Posts
220 Kudos Received
30 Solutions

        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 7209 | 03-23-2018 04:54 AM |
| | 2643 | 10-05-2017 02:34 PM |
| | 1477 | 10-03-2017 02:02 PM |
| | 8395 | 08-23-2017 06:33 AM |
| | 3220 | 07-27-2017 10:20 AM |

10-06-2018 03:58 AM - 7 Kudos

Starting with the HDP 3.0 release, Hortonworks has dropped CentOS 6 as a supported platform. This means existing cluster nodes need to be migrated to CentOS 7 before installing or upgrading to HDP 3.0. However, CentOS has no official path for upgrading the OS from CentOS 6 to CentOS 7. This article provides an unofficial guide for migrating cluster nodes to CentOS 7 without losing data. This is a migration with downtime. We strongly recommend trying these steps on a test cluster first. The steps are applicable only to smaller clusters (<= 10 nodes); larger clusters would need a different strategy plus automation.

Below are the assumptions and pre-requisites to be completed before upgrading the OS.

Pre-Requisites

* Visit every service configuration (for example HDFS, YARN, etc.) and make sure none of the configurations point to the root disk. Upgrading the OS will format the root disk, so any data kept there will be lost. Migrate every configuration pointing to the root disk to a non-root disk path before upgrading the OS, for example the service configs below:
* HDFS DataNode directory (dfs.datanode.data.dir) points to a non-root disk (i.e. a mounted disk other than the one where the OS is installed).
* HDFS NameNode directory (dfs.namenode.name.dir) points to a non-root disk.
* YARN local directories (yarn.nodemanager.local-dirs) point to a non-root disk.
* ZooKeeper dataDir points to a non-root disk.
* Oozie data directory points to a non-root disk.
* Make sure none of the databases for services like Hive, Oozie, Ranger, and Ambari are hosted within the cluster. If they are part of the cluster being migrated, migrate the DBs to external hosts before upgrading the OS.
* Take a backup of the cluster blueprint (refer to the Ambari documentation on exporting a cluster blueprint).
* Back up all required data from the root disk on all nodes.

Migrating Slave Nodes

Pick one slave node in the cluster for upgrading the OS. Please note that each slave node needs to be upgraded one at a time; do not attempt to upgrade multiple slave nodes in a single go. Follow the steps below to upgrade a slave node from CentOS 6 to CentOS 7.

1. If the selected slave node has a DataNode installed, decommission the DataNode first (refer to the HDFS documentation for decommissioning; a quick way to verify the decommission state is shown after these steps).
2. Stop all services on the node from Ambari.
3. ssh to the node and stop ambari-agent using the command below:
ambari-agent stop
4. Double-check that the required files on the root disk are backed up, then upgrade the OS to CentOS 7. The OS upgrade itself is out of scope for this article.
5. Once the OS is upgraded, make sure the hostname and IP address are the same as before the upgrade.
6. Update /etc/hosts with the IP addresses and hostnames of all cluster nodes.
7. Disable the firewall.
8. Add the required Ambari repo to /etc/yum.repos.d and install the Ambari agent using the command below:
yum install ambari-agent
9. Install the required JDK version at the path configured in ambari.properties.
10. Reset the certificates and configure ambari-agent to connect to ambari-server using the command below:
ambari-agent reset <ambari-server-host-name>
The ambari-agent had issues connecting to ambari-server when we performed this step, and the following helped us solve the problem:
* Under /var/lib/ambari-server/keys on the ambari-server node, move out the existing csr and crt files for the upgraded host being registered.
* Add the entry below under the [security] section of /etc/ambari-agent/conf/ambari-agent.ini:
force_https_protocol=PROTOCOL_TLSv1_2
11. Now start the ambari-agent using the command below:
ambari-agent start
12. Once the ambari-agent on the host is up and running, go to Ambari and run the "Recover Host" option from Host Actions for the upgraded host. Recover Host reinstalls all services on the upgraded node with the required yum packages.
13. Once Recover Host is complete, recommission the DataNode.
14. Start all the services on the host.
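
As mentioned in step 1, it helps to confirm that the DataNode has actually reached the decommissioned state before taking the node down. A minimal check, run as the hdfs user; the hostname below is just a placeholder for the node being migrated:

# look for "Decommission Status : Decommissioned" in the section for the selected node
hdfs dfsadmin -report | grep -A 10 'slave-node-01.example.com'
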
Migrating Master Nodes

Upgrading the master nodes to CentOS 7 is similar to upgrading the slave nodes. Since this article covers an upgrade with downtime, inline upgrades are out of scope. For an inline upgrade we would need HA for all master components and would need to move the components to different hosts before upgrading them; however, that is out of scope for this article.

Migrating the Ambari Node

Run through the pre-requisites below before upgrading the Ambari node.

1. Back up the key directories of Ambari by running the command below:
ambari-server backup
2. Back up ambari.properties under /etc/ambari-server/conf.
3. Back up the ambari-server DB. If it is the embedded Postgres DB, use the command below:
pg_dump -U {ambari.db.username} -f ambari.sql
4. Preferably restore the DB on an external host by running the command below:
psql -d ambari -f /tmp/dbdumps/ambari.sql

Once the pre-requisites are complete, follow the steps below for the actual upgrade.

1. Upgrade the OS from CentOS 6 to CentOS 7 after taking the required backups of the root disk.
2. Update /etc/hosts with the IP addresses and hostnames of all cluster nodes.
3. Disable the firewall.
4. Configure the Ambari repo under /etc/yum.repos.d.
5. Install ambari-server and ambari-agent using the commands below:
yum install ambari-server
yum install ambari-agent
6. Install the required JDK version at the path configured in ambari.properties.
7. Reset the certificates and configure ambari-agent to connect to ambari-server using the command below:
ambari-agent reset <ambari-server-host-name>
8. Run the setup command below to point to the external DB, and configure it the same way the original Ambari server was configured:
ambari-server setup
9. Start ambari-agent using the command below:
ambari-agent start
10. Start ambari-server using the command below:
ambari-server start

Migrating MIT KDC

If the cluster is configured with an MIT KDC that is installed within the cluster, follow the steps below to back up and restore the Kerberos database. Please note that the KDC needs to be installed on the same host where it was installed before the upgrade (a quick verification is shown after the restore steps).

Prerequisites

* Back up the keytabs from the HDP cluster under /etc/security/keytabs on all nodes.
* Note down your KDC admin principal and password.
* Back up /etc/krb5.conf.
* Back up the /var/kerberos directory.

Backup

* Take a dump of the Kerberos database using the command below (to be executed on the node running the KDC):
kdb5_util dump kdb5_dump.txt
* Safely back up the resulting kdb5_dump.txt.

Restore

* Restore the Kerberos database by executing the command below:
kdb5_util load kdb5_dump.txt
* Restore /etc/krb5.conf from backup.
* Restore /var/kerberos/krb5kdc/kdc.conf from backup.
* Restore /var/kerberos/krb5kdc/kadm5.acl from backup.
* Run the command below to store the master principal in a stash file (the KDC master password is required):
kdb5_util stash
* Start the KDC server using the command below:
service krb5kdc start
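
Once the KDC is back up, a quick way to confirm that the database was restored correctly is to list the principals locally on the KDC host. kadmin.local reads the local Kerberos database directly, so no admin password is required:

kadmin.local -q "listprincs"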
						
					
08-24-2018 09:55 AM - 1 Kudo

At times you may need to move the Kerberos database to a different node or upgrade the OS of the KDC node (for example, from CentOS 6 to CentOS 7). Obviously, you would not want to lose the KDC users, especially if your HDP cluster is configured to use this KDC. Follow the steps below to back up and restore the Kerberos database.

Prerequisites
* Back up the keytabs from the HDP cluster under /etc/security/keytabs on all nodes.
* Note down your KDC admin principal and password.
* Back up /etc/krb5.conf.
* Back up the /var/kerberos directory.

Backup
* Take a dump of the Kerberos database using the command below (to be executed on the node running the KDC):
kdb5_util dump kdb5_dump.txt
* Safely back up the resulting kdb5_dump.txt.

Restore
* Restore the Kerberos database by executing the command below:
kdb5_util load kdb5_dump.txt
* Restore /etc/krb5.conf from backup.
* Restore /var/kerberos/krb5kdc/kdc.conf from backup.
* Restore /var/kerberos/krb5kdc/kadm5.acl from backup.
* Run the command below to store the master principal in a stash file (the KDC master password is required):
kdb5_util stash
* Start the KDC server using the command below:
service krb5kdc start
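
As a quick sanity check after the restore, you can also try authenticating with one of the keytabs backed up from the HDP cluster; the keytab path and principal below are only placeholders, use any principal that existed before the move:

kinit -kt /etc/security/keytabs/smokeuser.headless.keytab <smoke-user-principal>
klist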
 
						
					
03-23-2018 04:54 AM - 1 Kudo

OK, adding the lines below to bootstrap.conf helped:

java.arg.18=-Dcom.sun.management.jmxremote.local.only=true
java.arg.19=-Dcom.sun.management.jmxremote
java.arg.20=-Dcom.sun.management.jmxremote.authenticate=false
java.arg.21=-Dcom.sun.management.jmxremote.ssl=false
java.arg.22=-Dcom.sun.management.jmxremote.port=30008
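
For anyone hitting the same issue, a quick way to confirm the JMX port is actually bound after restarting NiFi (the port matches the java.arg.22 value above; assumes netstat/net-tools is installed):

netstat -tlnp | grep 30008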
						
					
03-22-2018 12:42 PM - 2 Kudos

Hello,

We are trying to enable JMX for NiFi and added the config below to NiFi's bootstrap.conf:

-Dcom.sun.management.jmxremote.local.only=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=30008

I do see that the NiFi process has started with these VM parameters; however, nothing is listening on the JMX port. Am I missing anything? This works for any other Java process.
						
					
Labels: Apache NiFi

12-01-2017 07:24 AM - 3 Kudos

If you are looking to upload a template using the NiFi REST API, the curl command below uploads the template as a multipart form POST:

curl -XPOST -H "Authorization: Bearer {{ token }}" https://localhost:9091/nifi-api/process-groups/10c263d0-0160-1000-0000-00006a157654/templates/upload -k -v -F template=@NifiTemplate.xml
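
In case it helps, the bearer token referenced as {{ token }} above can be fetched from NiFi's /access/token endpoint when username/password login is enabled; a minimal sketch (host, port, and credentials are placeholders for your setup):

curl -k -X POST https://localhost:9091/nifi-api/access/token -H 'Content-Type: application/x-www-form-urlencoded' -d 'username=<user>&password=<password>'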
						
					
10-05-2017 02:34 PM - 4 Kudos

@nyakkanti

You can make use of an Oozie Java action or an Oozie shell action to make the REST API calls.

https://oozie.apache.org/docs/3.2.0-incubating/WorkflowFunctionalSpec.html#a3.2.7_Java_Action
https://oozie.apache.org/docs/3.3.0/DG_ShellActionExtension.html

Below is a sample shell action calling Python code:

<action name="metrics">
  <shell xmlns="uri:oozie:shell-action:0.2">
    <job-tracker>${RESOURCE_MANAGER}</job-tracker>
    <name-node>${NAME_NODE}</name-node>
    <exec>oozie_hook.py</exec>
    <argument>host</argument>
    <argument>port</argument>
    <file>${APP_PATH}/shell/oozie_hook.py#oozie_hook.py</file>
  </shell>
  <ok to="end"/>
  <error to="kill"/>
</action>

And below is a minimal snippet from the Python code for reference (host and port are passed in as the shell action arguments):

import sys
import requests

url = "http://" + sys.argv[1] + ":" + sys.argv[2] + "/workflow/"
r = requests.get(url, timeout=60)
						
					
10-03-2017 02:02 PM - 1 Kudo

Can you please share the logs of one of the alerts prompted in the Ambari UI?
						
					
08-23-2017 06:33 AM - 4 Kudos

Can you please check the config "hbase.rootdir"? It looks like this config is pointing to a NameNode that is currently in standby state. Try changing it to point to the active NameNode, or set it to the value of fs.defaultFS from core-site.xml followed by the HDFS path.
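
For example, if fs.defaultFS in core-site.xml points to an HA nameservice, hbase.rootdir would typically look like the value below (the nameservice name and HDFS path are placeholders, adjust them to your cluster):

hbase.rootdir=hdfs://mycluster/apps/hbase/data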
						
					
07-27-2017 11:04 AM - 2 Kudos

Please accept the answer if it has solved your question.
						
					
07-27-2017 11:02 AM - 2 Kudos

Try the command below:

curl -XGET -u admin:admin http://10.90.3.101:8080/api/v1/clusters/hdp_cluster/components
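
If you need the components installed on a specific host rather than the cluster-wide list, the same API exposes a host_components endpoint (the hostname below is a placeholder):

curl -XGET -u admin:admin http://10.90.3.101:8080/api/v1/clusters/hdp_cluster/hosts/<host-fqdn>/host_components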
 
						
					