Member since 06-07-2016

- 81 Posts
- 3 Kudos Received
- 5 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1710 | 02-21-2018 07:54 AM |
|  | 4782 | 02-21-2018 07:52 AM |
|  | 5643 | 02-14-2018 09:30 AM |
|  | 2590 | 10-13-2016 04:18 AM |
|  | 15531 | 10-11-2016 08:26 AM |

---

02-15-2018 08:11 AM

Dear All,

I did a new installation of HDP 2.6.2.0 with one namenode and two datanodes. The History Server service will not start due to the error below. Both datanode services are started and running. Because of this issue I am also unable to copy any files to HDFS: the datanodes are not detected and no information is passed to the namenode. It looks like a network issue between the name and data nodes.

```
{
  "RemoteException": {
    "exception": "IOException",
    "javaClassName": "java.io.IOException",
    "message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
  }
}
```

```
$ hadoop fs -copyFromLocal /tmp/test/ambari.repo /test/
18/02/15 15:12:37 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test/ambari.repo._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1709)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3337)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3261)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
```

Solutions I have tried:

1. Verified the /etc/hosts file and did DNS lookups from the edge, name, and data nodes to all other nodes; everything resolves properly.
2. Added the entries below to hdfs-site.xml and restarted the services:
   - dfs.client.use.datanode.hostname=true
   - dfs.datanode.use.datanode.hostname=true
   - dfs.namenode.datanode.registration.ip-hostname-check=false
3. Port 50010 is open on the datanodes.
4. Port 50070 is open on the namenode.
5. Did a clean reboot of all nodes and services.

The issue still remains. The Hortonworks links only give port numbers, so I just want to know which ports should be open on the name and data nodes, and which nodes will connect to each of them. This environment is on AWS, and I need to specify the source hosts that are allowed to access each port.

Appreciate your help. Thank you.
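For reference, a minimal connectivity sketch along these lines can confirm whether the usual HDFS 2.x ports are reachable between nodes. The hostnames below are placeholders, and the port list reflects the stock Hadoop 2.x defaults: namenode RPC 8020 and HTTP 50070 (reached by datanodes and clients), datanode data transfer 50010, datanode IPC 50020, and datanode HTTP 50075 (reached by clients and WebHDFS redirects). On AWS, a common shortcut is a security group rule that allows all traffic from the cluster's own security group.

```bash
#!/usr/bin/env bash
# Hypothetical hosts - replace with the cluster's real FQDNs.
NAMENODE=nn1.example.internal
DATANODES="dn1.example.internal dn2.example.internal"

# From each datanode and client host, the namenode must be reachable
# on 8020 (RPC) and 50070 (HTTP/WebHDFS).
for port in 8020 50070; do
  nc -zv -w 5 "$NAMENODE" "$port"
done

# From the namenode and client hosts, each datanode must be reachable
# on 50010 (data transfer), 50020 (IPC), and 50075 (HTTP).
for dn in $DATANODES; do
  for port in 50010 50020 50075; do
    nc -zv -w 5 "$dn" "$port"
  done
done
```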
Labels: Apache Hadoop

---

02-14-2018 09:30 AM

Hi All, I'm closing this thread. It looks like there was an error with the repository for the 2.6.4.0 version. I cleaned up and installed version 2.6.2.0 instead, and that went fine. I still have errors starting a few services such as the History Server, but I can fix those with the information in the logs. Thank you.

---

02-14-2018 09:27 AM

Dear All,

I have set up a new HDP cluster with version 2.6.2.0, and a few services are not starting due to the errors below. This is a new setup.

History Server error:

```
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.2.0-205/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' 'http://ip-172-29-1-250.ap-southeast-1.compute.internal:50070/webhdfs/v1/hdp/apps/2.6.2.0-205/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403.
{
  "RemoteException": {
    "exception": "IOException",
    "javaClassName": "java.io.IOException",
    "message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
  }
}
```

NOTE: The datanode services are started and running fine, /etc/hosts is correct, and `hostname -f` resolves the correct name.

I tried to run the HDFS service check and ended up with the same error:

```
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/etc/passwd -H 'Content-Type: application/octet-stream' 'http://ip-172-29-1-250.ap-southeast-1.compute.internal:50070/webhdfs/v1/tmp/id1dacfa01_date571418?op=CREATE&user.name=hdfs&overwrite=True'' returned status_code=403.
{
  "RemoteException": {
    "exception": "IOException",
    "javaClassName": "java.io.IOException",
    "message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null"
  }
}
```

The Ambari Metrics Collector and ResourceManager start but randomly come down within a few minutes.

Appreciate your help.
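One way to confirm whether the namenode actually sees any live datanodes (which is what "Failed to find datanode" suggests it does not) is to query its JMX endpoint; a sketch, reusing the namenode host from the error above:

```bash
# Ask the namenode how many live/dead datanodes it currently tracks.
curl -s 'http://ip-172-29-1-250.ap-southeast-1.compute.internal:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState' \
  | grep -E '"NumLiveDataNodes"|"NumDeadDataNodes"'

# The CLI view of the same information:
hdfs dfsadmin -report | head -n 20
```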

---

02-14-2018 02:15 AM

@Zill Silveira

The repo is fine, and I am able to install the storm-slider client manually; as the transcript below shows, all packages resolve. The error is thrown only in the Install, Start & Test portion of the Ambari console for the new cluster build. Since this is a new build and a first install, there are no stacks and versions on the console yet. I have gone through links saying that 2.6.2.0 will work. Can I directly try that version, or do I need to clean up all packages manually first, which is pretty tough? Or will trying 2.6.2.0 clean up and do the installation itself? If not, do you have any cleanup steps for these issues? Thank you in advance.

```
$ yum install storm-slider-client
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
HDP-2.6-GPL-repo-1                                | 2.9 kB  00:00:00
HDP-2.6-repo-1                                    | 2.9 kB  00:00:00
HDP-UTILS-1.1.0.22-repo-1                         | 2.9 kB  00:00:00
ambari-2.6.1.0                                    | 2.9 kB  00:00:00
rhui-REGION-client-config-server-7                | 2.9 kB  00:00:00
rhui-REGION-rhel-server-releases                  | 3.5 kB  00:00:00
rhui-REGION-rhel-server-rh-common                 | 3.8 kB  00:00:00
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package storm-slider-client.noarch 0:1.1.0.2.6.4.0-91 will be installed
--> Processing Dependency: storm_2_6_4_0_91-slider-client for package: storm-slider-client-1.1.0.2.6.4.0-91.noarch
--> Running transaction check
---> Package storm_2_6_4_0_91-slider-client.x86_64 0:1.1.0.2.6.4.0-91 will be installed
--> Processing Dependency: slider_2_6_4_0_91 for package: storm_2_6_4_0_91-slider-client-1.1.0.2.6.4.0-91.x86_64
--> Running transaction check
---> Package slider_2_6_4_0_91.noarch 0:0.92.0.2.6.4.0-91 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
==================================================================================================================
 Package                         Arch    Version            Repository      Size
==================================================================================================================
Installing:
 storm-slider-client             noarch  1.1.0.2.6.4.0-91   HDP-2.6-repo-1  2.6 k
Installing for dependencies:
 slider_2_6_4_0_91               noarch  0.92.0.2.6.4.0-91  HDP-2.6-repo-1  91 M
 storm_2_6_4_0_91-slider-client  x86_64  1.1.0.2.6.4.0-91   HDP-2.6-repo-1  135 M

Transaction Summary
==================================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 225 M
Installed size: 249 M
Is this ok [y/d/N]: N
Exiting on user command
Your transaction was saved, rerun it with:
 yum load-transaction /tmp/yum_save_tx.2018-02-14.10-10.5Ai0ZC.yumtx

[root@eim-preprod-namenode-1]:/etc/yum.repos.d
$ ls -l
total 44
-rw-r--r--. 1 root root   463 Feb 13 18:08 ambari-hdp-1.repo
-rw-r--r--. 1 root root   306 Feb 13 10:37 ambari.repo
-rw-r--r--. 1 root root   607 Jan 18 18:37 redhat-rhui-client-config.repo
-rw-r--r--. 1 root root  8679 Jan 18 18:37 redhat-rhui.repo
-rw-r--r--. 1 root root    90 Jan 18 18:37 rhui-load-balancers.conf
-rw-r--r--. 1 root root 14656 Jul  4  2014 snappy-devel-1.1.0-3.el7.x86_64.rpm

[root@eim-preprod-namenode-1]:/etc/yum.repos.d
$ cat ambari-hdp-1.repo
[HDP-2.6-repo-1]
name=HDP-2.6-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-2.6-GPL-repo-1]
name=HDP-2.6-GPL-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7
path=/
enabled=1
```
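Regarding the "unfinished transactions remaining" warning in that output, the usual remedy is the one yum itself suggests. `yum-complete-transaction` ships in the yum-utils package, which may need installing first:

```bash
# Install the helper if it is not already present.
yum install -y yum-utils

# Finish the interrupted transactions...
yum-complete-transaction
# ...or simply discard their saved journals:
# yum-complete-transaction --cleanup-only
```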

---

02-14-2018 12:56 AM

@Zill Silveira

For the new version 2.6.4.0, Ambari downloads an ambari-hdp-1.repo file that has three URLs: HDP, the Jenkins GPG key, and HDP-UTILS. I have posted full details on the issue at the link below. Have a look.

https://community.hortonworks.com/questions/172034/hdp-2640-cluster-creation-is-getting-failed-due-to.html

---

02-13-2018 09:54 AM

@Girish Khole Have you found a solution for this issue? I'm stuck on the same problem. Appreciate your reply.

---

02-13-2018 09:45 AM

@Lukas Muller

I'm stuck with the same issue while trying to install 2.6.4.0. In the backend it actually installs everything and breaks at the same point as yours. I just want to know how you reverted to 2.6.2.0. Did you do a manual clean-up, or did you try 2.6.2.0 directly and it cleaned up and installed that version itself? If it was manual, could you provide the steps? Thank you. Appreciate your reply.
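In case it helps anyone hitting the same wall, here is a rough manual clean-up sketch for a failed first-time install. This is a suggestion rather than a confirmed procedure: the package globs are assumptions to verify against `rpm -qa` before removing anything, and `ambari-server reset` wipes the cluster definition from the server database.

```bash
# On each cluster host: stop the agent, inspect what the failed
# install left behind, then remove those stack packages.
ambari-agent stop
rpm -qa | grep -E '2_6_4_0_91|hdp-select'    # inspect first (assumed glob)
yum remove -y 'hadoop_2_6_4_0_91*' 'zookeeper_2_6_4_0_91*' hdp-select

# On the Ambari server host: reset the server database so the cluster
# install wizard can be run again from scratch.
ambari-server stop
ambari-server reset
ambari-server start
```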

---

02-13-2018 04:31 AM

Dear All,

I badly need your help with the error below. I have set everything up and am trying to install one namenode and two datanodes, but I'm getting an error regarding the repository. Kindly see the details below and help me out. I'm using the Red Hat 7.4 release. This is a new setup, and the environment is on the AWS cloud (not old nodes). The error log is also attached.

Main error:

```
Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
```

Other errors:

```
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 89, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 38, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 811, in install_packages
    name = self.format_package_name(package['name'])
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 546, in format_package_name
    raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hadoop_${stack_version}-yarn. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', ...
```

```
2018-02-13 12:08:32,217 - The 'hadoop-hdfs-datanode' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.6.4.0-91). This is the version that will be reported.
Command aborted. Reason: 'Server considered task failed and automatically aborted it'
```

```
$ yum repolist
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id                                           repo name                                                    status
HDP-2.6-GPL-repo-1                                HDP-2.6-GPL-repo-1                                                4
HDP-2.6-repo-1                                    HDP-2.6-repo-1                                                  232
HDP-UTILS-1.1.0.22-repo-1                         HDP-UTILS-1.1.0.22-repo-1                                        16
ambari-2.6.1.0                                    ambari Version - ambari-2.6.1.0                                  12
rhui-REGION-client-config-server-7/x86_64         Red Hat Update Infrastructure 2.0 Client Configuration Ser        1
rhui-REGION-rhel-server-releases/7Server/x86_64   Red Hat Enterprise Linux Server 7 (RPMs)                     18,035
rhui-REGION-rhel-server-rh-common/7Server/x86_64  Red Hat Enterprise Linux Server 7 RH Common (RPMs)              231
repolist: 18,531

$ cat ambari.repo
#VERSION_NUMBER=2.6.1.0-143
[ambari-2.6.1.0]
name=ambari Version - ambari-2.6.1.0
baseurl=http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.1.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.1.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

$ cat ambari-hdp-1.repo
[HDP-2.6-repo-1]
name=HDP-2.6-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-2.6-GPL-repo-1]
name=HDP-2.6-GPL-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7
path=/
enabled=1
gpgcheck=0
```
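Since the failure says Ambari could not match the regexp name `hadoop_${stack_version}-yarn` against the available packages, one quick check is whether the configured repos actually expose any `hadoop_2_6_4_0_*` packages at all. A sketch:

```bash
# Refresh yum metadata, then list what the repos offer for the
# 2.6.4.0 stack. If nothing matches, the agent's package regexp can
# never resolve, consistent with the "Cannot match package" failure.
yum clean all
yum list available 'hadoop_2_6_4_0_*'

# Cross-check what is already installed on this host.
rpm -qa | grep '^hadoop_'
```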

---

01-17-2018 10:19 AM

@Geoffrey Shelton Okot Thank you very much for the information.

---

01-16-2018 09:29 AM

						