Member since 05-30-2018

- 1322 Posts
- 715 Kudos Received
- 148 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 4005 | 08-20-2018 08:26 PM |
|  | 1880 | 08-15-2018 01:59 PM |
|  | 2336 | 08-13-2018 02:20 PM |
|  | 4060 | 07-23-2018 04:37 PM |
|  | 4951 | 07-19-2018 12:52 PM |

01-08-2016 06:05 PM | 2 Kudos

I am finding that many libraries that are part of the HDP base install are added to the classpath, such as jackson-core-2.2.3.jar. However, these libraries do not come with the vanilla (non-HDP) install. Does anyone know why and how these libraries are used? If a newer version of such a library exists, HDP may force each application running with a different jar to specify its own classpath. Is there a possible workaround where HDP separates the core Hadoop library classpath from non-core libraries (like jackson.*)?
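
A quick way to see which jars the install actually puts on the application classpath is to expand the Hadoop classpath and search the stack directories; a minimal sketch (the /usr/hdp path is an assumption about a typical HDP layout):

```bash
# Print the effective Hadoop classpath, one entry per line
hadoop classpath | tr ':' '\n'

# Locate the jackson jars shipped with the stack
# (/usr/hdp is where HDP normally installs stack components)
find /usr/hdp -name 'jackson-*.jar' 2>/dev/null
```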

Labels: Apache Ambari, Apache Hadoop, Apache YARN

01-08-2016 03:26 PM

Which file stores Ambari configurations, such as the Ambari classpath?
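
For anyone landing here: on an Ambari server install of that era, the usual places to look are below; the exact paths are an assumption and may vary by version:

```bash
# Ambari server properties (JDK path, database settings, ports, ...)
cat /etc/ambari-server/conf/ambari.properties

# Environment for the Ambari server process, including JVM arguments
# and classpath additions (path is version-dependent)
cat /var/lib/ambari-server/ambari-env.sh

# Quick search for anything classpath-related in both locations
grep -ri classpath /etc/ambari-server/conf /var/lib/ambari-server 2>/dev/null
```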

Labels: Apache Ambari

01-06-2016 03:35 AM | 2 Kudos

Thank you for the help. Here are the steps I performed on my sandbox to fix the issue.

Added to /etc/security/limits:

```
*  hard  nofile  50000
*  soft  nofile  50000
*  hard  nproc   10000
*  soft  nproc   10000
```

Added to /etc/security/limits.d/90-nproc.conf:

```
*  soft  nproc  10000
```

Added to /etc/sysctl.conf:

```
fs.file-max = 50000
```

Then re-read sysctl.conf:

```
/sbin/sysctl -p
```

Shut down all services through Ambari, rebooted CentOS, and confirmed as root with `ulimit -a`. And done. All works.
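
As a sanity check after the reboot, one can confirm the limits actually took effect; a small sketch using standard Linux interfaces:

```bash
# Soft and hard open-file limits for the current shell
ulimit -Sn
ulimit -Hn

# System-wide file-handle ceiling and current allocation
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr
```

Note that a daemon started before the change keeps its old limits, which is why the reboot (or at least a service restart) matters.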

01-06-2016 01:30 AM

Thanks @Andrew Grande, but after making the adjustments per the quickstart on the sandbox/VM, it is still producing the same error.

01-05-2016 11:56 PM

I am running a CSV file with approximately 300,000 records through the RouteText processor. I am getting the following "too many open files" error.

NiFi app log:

```
2016-01-05 23:53:57,540 WARN [Timer-Driven Process Thread-10] o.a.n.c.t.ContinuallyRunProcessorTask
org.apache.nifi.processor.exception.FlowFileAccessException: Exception in callback: java.io.FileNotFoundException: /opt/nifi-1.1.0.0-10/content_repository/100/1452038037470-66660 (Too many open files)
        at org.apache.nifi.controller.repository.StandardProcessSession.append(StandardProcessSession.java:2048) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.processors.standard.RouteText.appendLine(RouteText.java:499) ~[na:na]
        at org.apache.nifi.processors.standard.RouteText.access$100(RouteText.java:79) ~[na:na]
        at org.apache.nifi.processors.standard.RouteText$1.process(RouteText.java:433) ~[na:na]
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1806) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1777) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.processors.standard.RouteText.onTrigger(RouteText.java:360) ~[na:na]
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139) [nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49) [nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119) [nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_91]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_91]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_91]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_91]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_91]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_91]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]
Caused by: java.io.FileNotFoundException: /opt/nifi-1.1.0.0-10/content_repository/100/1452038037470-66660 (Too many open files)
        at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_91]
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221) ~[na:1.7.0_91]
        at org.apache.nifi.controller.repository.FileSystemRepository.write(FileSystemRepository.java:862) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.controller.repository.FileSystemRepository.write(FileSystemRepository.java:831) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        at org.apache.nifi.controller.repository.StandardProcessSession.append(StandardProcessSession.java:2008) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
        ... 18 common frames omitted
```

I have run `hadoop dfsadmin -report` and all is fine. I have checked `ulimit -Sn` and `ulimit -Hn`, which both show a 10000 limit.
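
One thing worth checking when the shell's `ulimit` looks fine but NiFi still hits the cap: the limits the kernel is enforcing on the NiFi JVM itself, which can differ from the interactive shell's. A sketch (the `pgrep` pattern is an assumption about how the NiFi process appears in the process list):

```bash
# Find the NiFi JVM
NIFI_PID=$(pgrep -f 'org.apache.nifi' | head -n 1)

# Descriptors the process currently holds
ls /proc/"$NIFI_PID"/fd | wc -l

# The open-file limit actually applied to that process
grep 'open files' /proc/"$NIFI_PID"/limits
```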

Labels: Apache NiFi

01-05-2016 04:43 AM | 1 Kudo

@Tim Hall Any best practices when migrating from the Fair Scheduler to the Capacity Scheduler?
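
The mechanics of the switch are small compared to the queue redesign: the scheduler class is selected in yarn-site.xml, and queues are declared in capacity-scheduler.xml. A minimal sketch; the queue names and percentages are hypothetical examples, not a recommendation:

```xml
<!-- yarn-site.xml: select the Capacity Scheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

<!-- capacity-scheduler.xml: two example queues mirroring a simple
     fair-scheduler layout; sibling capacities must sum to 100 -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,etl</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.etl.capacity</name>
  <value>40</value>
</property>
```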

Labels: Apache YARN

01-05-2016 04:39 AM

@Tim Hall Is there any feature or functionality in the Capacity Scheduler where it would provide resources to a higher-priority job that kicks off during a lower-priority large job which is consuming most of the resources? Assuming the same org and queue.
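
The Capacity Scheduler feature usually pointed at here is preemption, enabled via the scheduling monitor in yarn-site.xml; note that it rebalances between queues, so within a single queue it may not help, depending on the version. A sketch:

```xml
<!-- yarn-site.xml: enable the preemption monitor (values are examples) -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
```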

Labels: Apache YARN

01-04-2016 05:06 PM

@Pradeep Ravi Have you tried running the script via beeline or the hive command line? Have you checked whether HiveServer, HiveServer2, and Atlas are up and running? Maybe it would be easier to ask which services are not running on your sandbox. If you are able to run the scripts via the command line (for a long script, use the -e option), we can then possibly isolate the issue to the Ambari view.
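
For reference, a sketch of the two command-line routes suggested above; the JDBC URL, user, and script name are sandbox-typical assumptions:

```bash
# Through beeline against HiveServer2
beeline -u jdbc:hive2://sandbox.hortonworks.com:10000 -n ambari-qa -f script.hql

# Through the hive CLI: -f runs a script file, -e an inline statement
hive -f script.hql
hive -e "SELECT COUNT(*) FROM some_table;"
```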

01-01-2016 02:09 AM

Good call, Vladimir.

```
mkdir: Permission denied: user=yarn, access=WRITE, inode="/user/ambari-qa/falcon/demo/primary/input/enron/2015-12-30-01":ambari-qa:hdfs:drwxr-xr-x
```

I executed the job from Falcon as ambari-qa. Is there any configuration I can change so it uses the user ambari-qa during execution?
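
While sorting out which user the workflow runs as, a stopgap is to grant the yarn user write access on the ingest path with an HDFS ACL. This is a workaround sketch, not Falcon configuration, and assumes dfs.namenode.acls.enabled=true:

```bash
# Run as the hdfs superuser; gives yarn rwx on the whole ingest tree
sudo -u hdfs hdfs dfs -setfacl -R -m user:yarn:rwx \
    /user/ambari-qa/falcon/demo/primary/input/enron
```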

12-31-2015 06:22 AM

Following the tutorial http://hortonworks.com/hadoop-tutorial/processing-data-pipeline-with-apache-falcon/, during the rawEmailIngestProcess shell-wf the shell-node action fails. shell-node is basically calling ingest.sh:

```
curl -sS http://bailando.sims.berkeley.edu/enron/enron_wit... | tar xz && hadoop fs -mkdir -p $1 && hadoop fs -put enron_with_categories/*/*.txt $1
```

When I change ingest.sh to perform a simple `hadoop fs -ls`, the shell-node action succeeds.

Here is the Oozie log:

```
2015-12-31 05:50:49,213  WARN ShellActionExecutor:523 - SERVER[sandbox.hortonworks.com] USER[ambari-qa] GROUP[-] TOKEN[] APP[shell-wf] JOB[0000420-151222214138596-oozie-oozi-W] ACTION[0000420-151222214138596-oozie-oozi-W@shell-node] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]
2015-12-31 05:50:49,239  INFO ActionEndXCommand:520 - SERVER[sandbox.hortonworks.com] USER[ambari-qa] GROUP[-] TOKEN[] APP[shell-wf] JOB[0000420-151222214138596-oozie-oozi-W] ACTION[0000420-151222214138596-oozie-oozi-W@shell-node] ERROR is considered as FAILED for SLA
2015-12-31 05:50:49,261  INFO ActionStartXCommand:520 - SERVER[sandbox.hortonworks.com] USER[ambari-qa] GROUP[-] TOKEN[] APP[shell-wf] JOB[0000420-151222214138596-oozie-oozi-W] ACTION[0000420-151222214138596-oozie-oozi-W@fail] Start action [0000420-151222214138596-oozie-oozi-W@fail] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
```

Any help would be appreciated. @Andrew Ahn
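
Exit code 1 from ShellMain hides the script's own stderr; to see what ingest.sh actually printed, one can pull the launcher's logs. A sketch using the job id from the log above (the Oozie URL and the application id placeholder are values to fill in from your environment):

```bash
# Inspect the failed workflow and its shell-node action
oozie job -oozie http://sandbox.hortonworks.com:11000/oozie \
    -info 0000420-151222214138596-oozie-oozi-W

# The shell action's stdout/stderr end up in the YARN application logs;
# take the application id from the action's external id shown above
yarn logs -applicationId <application-id>
```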

Labels: Apache Falcon