Member since 10-04-2016
243 Posts | 281 Kudos Received | 43 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 1502 | 01-16-2018 03:38 PM |
|  | 6973 | 11-13-2017 05:45 PM |
|  | 3918 | 11-13-2017 12:30 AM |
|  | 1907 | 10-27-2017 03:58 AM |
|  | 29414 | 10-19-2017 03:17 AM |
02-15-2019 07:26 PM

HDP-2.6.5, RHEL 7, Kerberized, Ambari-2.6 upgraded to Ambari-2.7.3. The Oozie Service Check fails.

STDERR:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py", line 139, in <module>
    OozieServiceCheck().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py", line 52, in service_check
    OozieServiceCheckDefault.oozie_smoke_shell_file(smoke_test_file_name, prepare_hdfs_file_name)
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/service_check.py", line 70, in oozie_smoke_shell_file
    raise Fail(format(NO_DOCS_FOLDER_MESSAGE)) 
NameError: global name 'Fail' is not defined

STDOUT:

2019-02-15 19:02:10,245 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.74-2 -> 2.6.5.74-2
2019-02-15 19:02:10,250 - Using hadoop conf dir: /usr/hdp/2.6.5.74-2/hadoop/conf
2019-02-15 19:02:10,275 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://ambariServer.com:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2019-02-15 19:02:10,277 - Not downloading the file from http://ambariServer:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2019-02-15 19:02:11,274 - File['/var/lib/ambari-agent/tmp/oozieSmoke2.sh'] {'content': StaticFile('oozieSmoke2.sh'), 'mode': 0755}
2019-02-15 19:02:11,277 - File['/var/lib/ambari-agent/tmp/prepareOozieHdfsDirectories.sh'] {'content': StaticFile('prepareOozieHdfsDirectories.sh'), 'mode': 0755}
Command failed after 1 tries

I have noticed that hosts with the Oozie server and clients do not have the following directories:

- /usr/hdp/current/oozie-client/doc/
- /usr/hdp/current/oozie-server/doc/

So I tried removing tsflags=nodocs from /etc/yum.conf and reinstalling the Oozie client; however, it still does not create the above folders.
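
From the traceback, the NameError itself looks like a secondary bug in the service check script: it raises Fail without Fail being defined in that scope, which masks the intended NO_DOCS_FOLDER_MESSAGE about the missing doc directory. A minimal sketch of the import the script appears to be missing (assuming Ambari's usual resource_management package layout; a guess, not a verified patch):

# Hypothetical sketch, not a verified Ambari patch: with Fail imported,
# service_check.py would surface the intended error text instead of a NameError.
from resource_management.core.exceptions import Fail

# ...then, inside oozie_smoke_shell_file(), the existing line works as intended:
# raise Fail(format(NO_DOCS_FOLDER_MESSAGE))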
						
					
Labels:
- Apache Ambari
- Apache Oozie
			
    
	
		
		
09-24-2018 08:44 PM

@kkanchu Thanks for pointing that out! Updated the article.
						
					
09-05-2018 09:05 PM (2 Kudos)

If you have started using Hive LLAP, you may have noticed that by default it is configured to use log4j2. The default configuration uses advanced log4j2 features such as rolling logs over based on both time interval and size.

Over time, a lot of old log files accumulate. With log4j1 you would typically compress those files manually, or add extra jars and change configuration to achieve the same. With log4j2, a simple configuration change ensures that every time a log file is rolled over, it gets compressed for optimal use of storage space.

Starting from the default configuration (screenshot not reproduced), to automatically compress the rolled-over log files, update the filePattern line to:

appender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}-%i.gz

- The -%i ensures that in the rare scenario where logging has spiked and the threshold size is reached more than once in the specified interval, the previously rolled-over file does not get overwritten.
- The .gz ensures that files are compressed using gzip.

To understand the finer details of log4j2 appenders, you may check out the official documentation. You can make the same change to the llap-cli log settings (screenshot not reproduced).
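
For reference, a sketch of the relevant default lines in hive-log4j2.properties, reconstructed from the stock Hive log4j2 template since the original screenshots are not reproduced here (property names and values may differ on your cluster):

# Sketch of the default rolling appender (reconstructed; values may differ).
appender.DRFA.type = RollingRandomAccessFile
appender.DRFA.name = DRFA
appender.DRFA.fileName = ${sys:hive.log.dir}/${sys:hive.log.file}
# Default pattern: rolled files are renamed by date but NOT compressed.
appender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}
appender.DRFA.policies.type = Policies
appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy
appender.DRFA.policies.time.interval = 1
appender.DRFA.policies.size.type = SizeBasedTriggeringPolicy
appender.DRFA.policies.size.size = 256MB
# Compressed variant from this article: -%i avoids overwriting earlier
# roll-overs within the same interval, and .gz makes log4j2 gzip each file.
# appender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}-%i.gz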
						
					
05-11-2018 04:45 AM

I am trying to compile and build the Hadoop source on macOS. Here is the complete error trace:

     [exec] Scanning dependencies of target rpc_obj
     [exec] [ 32%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/rpc_connection_impl.cc.o
     [exec] [ 32%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/rpc_engine.cc.o
     [exec] [ 32%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/namenode_tracker.cc.o
     [exec] [ 33%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/request.cc.o
     [exec] [ 33%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/sasl_protocol.cc.o
     [exec] [ 34%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/sasl_engine.cc.o
     [exec] [ 34%] Building CXX object main/native/libhdfspp/lib/rpc/CMakeFiles/rpc_obj.dir/cyrus_sasl_engine.cc.o
     [exec] [ 34%] Built target rpc_obj
     [exec] Scanning dependencies of target rpc
     [exec] [ 35%] Linking CXX static library librpc.a
     [exec] [ 35%] Built target rpc
     [exec] [ 35%] Building CXX object main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/ioservice_impl.cc.o
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:22:
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:30: error: no type named 'function' in namespace 'std'
     [exec]   virtual void PostTask(std::function<void(void)> asyncTask) = 0;
     [exec]                         ~~~~~^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:38: error: expected ')'
     [exec]   virtual void PostTask(std::function<void(void)> asyncTask) = 0;
     [exec]                                      ^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:24: note: to match this '('
     [exec]   virtual void PostTask(std::function<void(void)> asyncTask) = 0;
     [exec]                        ^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:117:10: error: no member named 'function' in namespace 'std'
     [exec]     std::function<void(void)> typeEraser = func;
     [exec]     ~~~~~^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:117:28: error: expected '(' for function-style cast or type construction
     [exec]     std::function<void(void)> typeEraser = func;
     [exec]                        ~~~~^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:117:31: error: use of undeclared identifier 'typeEraser'
     [exec]     std::function<void(void)> typeEraser = func;
     [exec]                               ^
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:45:54: error: non-virtual member function marked 'override' hides virtual member function
     [exec]   void PostTask(std::function<void(void)> asyncTask) override;
     [exec]                                                      ^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:16: note: hidden overloaded virtual function 'hdfs::IoService::PostTask' declared here: type mismatch at 1st parameter ('int' vs 'std::function<void ()>')
     [exec]   virtual void PostTask(std::function<void(void)> asyncTask) = 0;
     [exec]                ^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:34:14: error: allocating an object of abstract class type 'hdfs::IoServiceImpl'
     [exec]   return new IoServiceImpl();
     [exec]              ^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:109:16: note: unimplemented pure virtual method 'PostTask' in 'IoServiceImpl'
     [exec]   virtual void PostTask(std::function<void(void)> asyncTask) = 0;
     [exec]                ^
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:22:
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:61:
     [exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:2143:9: error: field type 'hdfs::IoServiceImpl' is an abstract class
     [exec]     _T2 __second_;
     [exec]         ^
     [exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:2302:15: note: in instantiation of template class 'std::__1::__libcpp_compressed_pair_imp<std::__1::allocator<hdfs::IoServiceImpl>, hdfs::IoServiceImpl, 1>' requested here
     [exec]     : private __libcpp_compressed_pair_imp<_T1, _T2>
     [exec]               ^
     [exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:3816:36: note: in instantiation of template class 'std::__1::__compressed_pair<std::__1::allocator<hdfs::IoServiceImpl>, hdfs::IoServiceImpl>' requested here
     [exec]     __compressed_pair<_Alloc, _Tp> __data_;
     [exec]                                    ^
     [exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4444:26: note: in instantiation of template class 'std::__1::__shared_ptr_emplace<hdfs::IoServiceImpl, std::__1::allocator<hdfs::IoServiceImpl> >' requested here
     [exec]     ::new(__hold2.get()) _CntrlBlk(__a2, _VSTD::forward<_Args>(__args)...);
     [exec]                          ^
     [exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4810:29: note: in instantiation of function template specialization 'std::__1::shared_ptr<hdfs::IoServiceImpl>::make_shared<>' requested here
     [exec]     return shared_ptr<_Tp>::make_shared(_VSTD::forward<_Args>(__args)...);
     [exec]                             ^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:38:15: note: in instantiation of function template specialization 'std::__1::make_shared<hdfs::IoServiceImpl>' requested here
     [exec]   return std::make_shared<IoServiceImpl>();
     [exec]               ^
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:19:
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.h:22:
     [exec] In file included from /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/include/hdfspp/ioservice.h:61:
     [exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4447:28: error: assigning to 'std::__1::__shared_weak_count *' from incompatible type 'pointer' (aka 'std::__1::__shared_ptr_emplace<hdfs::IoServiceImpl, std::__1::allocator<hdfs::IoServiceImpl> > *')
     [exec]     __r.__cntrl_ = __hold2.release();
     [exec]                    ~~~~~~~~^~~~~~~~~
     [exec] /Library/Developer/CommandLineTools/usr/include/c++/v1/memory:4810:29: note: in instantiation of function template specialization 'std::__1::shared_ptr<hdfs::IoServiceImpl>::make_shared<>' requested here
     [exec]     return shared_ptr<_Tp>::make_shared(_VSTD::forward<_Args>(__args)...);
     [exec]                             ^
     [exec] /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/ioservice_impl.cc:38:15: note: in instantiation of function template specialization 'std::__1::make_shared<hdfs::IoServiceImpl>' requested here
     [exec]   return std::make_shared<IoServiceImpl>();
     [exec]               ^
     [exec] 9 errors generated.
     [exec] make[2]: *** [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/ioservice_impl.cc.o] Error 1
     [exec] make[1]: *** [main/native/libhdfspp/lib/common/CMakeFiles/common_obj.dir/all] Error 2
     [exec] make: *** [all] Error 2
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Native Client ................... FAILURE [ 29.737 s]
[INFO] Apache Hadoop HttpFS ............................... SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS-RBF ............................. SKIPPED
[INFO] Apache Hadoop HDFS Project ......................... SKIPPED
[INFO] Apache Hadoop YARN ................................. SKIPPED
[INFO] Apache Hadoop YARN API ............................. SKIPPED
[INFO] Apache Hadoop YARN Common .......................... SKIPPED
[INFO] Apache Hadoop YARN Registry ........................ SKIPPED
[INFO] Apache Hadoop YARN Server .......................... SKIPPED
[INFO] Apache Hadoop YARN Server Common ................... SKIPPED
[INFO] Apache Hadoop YARN NodeManager ..................... SKIPPED
[INFO] Apache Hadoop YARN Web Proxy ....................... SKIPPED
[INFO] Apache Hadoop YARN ApplicationHistoryService ....... SKIPPED
[INFO] Apache Hadoop YARN Timeline Service ................ SKIPPED
[INFO] Apache Hadoop YARN ResourceManager ................. SKIPPED
[INFO] Apache Hadoop YARN Server Tests .................... SKIPPED
[INFO] Apache Hadoop YARN Client .......................... SKIPPED
[INFO] Apache Hadoop YARN SharedCacheManager .............. SKIPPED
[INFO] Apache Hadoop YARN Timeline Plugin Storage ......... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Backend ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Common .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Client .... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Servers ... SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase Server 1.2  SKIPPED
[INFO] Apache Hadoop YARN TimelineService HBase tests ..... SKIPPED
[INFO] Apache Hadoop YARN Router .......................... SKIPPED
[INFO] Apache Hadoop YARN Applications .................... SKIPPED
[INFO] Apache Hadoop YARN DistributedShell ................ SKIPPED
[INFO] Apache Hadoop YARN Unmanaged Am Launcher ........... SKIPPED
[INFO] Apache Hadoop MapReduce Client ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Core ....................... SKIPPED
[INFO] Apache Hadoop MapReduce Common ..................... SKIPPED
[INFO] Apache Hadoop MapReduce Shuffle .................... SKIPPED
[INFO] Apache Hadoop MapReduce App ........................ SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer .............. SKIPPED
[INFO] Apache Hadoop MapReduce JobClient .................. SKIPPED
[INFO] Apache Hadoop Mini-Cluster ......................... SKIPPED
[INFO] Apache Hadoop YARN Services ........................ SKIPPED
[INFO] Apache Hadoop YARN Services Core ................... SKIPPED
[INFO] Apache Hadoop YARN Services API .................... SKIPPED
[INFO] Apache Hadoop YARN Site ............................ SKIPPED
[INFO] Apache Hadoop YARN UI .............................. SKIPPED
[INFO] Apache Hadoop YARN Project ......................... SKIPPED
[INFO] Apache Hadoop MapReduce HistoryServer Plugins ...... SKIPPED
[INFO] Apache Hadoop MapReduce NativeTask ................. SKIPPED
[INFO] Apache Hadoop MapReduce Uploader ................... SKIPPED
[INFO] Apache Hadoop MapReduce Examples ................... SKIPPED
[INFO] Apache Hadoop MapReduce ............................ SKIPPED
[INFO] Apache Hadoop MapReduce Streaming .................. SKIPPED
[INFO] Apache Hadoop Distributed Copy ..................... SKIPPED
[INFO] Apache Hadoop Archives ............................. SKIPPED
[INFO] Apache Hadoop Archive Logs ......................... SKIPPED
[INFO] Apache Hadoop Rumen ................................ SKIPPED
[INFO] Apache Hadoop Gridmix .............................. SKIPPED
[INFO] Apache Hadoop Data Join ............................ SKIPPED
[INFO] Apache Hadoop Extras ............................... SKIPPED
[INFO] Apache Hadoop Pipes ................................ SKIPPED
[INFO] Apache Hadoop OpenStack support .................... SKIPPED
[INFO] Apache Hadoop Amazon Web Services support .......... SKIPPED
[INFO] Apache Hadoop Kafka Library support ................ SKIPPED
[INFO] Apache Hadoop Azure support ........................ SKIPPED
[INFO] Apache Hadoop Aliyun OSS support ................... SKIPPED
[INFO] Apache Hadoop Client Aggregator .................... SKIPPED
[INFO] Apache Hadoop Scheduler Load Simulator ............. SKIPPED
[INFO] Apache Hadoop Resource Estimator Service ........... SKIPPED
[INFO] Apache Hadoop Azure Data Lake support .............. SKIPPED
[INFO] Apache Hadoop Image Generation Tool ................ SKIPPED
[INFO] Apache Hadoop Tools Dist ........................... SKIPPED
[INFO] Apache Hadoop Tools ................................ SKIPPED
[INFO] Apache Hadoop Client API ........................... SKIPPED
[INFO] Apache Hadoop Client Runtime ....................... SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants .......... SKIPPED
[INFO] Apache Hadoop Client Test Minicluster .............. SKIPPED
[INFO] Apache Hadoop Client Packaging Invariants for Test . SKIPPED
[INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] Apache Hadoop Client Modules ....................... SKIPPED
[INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
[INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 31.310 s
[INFO] Finished at: 2018-05-11T00:27:30-04:00
[INFO] Final Memory: 67M/557M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 2
[ERROR] around Ant part ...<exec failonerror="true" dir="/Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" executable="make">... @ 9:131 in /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 2
around Ant part ...<exec failonerror="true" dir="/Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" executable="make">... @ 9:131 in /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
	at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
	at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
	at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
	at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
	at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant BuildException has occured: exec returned: 2
around Ant part ...<exec failonerror="true" dir="/Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target" executable="make">... @ 9:131 in /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml
	at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
	... 20 more
Caused by: /Users/dc/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/target/antrun/build-main.xml:9: exec returned: 2
	at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:646)
	at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:672)
	at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:498)
	at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
	at org.apache.tools.ant.Task.perform(Task.java:348)
	at org.apache.tools.ant.Target.execute(Target.java:390)
	at org.apache.tools.ant.Target.performTasks(Target.java:411)
	at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
	at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
	at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:327)
	... 22 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
I have done the following steps:

- Installed Xcode and the Command Line Developer Tools
- Installed Protobuf 2.5.0
- Followed this HCC article for all other installations: https://community.hortonworks.com/articles/36832/setting-hadoop-development-environment-on-mac-os-x.html
- Cloned github.com/apache/hadoop
- cd /.../hadoop
- mvn clean package -DskipTests
- mvn package -Pdist -Pnative -Dtar -DskipTests

It appears the C++ code (libhdfspp) is failing to compile: every error traces back to std::function not being found in hdfspp/ioservice.h, which makes me suspect a missing #include <functional> in that header on newer Clang/libc++ toolchains. I would appreciate any help.
						
					
Labels:
- Apache Hadoop
			
    
	
		
		
03-08-2018 05:04 AM

@Daniel Kozlowski This feature is not supported in any version of HDP so far. Here is the doc from the latest HDP-2.6.4; refer to section 4.2.4, "Using the Note Toolbar". It says: "Schedule the execution of all paragraphs using CRON syntax. This feature is not currently operational. If you need to schedule Spark jobs, consider using Oozie Spark action." We recently opened ZEPPELIN-3271 to provide a way to disable this feature and avoid the associated risks.
						
					
01-16-2018 03:38 PM (1 Kudo)

@Rajesh K There is no harm in starting up both services and turning off maintenance mode. Your Atlas service crashing every time after startup could indicate multiple problems; the most common is an out-of-memory error. Could you check the logs and share the error stack trace?
						
					
12-01-2017 06:06 PM (2 Kudos)

When running a custom Java application that connects to Hive via JDBC, after migration to HDP-2.6.x the application fails to start with a NoClassDefFoundError or ClassNotFoundException for a Hive class, like:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hive/service/cli/thrift/TCLIService$Iface
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
Root Cause

Prior to HDP-2.6.x, hive-jdbc.jar was a symlink pointing to the "standalone" JDBC jar (the one intended for non-Hadoop apps, such as a generic application that accesses a database through a JDBC driver). For example, in HDP-2.5.0:

/usr/hdp/current/hive-client/lib/hive-jdbc.jar -> hive-jdbc-1.2.1000.2.5.0.0-1245-standalone.jar

From HDP-2.6.x onwards, hive-jdbc.jar points to the "hadoop env" JDBC driver, which has dependencies on many other Hadoop jars. For example, in HDP-2.6.2:

/usr/hdp/current/hive-client/lib/hive-jdbc.jar -> hive-jdbc-1.2.1000.2.6.2.0-205.jar

or in HDP-2.6.3:

/usr/hdp/current/hive-client/lib/hive-jdbc.jar -> hive-jdbc-1.2.1000.2.6.3.0-235.jar

Does this mean the HDP stack no longer includes a standalone jar? No. The standalone jar has been moved to this path:

/usr/hdp/current/hive-client/jdbc

There are two ways to solve this:

1. Change the custom Java application's classpath to use the hive-jdbc-*-standalone.jar explicitly. As noted above, the standalone jar is now available in a different path. For example, in HDP-2.6.2:

/usr/hdp/current/hive-client/jdbc/hive-jdbc-1.2.1000.2.6.2.0-205-standalone.jar

and in HDP-2.6.3:

/usr/hdp/current/hive-client/jdbc/hive-jdbc-1.2.1000.2.6.3.0-235-standalone.jar

2. If the custom Java application uses other Hadoop components/jars, add the following to its HADOOP_CLASSPATH:

/usr/hdp/current/hive-client/lib/hive-metastore-*.jar:/usr/hdp/current/hive-client/lib/hive-common-*.jar:/usr/hdp/current/hive-client/lib/hive-cli-*.jar:/usr/hdp/current/hive-client/lib/hive-exec-*.jar:/usr/hdp/current/hive-client/lib/hive-service.jar:/usr/hdp/current/hive-client/lib/libfb303-*.jar:/usr/hdp/current/hive-client/lib/libthrift-*.jar:/usr/hdp/current/hadoop-client/lib/log4j*.jar:/usr/hdp/current/hadoop-client/lib/slf4j-api-*.jar:/usr/hdp/current/hadoop-client/lib/slf4j-log4j12-*.jar:/usr/hdp/current/hadoop-client/lib/commons-logging-*.jar
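
To illustrate option 1, a sketch of launching an application against the standalone driver; the jar version and the MyHiveClient class are placeholders, not values from the article:

# Hypothetical launch command: pin the standalone JDBC driver on the classpath
# (HDP-2.6.2 path shown; substitute your application's jar and main class).
java -cp /usr/hdp/current/hive-client/jdbc/hive-jdbc-1.2.1000.2.6.2.0-205-standalone.jar:myapp.jar MyHiveClient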
						
					
11-20-2017 06:25 AM (2 Kudos)

You need to save the new data to a temp table, then read from that and overwrite into the Hive table (Spark cannot read from and overwrite the same table within a single job, hence the intermediate table):

cdc_data.write.mode("overwrite").saveAsTable("temp_table")

Then you can overwrite the rows in your target table:

val dy = sqlContext.table("temp_table")
dy.write.mode("overwrite").insertInto("senty_audit.temptable")
						
					