Member since: 02-22-2017
Posts: 33
Kudos Received: 6
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 638 | 10-28-2016 09:38 AM
07-17-2018
06:14 AM
Hello, community! When we use the livy2 interpreter, we have an issue with Russian Cyrillic characters: they are displayed as '?????'. We have read these two bug reports and tried to apply the fixes — for example, we installed the latest release from the 9.0 branch and from the 8.0 branch — but the result is still the same: https://issues.apache.org/jira/browse/ZEPPELIN-2641 https://issues.apache.org/jira/browse/ZEPPELIN-3099 Please help us solve the issue. Regards, Ramil
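For context, both JIRAs above revolve around the JVM default encoding. One illustrative (unverified for this setup) way to force UTF-8 is through the livy2 interpreter's livy.spark.* passthrough properties, which are forwarded to the Spark session:

```properties
# Hypothetical sketch: force UTF-8 on driver and executors via Zeppelin's
# livy2 interpreter settings (livy.spark.* values are forwarded to Spark).
livy.spark.driver.extraJavaOptions=-Dfile.encoding=UTF-8
livy.spark.executor.extraJavaOptions=-Dfile.encoding=UTF-8
```

Whether this resolves the '?????' rendering depends on where the mojibake is introduced, so treat it as an experiment rather than a confirmed fix.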
04-21-2018
09:12 AM
This topic describes how to set the KuduSession RPC timeout: https://community.cloudera.com/t5/Beta-Releases-RecordService/KUDU-RPC-TIMEOUT/td-p/47289 But that concerns a KuduSession, whereas in our project we use a KuduContext. How can we change the KuduContext session timeout?
04-18-2018
07:21 AM
We have an application that reads messages from specific Kafka topics and processes them; after reading a message from a topic, it writes the offset to an HBase table. After some amount of running time the application fails (the time varies from 30 minutes to 15 hours). In the driver stderr we see the following log entries:

18/04/17 17:31:15 WARN client.AsyncProcess: #3121, the task was rejected by the pool. This is unexpected. Server is ***hostname masked***,60020,1523949367813 java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3f377224 rejected from java.util.concurrent.ThreadPoolExecutor@639d4dae[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.sendMultiAction(AsyncProcess.java:1013) at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$000(AsyncProcess.java:600) at org.apache.hadoop.hbase.client.AsyncProcess.submitMultiActions(AsyncProcess.java:449) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:429) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098)

And after some amount of time, these ERRORs:

18/04/17 17:31:15 ERROR client.AsyncProcess: Cannot get replica 0 location for {"totalColumns":1,"row":"predictor_passport_ru_number_gold","families":{"cf":[{"qualifier":"\x00\x00\x00\x00","vlen":8,"tag":[],"timestamp":9223372036854775807}]}} 18/04/17 17:31:15 ERROR spark.Utils: Error saving offsets [OffsetRange(topic: 'predictor_passport_ru_number_gold', partition: 0, range: [2536631 -> 2536718])] org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227) at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1766) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098)

In the HBase logs I see a gap in messages during that period of time; you can see this in the attached screenshot (memstoreflush.png). In addition, the full driver log is in index.zip. Please help us investigate and solve the issue.
04-18-2018
12:14 AM
We have an application that reads messages from specific Kafka topics and processes them; after reading a message from a topic, it writes the offset to an HBase table.
After some amount of running time the application fails (the time varies from 30 minutes to 15 hours). In the driver stderr we see the following log entries:
18/04/17 17:31:15 WARN client.AsyncProcess: #3121, the task was rejected by the pool. This is unexpected. Server is ***hostname masked***,60020,1523949367813 java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3f377224 rejected from java.util.concurrent.ThreadPoolExecutor@639d4dae[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.sendMultiAction(AsyncProcess.java:1013) at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$000(AsyncProcess.java:600) at org.apache.hadoop.hbase.client.AsyncProcess.submitMultiActions(AsyncProcess.java:449) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:429) at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098)
And after some amount of time, these ERRORs:
18/04/17 17:31:15 ERROR client.AsyncProcess: Cannot get replica 0 location for {"totalColumns":1,"row":"predictor_passport_ru_number_gold","families":{"cf":[{"qualifier":"\\x00\\x00\\x00\\x00","vlen":8,"tag":[],"timestamp":9223372036854775807}]}} 18/04/17 17:31:15 ERROR spark.Utils: Error saving offsets [OffsetRange(topic: 'predictor_passport_ru_number_gold', partition: 0, range: [2536631 -> 2536718])] org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247) at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227) at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1766) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240) at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190) at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1098)
In the HBase logs I see a gap in messages during that period of time:
Please help us investigate and solve the issue.
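As a side note on the failed row in the log above: the qualifier "\x00\x00\x00\x00" with vlen 8 is consistent with storing the Kafka partition number as a 4-byte big-endian qualifier and the offset as an 8-byte big-endian value. A minimal sketch of that encoding (plain Python struct, purely illustrative — not the application's actual Spark/HBase code):

```python
import struct

def encode_offset_cell(partition: int, offset: int):
    """Encode a Kafka (partition, offset) pair the way the log entry suggests:
    partition -> 4-byte big-endian qualifier, offset -> 8-byte big-endian value."""
    qualifier = struct.pack(">i", partition)  # b"\x00\x00\x00\x00" for partition 0
    value = struct.pack(">q", offset)         # 8 bytes, matching vlen:8 in the log
    return qualifier, value

# The values from the failing OffsetRange in the log
q, v = encode_offset_cell(0, 2536718)
```

This only illustrates the byte layout seen in the error; the failure itself comes from the HBase client's thread pool being terminated, not from the encoding.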
12-19-2017
03:34 AM
Hi, were you able to fix the issue? We have the same problem.
04-04-2017
03:53 PM
This event occurs only when we are using NiFi Hive Streaming. I will provide the ls -R output later.
04-04-2017
01:58 PM
Hi All, Periodically, in some ORC tables in Hive we get a duplicate "base" directory inside the partition directory /table_name/partition_date=/base/ — meaning all contents of /table_name/partition_date=/base/* are duplicated in /table_name/partition_date=/base/base/*. After that the partition becomes bad, and on this bad partition we can't run select count(*) or any other selects because an error occurs. But when we drop the duplicate "base" directory, the problem goes away. Why do we get this duplicate folder in our buckets?
04-04-2017
07:03 AM
Hi All, Periodically, in some ORC tables in Hive we get a duplicate "base" directory inside the partition directory /table_name/partition_date=/base/ — meaning all contents of /table_name/partition_date=/base/* are duplicated in /table_name/partition_date=/base/base/*. After that the partition becomes bad, and on this bad partition we can't run select count(*) or any other selects because an error occurs. But when we drop the duplicate "base" directory, the problem goes away. Why do we get this duplicate folder in our buckets?
03-31-2017
02:21 PM
Our NiFi is co-located with other Hadoop components.
These are physical servers, 24 cores per machine (templates.zip). ZooKeeper is separate, but runs on these machines. Errors on the DetectDuplicate processor are symptoms of this issue; socket timeouts too. We have 3 Process Groups on our NiFi cluster; their templates are in the attachment.
03-31-2017
12:49 PM
Our NiFi has 8 GB of heap; the NiFi version is 1.1.0.2.
03-29-2017
10:17 AM
nifi-app.zip — after running for one week, our NiFi cluster becomes very unstable. Nodes disconnect and reconnect every 5-30 minutes, and processors don't work well either. Restarting all 3 nodes solves the issue. Restarting NiFi weekly is not a good solution, but it is the only approach that works for us. An example log file from one of the nodes is in the attachment.
03-16-2017
02:50 AM
Hi, the explain output is very large, so you can download it from our share: https://drive.croc.ru/display/data/list?dataId=c43e16e0-e0af-40f1-935e-1c44e4b01f91 login: 024741 password: E804F9487956
03-15-2017
07:03 AM
Hi, I tried increasing this property 10x, but with no results. Regards, Ramil.
03-13-2017
02:09 AM
The attachments mentioned above can be found at https://drive.croc.ru/display/data/list?dataId=02745bf5-e54d-47a9-8797-15f108fc057e login: 024556 password: 0B908ECFE563
- Tags:
- Hive
03-13-2017
02:03 AM
Hi Community, We have a script — collmx_consents_snp.hql (in the attachment). In this script we use a join of the following tables: consent_service_consent_hst, consent_service_consent_subject_hst, consent_service_client_hst. The DDL of those tables is in the attachment too. All of these tables are partitioned by date. The problem is that the join of two tables does not work in production (there is consistent data in the tables, but the query returns nothing):

select * from ( SELECT consent_uid, CASE WHEN for_contract = true THEN evid_srv ELSE NULL END evid_srv, entity_type, to_date(modif_time) apply_date, id_client FROM consent_service_consent_snp ) csc join consent_service_consent_subject_snp cscs on (csc.consent_uid = cscs.consent_uid)

In the test environment everything is fine. When we add a filter on table consent_service_consent_snp by partition, the query gives us results. When we run select count(*) on these tables, we don't get any errors. In our test environment we have less data than in production, and when we add a constraint on date in the select clause everything works fine, so we think the problem may depend on the number of rows in the table. Logs of HiveServer2 and HiveMetastore are in the attachment. When the query fails, we see the following in the hiveserver2 log:

2017-03-09 20:00:30,406 INFO org.apache.hadoop.hive.ql.plan.ConditionalResolverCommonJoin: [HiveServer2-Background-Pool: Thread-7269]: Failed to resolve driver alias (threshold : 25000000, length mapping : {cscs:consent_service_consent_subject_hst=571829172, csc:consent_service_consent_snp:consent_service_consent_hst=434475747})
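For what it's worth, the "Failed to resolve driver alias" message comes from Hive's conditional map-join resolver: both join inputs in the log (≈571 MB and ≈434 MB) exceed the 25,000,000-byte threshold. One hedged experiment (not a confirmed fix for the empty result) is to disable the conditional map-join path or raise the thresholds:

```sql
-- Experiment sketch: force a plain common (shuffle) join
-- instead of letting Hive pick a conditional map-join task.
SET hive.auto.convert.join=false;

-- Or keep auto conversion but raise the small-table thresholds
-- (both sides here are far above the default 25000000 bytes;
-- 600000000 is an arbitrary illustrative value).
SET hive.auto.convert.join.noconditionaltask.size=600000000;
SET hive.mapjoin.smalltable.filesize=600000000;
```

These are standard Hive settings; whether they change the join result in this environment would need to be tested.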
02-22-2017
02:37 AM
After upgrading to CDH 5.10.0 we get the following error on the alert configuration page:

Server Error A server error has occurred. Send the following information to Cloudera. *Path: http://os-2377.homecredit.ru:7180/cmf/alerts/config* Version: Cloudera Express 5.10.0 (#85 built by jenkins on 20170120-1037 git: aa0b5cd5eceaefe2f971c13ab657020d96bb842a) java.lang.NullPointerException: at AlertData.java line 127 in com.cloudera.server.web.cmf.AlertData isParamSpecEnabled()

Stack Trace: AlertData.java line 127 in com.cloudera.server.web.cmf.AlertData isParamSpecEnabled() AlertData.java line 105 in com.cloudera.server.web.cmf.AlertData writeBooleanConfigDescription() ServiceAlertData.java line 161 in com.cloudera.server.web.cmf.ServiceAlertData writeHealthDescriptions() ServiceAlertData.java line 64 in com.cloudera.server.web.cmf.ServiceAlertData <init>() AlertController.java line 65 in com.cloudera.server.web.cmf.AlertController alertConfigView() <generated> line -1 in com.cloudera.server.web.cmf.AlertController$$FastClassByCGLIB$$46f71363 invoke() MethodProxy.java line 191 in net.sf.cglib.proxy.MethodProxy invoke() Cglib2AopProxy.java line 688 in org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation invokeJoinpoint() ReflectiveMethodInvocation.java line 150 in org.springframework.aop.framework.ReflectiveMethodInvocation proceed() MethodSecurityInterceptor.java line 61 in org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor invoke() ReflectiveMethodInvocation.java line 172 in org.springframework.aop.framework.ReflectiveMethodInvocation proceed() Cglib2AopProxy.java line 621 in org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor intercept() <generated> line -1 in com.cloudera.server.web.cmf.AlertController$$EnhancerByCGLIB$$4b583607 alertConfigView() NativeMethodAccessorImpl.java line -2 in sun.reflect.NativeMethodAccessorImpl invoke0() NativeMethodAccessorImpl.java line 62 in
sun.reflect.NativeMethodAccessorImpl invoke() DelegatingMethodAccessorImpl.java line 43 in sun.reflect.DelegatingMethodAccessorImpl invoke() Method.java line 497 in java.lang.reflect.Method invoke() HandlerMethodInvoker.java line 176 in org.springframework.web.bind.annotation.support.HandlerMethodInvoker invokeHandlerMethod() AnnotationMethodHandlerAdapter.java line 436 in org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter invokeHandlerMethod() AnnotationMethodHandlerAdapter.java line 424 in org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter handle() DispatcherServlet.java line 790 in org.springframework.web.servlet.DispatcherServlet doDispatch() DispatcherServlet.java line 719 in org.springframework.web.servlet.DispatcherServlet doService() FrameworkServlet.java line 669 in org.springframework.web.servlet.FrameworkServlet processRequest() FrameworkServlet.java line 574 in org.springframework.web.servlet.FrameworkServlet doGet() HttpServlet.java line 707 in javax.servlet.http.HttpServlet service() HttpServlet.java line 820 in javax.servlet.http.HttpServlet service() ServletHolder.java line 511 in org.mortbay.jetty.servlet.ServletHolder handle() ServletHandler.java line 1221 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() UserAgentFilter.java line 78 in org.mortbay.servlet.UserAgentFilter doFilter() GzipFilter.java line 131 in org.mortbay.servlet.GzipFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() JAMonServletFilter.java line 48 in com.jamonapi.http.JAMonServletFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() JavaMelodyFacade.java line 109 in com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() FilterChainProxy.java line 311 in 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() FilterSecurityInterceptor.java line 116 in org.springframework.security.web.access.intercept.FilterSecurityInterceptor invoke() FilterSecurityInterceptor.java line 83 in org.springframework.security.web.access.intercept.FilterSecurityInterceptor doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() ExceptionTranslationFilter.java line 113 in org.springframework.security.web.access.ExceptionTranslationFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() SessionManagementFilter.java line 101 in org.springframework.security.web.session.SessionManagementFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() AnonymousAuthenticationFilter.java line 113 in org.springframework.security.web.authentication.AnonymousAuthenticationFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() RememberMeAuthenticationFilter.java line 146 in org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() SecurityContextHolderAwareRequestFilter.java line 54 in org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() RequestCacheAwareFilter.java line 45 in org.springframework.security.web.savedrequest.RequestCacheAwareFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() AbstractAuthenticationProcessingFilter.java line 182 in 
org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() LogoutFilter.java line 105 in org.springframework.security.web.authentication.logout.LogoutFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() SecurityContextPersistenceFilter.java line 87 in org.springframework.security.web.context.SecurityContextPersistenceFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() ConcurrentSessionFilter.java line 125 in org.springframework.security.web.session.ConcurrentSessionFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() FilterChainProxy.java line 173 in org.springframework.security.web.FilterChainProxy doFilter() DelegatingFilterProxy.java line 237 in org.springframework.web.filter.DelegatingFilterProxy invokeDelegate() DelegatingFilterProxy.java line 167 in org.springframework.web.filter.DelegatingFilterProxy doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() CharacterEncodingFilter.java line 88 in org.springframework.web.filter.CharacterEncodingFilter doFilterInternal() OncePerRequestFilter.java line 76 in org.springframework.web.filter.OncePerRequestFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() ServletHandler.java line 399 in org.mortbay.jetty.servlet.ServletHandler handle() SecurityHandler.java line 216 in org.mortbay.jetty.security.SecurityHandler handle() SessionHandler.java line 182 in org.mortbay.jetty.servlet.SessionHandler handle() SecurityHandler.java line 216 in org.mortbay.jetty.security.SecurityHandler handle() ContextHandler.java line 767 in 
org.mortbay.jetty.handler.ContextHandler handle() WebAppContext.java line 450 in org.mortbay.jetty.webapp.WebAppContext handle() HandlerWrapper.java line 152 in org.mortbay.jetty.handler.HandlerWrapper handle() StatisticsHandler.java line 53 in org.mortbay.jetty.handler.StatisticsHandler handle() HandlerWrapper.java line 152 in org.mortbay.jetty.handler.HandlerWrapper handle() Server.java line 326 in org.mortbay.jetty.Server handle() HttpConnection.java line 542 in org.mortbay.jetty.HttpConnection handleRequest() HttpConnection.java line 928 in org.mortbay.jetty.HttpConnection$RequestHandler headerComplete() HttpParser.java line 549 in org.mortbay.jetty.HttpParser parseNext() HttpParser.java line 212 in org.mortbay.jetty.HttpParser parseAvailable() HttpConnection.java line 404 in org.mortbay.jetty.HttpConnection handle() SelectChannelEndPoint.java line 410 in org.mortbay.io.nio.SelectChannelEndPoint run() QueuedThreadPool.java line 582 in org.mortbay.thread.QueuedThreadPool$PoolThread run()
12-22-2016
01:47 PM
We used the oozie command-line utility.
12-02-2016
04:12 PM
1 Kudo
Unfortunately we still don't have a solution. Our version of HDP is the same.
10-28-2016
09:38 AM
The issue was solved by myself. The solution was: 1) under the folder in which workflow.xml is located, create a folder lib and put there all the Hive jar files from sharedlibDir(/user/oozie/share/lib/lib_20160928171540)/hive; 2) create a hive-site.xml with the following contents: <configuration>
<property>
<name>ambari.hive.db.schema.name</name>
<value>hive</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://xxxxx:9083</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>xxxx:2181,yyyyy:2181,zzzzzz:2181</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/smartdata/hive/</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.postgresql.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:postgresql://xxxxx:5432/hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
</configuration>
and put it on HDFS, for example at /tmp/hive-site.xml; 3) add the following line to workflow.xml: <file>/tmp/hive-site.xml</file> This solved my issue.
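The hive-site.xml above is just name/value property pairs. As a small illustrative sketch (not part of the original workflow), such a file can be generated with the Python standard library, using the same placeholder host names as the post:

```python
import xml.etree.ElementTree as ET

def build_hive_site(props: dict) -> str:
    """Render a Hadoop/Hive-style configuration XML from a {name: value} dict."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

# Placeholder host names, as masked in the post above.
xml_text = build_hive_site({
    "hive.metastore.uris": "thrift://xxxxx:9083",
    "hive.metastore.warehouse.dir": "/smartdata/hive/",
})
```

The generated string can then be written to a local file and pushed to HDFS, as described in step 2.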
10-27-2016
09:25 AM
Hello, thanks for the advice, but everything is fine with the shared libraries:

$ oozie admin -oozie http://localhost:11000/oozie -shareliblist
[Available ShareLib]
hive
distcp
mapreduce-streaming
spark
oozie
hcatalog
hive2
sqoop
pig
spark_orig

$ oozie admin -oozie http://localhost:11000/oozie -sharelibupdate
[ShareLib update status]
sharelibDirOld = hdfs://os-2471.homecredit.ru:8020/user/oozie/share/lib/lib_20160928171540
host = http://localhost:11000/oozie
sharelibDirNew = hdfs://os-2471.homecredit.ru:8020/user/oozie/share/lib/lib_20160928171540
status = Successful

$ oozie admin -oozie http://localhost:11000/oozie -shareliblist
[Available ShareLib]
hive
distcp
mapreduce-streaming
spark
oozie
hcatalog
hive2
sqoop
pig
spark_orig

On the Resource Manager UI everything looks fine; see the attached logs.
10-26-2016
12:47 PM
resource-manager-ui.txt — Hello, our HDP version is 2.5. When we try to run a sqoop action (to load data from Oracle to Hive) from Oozie, we get the following error in /var/log/oozie/oozie-error.log:

JOB[0000004-161024200820785-oozie-oozi-W] ACTION[0000004-161024200820785-oozie-oozi-W@sqoop] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]

And there is nothing more useful for diagnostics. The job.properties file is listed below:

# properties
nameNode = hdfs://xxxxx:8020
resourceManager = xxxx:8050
queueName=default
oozie.use.system.libpath=true
oozie.wf.application.path = hdfs://xxxxxx:8020/smartdata/oozie/hive_test.xml
mapreduce.framework.name = yarn

When we run this job from the command line with "sqoop ..." as the command, everything works fine. Please, someone tell me how to solve or troubleshoot this.
10-18-2016
02:23 PM
2 Kudos
After enabling HA on our HDP cluster, we get the following error from Oozie when configuring a workflow from the Ambari Workflow Manager:

2016-10-18 17:17:29,956 WARN V1JobsServlet:523 - SERVER[hdp-name1.lab.croc.ru] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] URL[POST http://hdp-name1.lab.croc.ru:11000/oozie/v2/jobs] user error, java.net.UnknownHostException: null java.lang.IllegalArgumentException: java.net.UnknownHostException: null at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411) at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311) at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:688) at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629) at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386) at org.apache.oozie.service.HadoopAccessorService$4.run(HadoopAccessorService.java:577) at org.apache.oozie.service.HadoopAccessorService$4.run(HadoopAccessorService.java:575) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.oozie.service.HadoopAccessorService.createFileSystem(HadoopAccessorService.java:575) at org.apache.oozie.service.AuthorizationService.authorizeForApp(AuthorizationService.java:374) at org.apache.oozie.servlet.BaseJobServlet.checkAuthorizationForApp(BaseJobServlet.java:260) at org.apache.oozie.servlet.BaseJobsServlet.doPost(BaseJobsServlet.java:99) at
javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at org.apache.oozie.servlet.JsonRestServlet.service(JsonRestServlet.java:304) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.oozie.servlet.AuthFilter$2.doFilter(AuthFilter.java:171) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:614) at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:573) at org.apache.oozie.servlet.AuthFilter.doFilter(AuthFilter.java:176) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.oozie.servlet.HostnameFilter.doFilter(HostnameFilter.java:86) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.oozie.servlet.OozieXFrameOptionsFilter.doFilter(OozieXFrameOptionsFilter.java:48) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.oozie.servlet.OozieCSRFFilter.doFilter(OozieCSRFFilter.java:62) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:620) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:745) Caused by: java.net.UnknownHostException: null ... 50 more

Which configuration changes are we missing?
10-17-2016
08:05 AM
Is there any workaround for this, or some hotfix?
10-14-2016
06:58 PM
1 Kudo
After we enabled HDFS HA, the PutHiveStreaming processor in our NiFi stopped working and generates the following errors:

2016-10-14 21:50:53,840 WARN [Timer-Driven Process Thread-6] o.a.n.processors.hive.PutHiveStreaming PutHiveStreaming[id=01571000-c4de-1bfd-0f09-5c439230e84e] Processor Administratively Yielded for 1 sec due to processing failure 2016-10-14 21:50:53,840 WARN [Timer-Driven Process Thread-6] o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding PutHiveStreaming[id=01571000-c4de-1bfd-0f09-5c439230e84e] due to uncaught Exception: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdpCROC 2016-10-14 21:50:53,847 WARN [Timer-Driven Process Thread-6] o.a.n.c.t.ContinuallyRunProcessorTask java.lang.IllegalArgumentException: java.net.UnknownHostException: hdpCROC at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411) ~[na:na] at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311) ~[na:na] at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) ~[na:na] at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:688) ~[na:na] at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629) ~[na:na] at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159) ~[na:na] at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761) ~[na:na] at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99) ~[na:na] at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795) ~[na:na] at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777) ~[na:na] at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386) ~[na:na] at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295) ~[na:na] at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.<init>(OrcRecordUpdater.java:234) ~[na:na] at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRecordUpdater(OrcOutputFormat.java:289) ~[na:na] at
org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdater(AbstractRecordWriter.java:253) ~[na:na] at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdaters(AbstractRecordWriter.java:245) ~[na:na] at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.newBatch(AbstractRecordWriter.java:189) ~[na:na] at org.apache.hive.hcatalog.streaming.StrictJsonWriter.newBatch(StrictJsonWriter.java:41) ~[na:na] at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:607) ~[na:na] at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:555) ~[na:na] at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:441) ~[na:na] at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:421) ~[na:na] at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$7(HiveWriter.java:250) ~[na:na] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] Caused by: java.net.UnknownHostException: hdpCROC

hdpCROC is our HDP cluster name and the value of the dfs.nameservices property. All files such as hive-site.xml, hdfs-site.xml, and core-site.xml are up to date. What can cause this issue?
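One observation: the trace dies in createNonHAProxy, i.e. the client is resolving hdpCROC as a plain hostname rather than as an HA nameservice. For comparison, these are the standard HDFS HA client properties the hdfs-site.xml visible to that code path would need for the nameservice — a sketch with placeholder NameNode hosts, not a verified configuration for this cluster:

```xml
<property>
  <name>dfs.nameservices</name>
  <value>hdpCROC</value>
</property>
<property>
  <name>dfs.ha.namenodes.hdpCROC</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdpCROC.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdpCROC.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hdpCROC</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If any of these are missing from the configuration files the processor actually loads, the HDFS client falls back to treating the nameservice name as a hostname, which matches the UnknownHostException above.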
10-13-2016
08:43 AM
Timothy, can you please explain why I shouldn't use the HDP ZooKeeper? What kind of problems will I get if I use the HDP ZooKeeper together with other modules such as Storm, Kafka, and Ranger?
10-12-2016
02:16 PM
1 Kudo
Hello, We want to create a NiFi cluster with HA but have only two nodes, and because we have 3 ZooKeepers installed on the HDP cluster, we want to use them in the NiFi configuration. Is it possible to configure the cluster as I described above? Will we have split-brain issues in this configuration? Regards, Ramil.
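For reference, this is the kind of nifi.properties fragment such a layout would use — a sketch with placeholder host names, based on the standard NiFi 1.x cluster/ZooKeeper properties, not a validated configuration:

```properties
# Use the existing external (HDP) ZooKeeper ensemble instead of the embedded one
nifi.state.management.embedded.zookeeper.start=false
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# Run each of the two NiFi nodes as a cluster node
nifi.cluster.is.node=true
nifi.cluster.node.address=nifi-node-1.example.com
```

NiFi delegates cluster coordination and primary-node election to ZooKeeper, so the quorum behavior here comes from the three-node ZooKeeper ensemble rather than from the two NiFi nodes themselves.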
10-07-2016
04:11 PM
Thank you for the quick reply. Can you please tell me where I can get the ConsumeKafka_0_10 NiFi processor?
10-07-2016
02:49 PM
1 Kudo
When we try to use GetKafka, we see the following error:

2016-10-07 17:37:39,469 INFO [pool-24-thread-1-EventThread] org.I0Itec.zkclient.ZkClient zookeeper state changed (Expired) 2016-10-07 17:37:39,470 INFO [ZkClient-EventThread-465-hdp-name1.lab.croc.ru:2181] k.consumer.ZookeeperConsumerConnector [95446e62-0157-1000-7951-fd4244e9aec2_###############-1475841346967-f0d261ce], exception during rebalance kafka.common.KafkaException: Failed to parse the broker info from zookeeper: {"jmx_port":-1,"timestamp":"1475501559373","endpoints":["PLAINTEXT://############:6667"],"host":"#############","version":3,"port":6667}

Next we see:

Caused by: kafka.common.KafkaException: Unknown version of broker registration. Only versions 1 and 2 are supported.{"jmx_port":-1,"timestamp":"1475501559373","endpoints":["PLAINTEXT://#########:6667"],"host":"##########","version":3,"port":6667}

Our HDP version is 2.5 and our HDF version is 2.0.