Member since: 12-13-2016
Posts: 72
Kudos Received: 7
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4489 | 12-27-2017 05:06 AM
11-01-2020
06:47 AM
No @Shelton, it's not Kerberized. It's a plain SASL setup: security protocol SASL_PLAINTEXT with the PLAIN mechanism. But I still get an error like the one below; I have tried a newer NiFi version as well:

2020-10-31 23:36:16,762 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-derfsdfdsf-2, groupId=derfsdfdsf] Connection to node -1 (xxxx:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.
2020-10-30 05:41:18,794 WARN [Timer-Driven Process Thread-6] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-2, groupId=devtes_grp] Connection to node -1 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
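To rule NiFi out, it may help to test the broker directly with the console consumer and an equivalent client config; a minimal sketch, where the hostname, topic, and credentials are placeholders:

cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="myuser" password="mypassword";
EOF
# if this also fails with "terminated during authentication", the problem
# is on the broker/listener side rather than in the NiFi processor
kafka-console-consumer.sh --bootstrap-server xxxx:9092 --topic test-topic --consumer.config client.properties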
10-30-2020
02:01 AM
Hi, I'm trying to consume from a Kafka broker in NiFi using ConsumeKafkaRecord_2_0. Kafka is configured for SASL_SSL with the PLAIN mechanism, and the data needs to be consumed over that. While connecting I hit the stack trace below; can you please help me with this issue? Processor configuration:

2020-10-30 08:52:38,470 WARN [Timer-Driven Process Thread-8] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-20, groupId=devtest_grp11111] Connection to node -1 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
2020-10-30 08:52:39,427 WARN [Timer-Driven Process Thread-8] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-20, groupId=devtest_grp11111] Connection to node -1 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
(the same warning repeats roughly once per second through 2020-10-30 08:52:48,536)
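Given reason (2) in the fuller warning message, one quick check is whether the listener actually speaks TLS: a SASL_SSL client pointed at a PLAINTEXT or SASL_PLAINTEXT listener produces exactly this "terminated during authentication" pattern. A sketch, with the broker host as a placeholder:

openssl s_client -connect broker1.example.com:9092 </dev/null
# a TLS listener prints a certificate chain; if this hangs or errors out,
# the listener is probably plaintext and SASL_SSL on the client is the mismatch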
Labels:
- Apache Kafka
- Apache NiFi
10-29-2020
11:27 PM
Hi, I'm also facing the same issue. Can you please help me resolve it?
09-07-2020
06:37 AM
Hi, I'm referring to the article below: https://community.cloudera.com/t5/Community-Articles/Create-Dynamic-Partitions-based-on-FlowFile-Content-Convert/ta-p/248367 I'm building a NiFi pipeline for real-time streaming data (for example from Kafka) that lands in partitioned HDFS locations. It can end up producing many small files, and querying them causes a performance lag. Can you please suggest approaches to resolve the small-files issue within NiFi itself, with the ORC file format?
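One option that pairs with NiFi-side merging: for ORC tables, Hive can compact small files in place. A sketch, assuming a partitioned ORC table (the table and partition names are examples only):

# merge the small ORC files within one partition into larger ones
hive -e "ALTER TABLE events PARTITION (dt='2020-09-07') CONCATENATE;"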
06-22-2020
01:21 AM
Hi @hegdemahendra, thanks for the reply. It depends on the number of incoming flow files: the first time a single file may arrive, the next time it may be two or more.
06-20-2020
02:16 AM
Hi, I'm currently using the MergeContent processor with Avro files. Data arrives in a streaming manner, and we want to merge each new file with the existing file. I have set Minimum Number of Entries to 2. Is there any possibility of setting the minimum number of entries dynamically (a dynamic number of input flow files)? Can you please help me out?
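For reference, one pattern that effectively makes the entry count dynamic is MergeContent's Defragment merge strategy, which merges each group once all fragment.count pieces have arrived. The attribute values below are a sketch of the idea, not a working flow; only the fragment.* names are the standard attributes Defragment expects:

UpdateAttribute (upstream, per batch):
  fragment.identifier = ${batch.id}        # example grouping key
  fragment.index      = ${batch.position}  # example: position within the batch
  fragment.count      = ${batch.size}      # example: number of files in this batch
MergeContent:
  Merge Strategy = Defragment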
Labels:
- Apache Kafka
- Apache NiFi
05-03-2020
11:51 PM
I want to split and transfer JSON data in NiFi. My JSON structure looks like the one below. I want to split it by the id1 and id2 arrays and route each to its own process group (say process groups a and b). I tried EvaluateJsonPath with $.id1 and $.id2 but didn't get an exact solution. Can you please help me out with this issue?

{
  "id1": [{ "u_name": "aa" }, { "addr": "bb" }],
  "id2": [{ "u_name": "aa" }, { "addr": "bb" }]
}
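For illustration, here is what each path should yield, checked with jq on the command line (the input file name is an example); both are valid JsonPath expressions for this document, so the routing problem is downstream of the extraction:

jq '.id1' input.json   # -> [{"u_name":"aa"},{"addr":"bb"}]
jq '.id2' input.json   # -> [{"u_name":"aa"},{"addr":"bb"}]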
Labels:
- Apache Kafka
- Apache NiFi
03-04-2020
08:30 PM
@MattWho Thanks for the reply. Sure, I've attached my existing MergeContent processor configuration; this is a single-node EC2 instance with 8 GB of RAM. Can you please clarify: if the flow file volume grows to even TBs of data, can I keep the same approach with multiple MergeContent processors, or is it better to go multi-node or increase the single node's maximum available memory? JVM configuration:

# JVM memory settings
java.arg.2=-Xms2048m
java.arg.3=-Xmx2048m
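For reference, NiFi's heap is raised in conf/bootstrap.conf; the values below are examples only, not a sizing recommendation (leave headroom for the OS and the flowfile/content/provenance repositories):

# example heap settings in conf/bootstrap.conf on a larger instance
java.arg.2=-Xms4g
java.arg.3=-Xmx4g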
03-04-2020
01:09 AM
Hi, I'm currently using the MergeContent processor to merge Avro files from two sources, a ConsumeKafka processor and FetchHDFS. Yesterday, while merging roughly 680 MB into one Avro file, the processor dropped the file and joined it with new files, and I wasn't able to recover that data either, because the content repository archive is limited. Can you please advise, for this use case, how large the merge can safely be, and whether any settings need to change in nifi.properties?
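For reference, the content repository archive limits mentioned above live in nifi.properties; a sketch with example values, to be tuned to your disk size:

nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%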
Labels:
- Apache Kafka
- Apache NiFi
- HDFS
02-04-2020
04:47 AM
Hi, we currently run a single-node NiFi server on an AWS EC2 instance and are planning to move to a multi-node cluster architecture. We are in the development and testing stage. We need an active/passive-style setup, so NiFi keeps working whenever one server goes down and another picks up. Can anyone help us move forward with this scenario, or suggest a reference architecture?
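Worth noting: a NiFi cluster is zero-master rather than active/passive — every node runs the flow, and the cluster coordinator role fails over automatically when a node goes down, which gives the failover behavior described. A sketch of the per-node nifi.properties entries (hostnames and ports are placeholders; this is not a complete cluster guide):

nifi.cluster.is.node=true
nifi.cluster.node.address=nifi-node1.example.com
nifi.cluster.node.protocol.port=11443
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181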
Tags:
- NiFi
Labels:
- Apache NiFi
12-15-2019
09:26 PM
Sure, thanks @MattWho. It works!
12-12-2019
09:27 PM
Hi, we currently use the MergeContent processor to merge Kafka messages with Minimum Number of Entries = 500: once 500 messages arrive, they are merged into a single file. That use case works fine. But at the end of the day (23:59), any messages still pending in the MergeContent queue need to be flushed automatically before the new date starts. Kindly help me out with this kind of use case. Workflow: kafka_consumer -> merge_content_processor -> puthdfs @mburgess
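For reference, MergeContent's Max Bin Age property is the usual way to flush a partially filled bin so nothing waits in the queue indefinitely; a sketch of the relevant properties, with example values:

Minimum Number of Entries = 500
Max Bin Age = 10 min   # forces the bin out even if fewer than 500 messages arrived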
Labels:
- Apache Kafka
- Apache NiFi
09-04-2018
03:18 PM
Hi, we process real-time data pushed every minute, and I want to pull streaming data from Elasticsearch every second. Can you give me a suggestion on how to collect streaming data from Elasticsearch? After collecting the data, I need to apply machine learning to it with Python. Can you please help?
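To illustrate one polling approach against Elasticsearch's _search API (the index name and timestamp field are examples), which a scheduled NiFi InvokeHTTP or a Python script could run once per second before handing results to the ML step:

curl -s 'http://localhost:9200/events/_search' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"range":{"@timestamp":{"gte":"now-1s"}}}}'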
02-14-2018
05:07 AM
Sorry, I wasn't able to get the UpdateAttribute value onto the PutHDFS file. I extracted the value as you mentioned here: https://community.hortonworks.com/questions/170847/how-to-extract-query-param-from-nifi.html Please advise! Thanks.
02-13-2018
07:53 AM
Hi, I have spent days on this particular issue. I extract a value with a regex, and I can see the data in the data provenance flow file. My aim: after the ExtractText processor, append the extracted value to the filename. Normally the filename property is set with UpdateAttribute. Please suggest how I can append the value to the PutHDFS filename. It would be very helpful! Thanks.
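For reference, a minimal sketch of the UpdateAttribute step placed between ExtractText and PutHDFS; the attribute name extracted.value is a placeholder for whatever the ExtractText property produced:

UpdateAttribute property:
  filename = ${filename}_${extracted.value}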
Labels:
- Apache NiFi
02-06-2018
11:15 AM
1 Kudo
Hi, I have automated GET requests based on user query params in NiFi using InvokeHTTP. Here are examples: http://aaa.com/q="bigdata"&api_key="" http://aaa.com/q="apple"&api_key="" I split the text line by line and use InvokeHTTP to fetch each response in JSON format. My question: before the InvokeHTTP processor, I want to extract those query param values (the query values are static), so I know which query param produced which response. Please give me some suggestions. It would be very helpful. Thanks.
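For illustration, an ExtractText dynamic property along these lines should capture the q value from each line before InvokeHTTP; the property name and regex are a sketch matched to the URL format above:

ExtractText dynamic property:
  query.param = q="([^"]*)"
# capture group 1 should land in the query.param / query.param.1 attribute,
# which then travels with the flow file through InvokeHTTP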
Labels:
- Apache NiFi
01-24-2018
04:17 PM
@Matt Clarke I don't know which method to use to fetch Google results in either JSON or XML format. I'm asking whether there is any processor available to fetch Google results based on users' keywords, like the Twitter processor.
01-24-2018
09:55 AM
Hi, I'm working with NiFi on HDP 2.5. There is a new requirement: a user enters a query/keyword, as on Google, and we fetch the results in either JSON or XML format. I need to automate this process, but I haven't been able to automate fetching Google results using NiFi. Is it possible, or is there any way to fetch results, like web scraping, with NiFi instead of custom programming? Thanks.
Labels:
- Apache NiFi
12-27-2017
05:06 AM
I think it's a permission-denied issue. Try granting read/write permission on the batch file, either from the command line or the GUI. The command below should help:

icacls "C:\Program Files (x86)\Program File" /grant Everyone:M

Point it at your NiFi file location in the same way. I hope it helps!
09-03-2017
07:48 AM
Each node in our cluster has 16 GB of memory, and I have also set the Java heap size from the CLI, but I still can't resolve it:

set hive.tez.container.size=4096;
set hive.tez.java.opts=-Xmx3280m;
set tez.runtime.io.sort.mb=1640;
set tez.runtime.unordered.output.buffer.size-mb=410;

Mapper 2 still fails to initialize.
09-03-2017
06:04 AM
Hi, I'm using HDP 2.5 on our server with 2 nodes. Queries normally run successfully in Hive. Suddenly, while joining the source table to add columns to my new table with the query below, I get a vertex failed error. Please find my log in hive-error.txt from running this query in Hive View. Please tell me how to resolve this issue.

create table New_table
as
select distinct
  ab.id,
  ab.first_name,
  ab.middle_name,
  ab.last_name,
  b.Address,
  b.City_Name,
  b.State_Name
from Temp_table ab join Source_table b
  on (ab.id = b.id);
Labels:
- Apache Hadoop
- Apache Hive
- Apache Tez
07-19-2017
09:46 AM
Hi, my cluster has 2 nodes, each with 16 GB RAM and 2 CPU cores, and YARN memory is configured at 8 GB as recommended. I applied the Tez configuration from the link below: https://community.hortonworks.com/articles/22419/hive-on-tez-performance-tuning-determining-reducer.html I don't know how to tune it for my hardware configuration. I'm processing around 6 million records and it takes a long time. Please tell me how to configure it for faster execution and better performance. Thanks in advance! Regards, Varun
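For reference, the reducer-sizing knobs from that article come down to a few settings; the values below are a starting point to measure against on a 2 x 16 GB cluster, not tuned numbers:

set hive.tez.container.size=4096;
set hive.tez.java.opts=-Xmx3276m;
-- roughly 256 MB of input per reducer
set hive.exec.reducers.bytes.per.reducer=268435456;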
Labels:
- Apache Hive
- Apache Tez
- Apache YARN
05-17-2017
05:36 AM
Hi, I have a complete cluster setup (master node and data nodes) installed via the Ambari repo. After the installation we suddenly got a disk usage alert on the master node: the root volume is at 100%. Output of df -h:

[root@master001 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/vg_master001-lv_root   50G   48G     0 100% /
tmpfs                             7.8G     0  7.8G   0% /dev/shm
/dev/sda1                         485M   32M  428M   7% /boot
/dev/mapper/vg_master001-lv_home  402G   13G  368G   4% /home

/home is configured with 402G. Is it possible to reduce space from /home and give it to the root volume without data loss? Because of this issue PostgreSQL, ambari-server, HDFS, and Hive won't start. Please tell me how to resolve this.
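Since both logical volumes are in the same volume group, space can be moved from lv_home to the full root volume. A rough sketch only — take a backup first, an ext4 shrink must be done with /home unmounted, and the sizes below are examples:

umount /home
e2fsck -f /dev/mapper/vg_master001-lv_home
resize2fs /dev/mapper/vg_master001-lv_home 100G          # shrink the filesystem first
lvreduce -L 100G /dev/mapper/vg_master001-lv_home        # then shrink the LV to match
lvextend -l +100%FREE /dev/mapper/vg_master001-lv_root   # give the freed space to root
resize2fs /dev/mapper/vg_master001-lv_root               # grow the root filesystem online
mount /home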
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
04-25-2017
03:30 PM
1 Kudo
Thanks all for your responses. I reassigned ownership once again and it works!

hdfs dfs -chown -R admin:hadoop /user/admin
04-21-2017
09:14 AM
Hi, when I try to create a database in Hive View, I get the log below in the Hive notification box. I have already created the user and granted permissions to user/admin per that documentation, and I also granted permissions for hdfs and hive, but I still can't resolve the issue. I think it stopped working after enabling Ranger. Please tell me how to resolve this.
E090 HDFS020 Could not write file /user/admin/hive/jobs/hive-job-82-2017-04-21_10-02/query.hql [HdfsApiException]
org.apache.ambari.view.utils.hdfs.HdfsApiException: HDFS020 Could not write file /user/admin/hive/jobs/hive-job-82-2017-04-21_10-02/query.hql
org.apache.ambari.view.utils.hdfs.HdfsApiException: HDFS020 Could not write file /user/admin/hive/jobs/hive-job-82-2017-04-21_10-02/query.hql
at org.apache.ambari.view.utils.hdfs.HdfsUtil.putStringToFile(HdfsUtil.java:51)
at org.apache.ambari.view.hive2.resources.jobs.viewJobs.JobControllerImpl.setupQueryFile(JobControllerImpl.java:250)
at org.apache.ambari.view.hive2.resources.jobs.viewJobs.JobControllerImpl.setupQueryFileIfNotPresent(JobControllerImpl.java:178)
at org.apache.ambari.view.hive2.resources.jobs.viewJobs.JobControllerImpl.afterCreation(JobControllerImpl.java:164)
at org.apache.ambari.view.hive2.resources.jobs.viewJobs.JobResourceManager.create(JobResourceManager.java:56)
at org.apache.ambari.view.hive2.resources.jobs.JobService.create(JobService.java:522)
at sun.reflect.GeneratedMethodAccessor391.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:257)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.jwt.JwtAuthenticationFilter.doFilter(JwtAuthenticationFilter.java:96)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.apache.ambari.server.security.authentication.AmbariAuthenticationFilter.doFilter(AmbariAuthenticationFilter.java:88)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:91)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.view.AmbariViewsMDCLoggingFilter.doFilter(AmbariViewsMDCLoggingFilter.java:54)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.view.ViewThrottleFilter.doFilter(ViewThrottleFilter.java:161)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:139)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:984)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1045)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:236)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=admin, access=WRITE, inode="/user/admin/hive/jobs/hive-job-82-2017-04-21_10-02/query.hql":root:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2598)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2533)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2417)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:729)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:509)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:487)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:113)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:936)
at org.apache.ambari.view.utils.hdfs.HdfsUtil.putStringToFile(HdfsUtil.java:48)
... 101 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=admin, access=WRITE, inode="/user/admin/hive/jobs/hive-job-82-2017-04-21_10-02/query.hql":root:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2598)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2533)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2417)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:729)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:118)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:477)
... 104 more
Tags:
- Hadoop Core
- hdfs-permissions
- hive-views
- Ranger
- Upgrade to HDP 2.5.3 : ConcurrentModificationException When Executing Insert Overwrite : Hive
Labels:
- Apache Hadoop
- Apache Hive
- Apache Ranger
02-22-2017
09:59 AM
2 Kudos
I'm using HDP 2.5. The Zeppelin service runs successfully, and I created a Zeppelin notebook with sample data like the following:

id | name | specialisation | city | state
---|---|---|---|---
001 | xxx | Android | Bronx | NY
002 | yyy | ROR | Rome | NY
003 | zzz | IOS | Bronx | NY
004 | ppp | Bigdata | Dallas | TX
005 | qqq | Android | Dallas | TX

A pie chart displays the list of states using this query:

%sql
select state, count(1) AS states from sample_data where state != '' GROUP BY state ORDER BY states DESC

A SQL table displays the columns using this query:

%sql
select id, name, specialisation, city, state from sample_data

My question: when I select a particular state in the pie chart, for example TX, I want the SQL table to be automatically filtered to the selected portion of the pie chart. Please tell me how to build this feature in Zeppelin.
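Zeppelin doesn't wire pie-chart clicks to other paragraphs, but a dynamic form comes close: a select form re-runs the table paragraph for the chosen state. A sketch using Zeppelin's ${name=default,option|option} form syntax (options hard-coded here for the sample data):

%sql
select id, name, specialisation, city, state
from sample_data
where state = "${state=TX,TX|NY}"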
Tags:
- Data Science & Advanced Analytics
- hdp-2.5.0
- Hive
- spark-sql
- Upgrade to HDP 2.5.3 : ConcurrentModificationException When Executing Insert Overwrite : Hive
- zeppelin
02-16-2017
04:28 AM
It works!!!!
02-15-2017
04:44 PM
1 Kudo
Hi, I'm using HDP 2.5 and am a beginner with Zeppelin notebooks. I have worked with Hue, Banana, and Kibana dashboards, which offer dynamic data changes in every widget (pie, table, map, bar chart). My question is how to implement features like these in a Zeppelin notebook. For example, clicking one portion of a pie chart would affect the entire notebook. Please tell me how to use Zeppelin notebooks as effectively as those dashboards, and point me to any reference links.
02-15-2017
04:35 PM
Thanks for the response, I will try this. I have a sed-style regex, 's/","/\|/g; s/"//g' — how do I express that in the ReplaceText processor?
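For reference, those two sed substitutions map onto ReplaceText as two passes (two processors chained, or two runs); a sketch of the property values, with Replacement Strategy = Regex Replace in both:

ReplaceText #1 (turn "," field delimiters into |):
  Search Value      = ","
  Replacement Value = |
ReplaceText #2 (strip the remaining quotes):
  Search Value      = "
  Replacement Value =            # empty string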