Member since: 06-13-2016
Posts: 76
Kudos Received: 13
Solutions: 6

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 994 | 08-09-2017 06:54 PM |
|  | 1364 | 05-03-2017 02:25 PM |
|  | 1697 | 03-28-2017 01:56 PM |
|  | 2080 | 09-26-2016 09:05 PM |
|  | 1553 | 09-22-2016 03:49 AM |
10-02-2020
04:05 PM
I am not using Kerberos. I am using username/password as the method for auth, but I am getting the same exact error:

[Cloudera][ThriftExtension] (8) Authentication/authorization error occurred. Error details: Bad status with no error message: Unauthorized/Forbidden: Status code : 401

The gateway logs show:

20/10/02 18:04:34 ||2bf99023-8397-4c4c-86b1-43f5a0ab5a39|audit|HIVE||||access|uri|/gateway/default/hive|unavailable|Request method: POST
20/10/02 18:04:35 ||2bf99023-8397-4c4c-86b1-43f5a0ab5a39|audit|HIVE||||access|uri|/gateway/default/hive|success|Response status: 401

Any ideas?
08-10-2020
08:32 PM
1 Kudo
I had a similar issue today, where the NiFi server was up but the UI kept spinning. I cleared the browser cache and restarted the server and the browser, and the UI came back fine. Just mentioning it for others who may have a similar issue.
09-13-2017
05:19 AM
1 Kudo
@mliem This looks like an authorization issue. We need to add ACLs for user alice.
08-09-2017
05:29 AM
My first suggestion would be to check whether another process is already using port 8080. You can check with: netstat -na | grep LISTEN | grep 8080
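If lsof is available on the host, this is a hedged alternative that also shows which process owns the port:

# List the process currently bound to port 8080 (assumes lsof is installed)
lsof -i :8080 -sTCP:LISTEN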
06-30-2017
05:21 AM
1 Kudo
@mliem The basic search in Atlas can be used to search using a parent tag: all entities tagged with the parent tag or any of its sub-tags will be returned. The following API can be used:

http://localhost:21000/api/atlas/v2/search/basic?limit=25&excludeDeletedEntities=true&typeName=hive_table&classification=PII

Hope this helps.
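A minimal sketch of calling that endpoint from the command line; admin:admin and localhost:21000 are assumptions, so substitute your own Atlas host and credentials:

# Basic search for hive_table entities carrying the PII classification (or one of its sub-tags)
curl -u admin:admin "http://localhost:21000/api/atlas/v2/search/basic?limit=25&excludeDeletedEntities=true&typeName=hive_table&classification=PII"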
06-30-2017
05:18 AM
1 Kudo
@mliem Did you copy the flow.xml.gz from your old installation to this one after you wiped everything 2.x related and installed 3.x? All the sensitive properties inside the flow.xml.gz file are encrypted using the sensitive properties key defined in the nifi.properties file (if blank, NiFi uses an internal default value). If you move your flow.xml.gz file to another NiFi, the sensitive properties key used must be the same, or NiFi will fail to start because it cannot decrypt the sensitive properties in the file.
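As a quick check, you can compare the key on both installs; the paths below are placeholders for your old and new NiFi conf directories:

# The encrypted values in flow.xml.gz can only be decrypted if this key matches on both sides
grep 'nifi.sensitive.props.key' /opt/old-nifi/conf/nifi.properties
grep 'nifi.sensitive.props.key' /opt/new-nifi/conf/nifi.properties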
05-04-2017
07:28 PM
Glad you were able to figure it out @mliem
03-28-2017
05:29 PM
Thanks, Matt, for your guidance.
02-10-2017
08:25 PM
@Matt Clarke Thanks Matt, very useful info. It was about 20 tar files, which turned into almost 1000 individual files that I was looking to zip back into 20 files. It looks like the major problem was the bin count: it was set to 1, and once I increased it, there was no problem with the multiple tar files that were queued up. I only had 1 concurrent task, so I was surprised that even with 1 bin it would try to create a new bin. The selected prioritizer was the default first-in-first-out, so if it is untarring one tar file at a time it should finish a whole bin before moving on to the next one.
10-04-2017
07:37 PM
Just commenting on this for future visitors to this post: only files that end in ".jar" are picked up by the driver class loader. Here's the relevant source code from DBCPConnectionPool.java:

protected ClassLoader getDriverClassLoader(String locationString, String drvName) throws InitializationException {
    if (locationString != null && locationString.length() > 0) {
        try {
            // Split and trim the entries
            final ClassLoader classLoader = ClassLoaderUtils.getCustomClassLoader(
                    locationString,
                    this.getClass().getClassLoader(),
                    (dir, name) -> name != null && name.endsWith(".jar"));
            // ... (remainder of the method omitted in the original post)
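As a practical takeaway, make sure the driver location you configure on the controller service actually contains files ending in .jar; the path below is a placeholder:

# Anything not ending in .jar is ignored by the filter above
ls -l /path/to/jdbc/drivers/*.jar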
11-25-2016
05:43 AM
@Sunile Manjee I followed the above and am getting: An error occurred while establishing the connection:
Long Message:
Remote driver error: RuntimeException: java.sql.SQLFeatureNotSupportedException -> SQLFeatureNotSupportedException: (null exception message)
Details:
Type: org.apache.calcite.avatica.AvaticaClientRuntimeException
Stack Trace:
AvaticaClientRuntimeException: Remote driver error: RuntimeException: java.sql.SQLFeatureNotSupportedException -> SQLFeatureNotSupportedException: (null exception message). Error -1 (00000) null
java.lang.RuntimeException: java.sql.SQLFeatureNotSupportedException
at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:681)
at org.apache.calcite.avatica.jdbc.JdbcMeta.connectionSync(JdbcMeta.java:671)
at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:314)
at org.apache.calcite.avatica.remote.Service$ConnectionSyncRequest.accept(Service.java:2001)
at org.apache.calcite.avatica.remote.Service$ConnectionSyncRequest.accept(Service.java:1977)
at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:124)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLFeatureNotSupportedException
at org.apache.phoenix.jdbc.PhoenixConnection.setCatalog(PhoenixConnection.java:799)
at org.apache.calcite.avatica.jdbc.JdbcMeta.apply(JdbcMeta.java:652)
at org.apache.calcite.avatica.jdbc.JdbcMeta.connectionSync(JdbcMeta.java:666)
... 15 more
at org.apache.calcite.avatica.remote.Service$ErrorResponse.toException(Service.java:2453)
at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:61)
at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:89)
at org.apache.calcite.avatica.remote.RemoteMeta$5.call(RemoteMeta.java:148)
at org.apache.calcite.avatica.remote.RemoteMeta$5.call(RemoteMeta.java:134)
at org.apache.calcite.avatica.AvaticaConnection.invokeWithRetries(AvaticaConnection.java:715)
at org.apache.calcite.avatica.remote.RemoteMeta.connectionSync(RemoteMeta.java:133)
at org.apache.calcite.avatica.AvaticaConnection.sync(AvaticaConnection.java:664)
at org.apache.calcite.avatica.AvaticaConnection.getAutoCommit(AvaticaConnection.java:181)
at com.onseven.dbvis.g.B.C.ā(Z:1315)
at com.onseven.dbvis.g.B.F$A.call(Z:1369)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Through the CLI, I am able to connect:

[cloudbreak@ip-172-40-1-169 bin]$ ./sqlline-thin.py
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF none none org.apache.phoenix.queryserver.client.Driver
Connecting to jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF

I triple checked that I am loading the correct driver. Anything else I could be missing?
10-12-2016
02:43 PM
@santoshsb They are already configured to point to the nameservice URI. See my first screenshot:

hive --service metatool -listFSRoot
Listing FS Roots.
hdfs://cluster1/apps/hive/warehouse/test2.d
hdfs://cluster1/apps/hive/warehouse/raw.db
hdfs://cluster1/apps/hive/warehouse/test.db
hdfs://cluster1/apps/hive/warehouse
hdfs://cluster1/apps/hive/warehouse/lookup.db
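For reference, if the roots ever do need to be repointed, the same metatool can rewrite them; the URIs below are placeholders (new location first, then the old one):

# Replace the old NameNode URI with the HA nameservice URI in the metastore
hive --service metatool -updateLocation hdfs://cluster1 hdfs://old-namenode:8020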
08-20-2018
09:41 AM
You can refer to the link below; it will help with troubleshooting Knox issues. https://community.hortonworks.com/articles/113013/how-to-troubleshoot-and-application-behind-apache.html
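While troubleshooting, a simple request through the gateway is a useful first check; the host, port, topology, and credentials below are assumptions, so adjust them to your setup:

# A 200 response here confirms the gateway, SSL, and the authentication provider are working
curl -iku guest:guest-password "https://knox-host:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS"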
09-27-2016
06:23 PM
3 Kudos
Have you granted that user the global policy to "view the ui" from the policies section in the top-right menu?
09-25-2016
10:08 AM
1 Kudo
Hello @mliem You almost got it right. The missing piece is the ACL param for the YARNUI service. In your Knox topology, the authorization provider should look like this:

<provider>
<role>authorization</role>
<name>AclsAuthz</name>
<enabled>true</enabled>
<param name="knox.acl" value="*;knox;*"/>
<param name="yarnui.acl" value="*;knox;*"/>
</provider>

Hope this helps. Do let us know the results.
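For reference, the acl value is a semicolon-separated users;groups;IP-addresses triple, so *;knox;* means any user who is a member of the knox group, from any IP. After redeploying the topology you can verify with a request like the one below (host, port, and credentials are assumptions):

# Should return the YARN UI page through Knox once the ACLs match your user's group
curl -iku guest:guest-password "https://knox-host:8443/gateway/default/yarn/"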
05-15-2017
09:53 AM
@Sebastian Carroll I have tried nifi-toolkit-1.0.0, but the result is the same as with 1.2.0. I will try it on Linux again.
03-01-2019
07:09 AM
Thank you for your help.
09-15-2016
12:43 PM
"Any recommendations around minimum aws instance sizes to satisfy its requirements?" It is hardly depends on for example your cluster size and load. I suggest to ask Ambari experts, they know much more about Ambari server system requirements.
09-14-2016
01:58 PM
Hi, I've been testing (and failing) with changing the cluster OS images to RHEL 6.7, since we have a support agreement with Red Hat. I was using the hack that I found on this forum where you change etc/aws-images.yml. It sounds like I am going in the wrong direction here. Thanks
01-11-2019
02:36 PM
@Constantin Stanca Hi, could you please explain why there could be a split-brain situation when the number of ZooKeeper nodes is even? Thanks~
12-21-2016
11:57 AM
This should help: https://community.hortonworks.com/questions/64771/unable-to-updateexecute-processor-though-nifi-rest.html
07-10-2017
05:11 PM
Is this article still valid for HDF version 3.0 which was released recently? Are there easier ways of deploying to Amazon?
09-26-2016
09:05 PM
The NiFi team has identified an issue with Hive scripts causing this processor to hang. Basically, these Hive commands run MapReduce or Tez jobs that produce a lot of standard out, which is returned to the NiFi processor. If the amount of stdout or stderr returned gets large, the processor can hang. To prevent this from happening, we recommend adding the "-S" option to hive commands or "--silent=true" to beeline commands that are executed using the NiFi script processors.
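For example (the script names and the connection URL below are placeholders):

# Silent mode suppresses the MapReduce/Tez progress output that floods the processor
hive -S -f my_script.hql
# Equivalent flag for beeline
beeline -u "jdbc:hive2://hiveserver2-host:10000" --silent=true -f my_script.hql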
07-08-2016
01:14 PM
@mclark Great suggestion, thanks! Will definitely take a look at incorporating invokeHTTP.
06-29-2016
06:29 PM
Ah, thanks for the background! Whether or not you will have issues with fetching, replacing, and putting will depend on how big the file is and how often this logic runs. If it is 1200 iterations each time the logic runs, and the logic runs every 24 hours, then you will be fine. If it runs every 1 second, then you may be hurting with this solution 🙂 A possibly better approach may be to use the ExecuteScript processor and write a simple Groovy or Python script that updates the file for you. Or you could potentially implement some logic that would allow you to update the file only once at the end, if possible, instead of on every one of the 1200 iterations.
08-18-2017
01:09 PM
Hi mliem, can you share a screenshot of the processor's properties?