Member since: 06-13-2016
Posts: 76
Kudos Received: 13
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2057 | 08-09-2017 06:54 PM |
|  | 2955 | 05-03-2017 02:25 PM |
|  | 4105 | 03-28-2017 01:56 PM |
|  | 4269 | 09-26-2016 09:05 PM |
|  | 2888 | 09-22-2016 03:49 AM |
03-28-2017
01:58 PM
@dvt isoft Are the tables you've created via ODBC showing up in beeline? What happens if you try creating a database instead?
03-28-2017
01:56 PM
@ARUN Hi, HDFS permissions are managed by a combination of Ranger policies and native HDFS (POSIX) permissions. Just because you've set Ranger policies for those 3 users doesn't mean they are the only users allowed to access HDFS. In your case, arun is still able to access HDFS because all folders in HDFS have 'r' access for others (e.g. /tmp - drwxrwxrwx). The link below covers best practices for managing HDFS permissions with Ranger and native Hadoop permissions: https://hortonworks.com/blog/best-practices-in-hdfs-authorization-with-apache-ranger/
One of the important steps is to change the HDFS umask from 022 to 077. This prevents any new files or folders from being accessed by anyone other than the owner. As an example, you can do the following (a consolidated command sketch follows this list):
1. As the hdfs user: hdfs dfs -mkdir /tmp/ranger_test
2. hdfs dfs -chmod 700 /tmp/ranger_test (the folder permission becomes "drwx------"; changing the umask to 077 will do this for future files and folders)
3. Switch to the arun user.
4. hdfs dfs -ls /tmp/ranger_test (you should get an error along the lines of: "ls: Permission denied: user=arun, access=READ_EXECUTE, inode="/tmp/ranger_test":hdfs:hdfs:drwx------")
5. Add a policy in Ranger to allow arun access to /tmp/ranger_test.
6. Try to access the /tmp/ranger_test folder as arun.
Hope this helps,
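A minimal shell sketch of the same test sequence, assuming you can sudo to the hdfs and arun accounts from an edge node (adjust the user names and test path to your environment):

```bash
# Create a test folder as the hdfs superuser and lock it down to the owner
sudo -u hdfs hdfs dfs -mkdir /tmp/ranger_test
sudo -u hdfs hdfs dfs -chmod 700 /tmp/ranger_test   # permission becomes drwx------

# As arun, this should fail with "Permission denied" until a Ranger policy
# explicitly grants arun access to /tmp/ranger_test
sudo -u arun hdfs dfs -ls /tmp/ranger_test
```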
03-28-2017
01:32 PM
@dvt isoft Hi, are you able to access Hive through the command line? If so, I would check whether your table shows up there. I would also double-check that the database is selected correctly. Are you creating it in the default database, or creating your own database and a table within it? Thanks
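For example, a quick check from the command line could look like the sketch below; the HiveServer2 host and port in the JDBC URL are placeholders, so substitute your own endpoint:

```bash
# Connect to HiveServer2 with beeline (replace host/port with your endpoint)
beeline -u "jdbc:hive2://hiveserver2-host:10000/default"

# Then, inside the beeline session:
#   SHOW DATABASES;      -- confirm which databases exist
#   USE my_database;     -- hypothetical database name
#   SHOW TABLES;         -- check whether your table shows up
```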
03-28-2017
01:27 PM
@Anishkumar Valsalam I usually test NiFi cluster functions by setting up a very simple flow such as: GenerateFlowFile -> CompressContent -> UpdateAttribute. This exercises high-rate flowfile generation, CPU usage, and provenance emission, and it will give you a certain level of insight into the health of your system. These links are also very useful in determining throughput expectations:
http://docs.hortonworks.com/HDPDocuments/HDF1/HDF-1.2/bk_Overview/content/performance-expectations-and-characteristics-of-nifi.html
http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.0/bk_dataflow-command-line-installation/content/hdf_isg_hardware.html
02-10-2017
08:25 PM
@Matt Clarke Thanks Matt, very useful info. It was about 20 tar files, which turned into almost 1000 individual files that I was looking to zip back into 20 files. It looks like the major problem was the bin count: it was set to 1, and once I increased it, MergeContent had no problem with the multiple tar files that were queued up. I only had 1 concurrent task, so I was surprised that even with 1 bin it would try to create a new bin. The selected prioritizer was the default "first in, first out", so if it's untarring one tar file at a time it should finish a whole bin before moving to the next one.
02-09-2017
06:00 AM
Hi, I have the processors UnpackContent -> MergeContent. I use this to untar a file and then zip the files back up. I am using the defragment merge strategy and have noticed that when MergeContent has to handle many flowfiles at once from many different fragments (the flowfile queue builds up before MergeContent), I get "Expected number of fragments is X but only getting Y". Simply routing failures back to MergeContent or adding a run schedule delay solved this, but I'm wondering why it happens in the first place. Thanks,
Labels:
- Apache NiFi
01-26-2017
06:55 AM
No, but all ports are open between the two machines. When I run the phoenix-client (which works), I use the same node NiFi is running on, so I don't think it's a connection issue. This works from the NiFi node: /usr/hdp/current/phoenix-client/bin/sqlline.py zk:2181:/hbase-secure:hbase-dev-hadoop@DEV.COM:/etc/security/keytabs/hbase.headless.keytab
01-26-2017
06:45 AM
1 Kudo
Hello, I have a secured HDP cluster and a NiFi flow that ingests JSON and inserts it into a Phoenix table: GetFile -> ConvertJSONToSQL -> ReplaceText (insert to update) -> PutSQL. Prior to enabling Kerberos the flow was working fine. After enabling Kerberos, I changed the connection pool URL to: jdbc:phoenix:localhost:2181:/hbase-secure:hbase-dev@DEV.COM:/etc/security/keytabs/hbase.headless.keytab
This connection URL works fine with sqlline/the Phoenix client. Now when I start the flow, it initially hangs for a while and I end up with the logs below:
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Jan 26 06:36:52 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68021: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ip-123-431-1-123.ec2.internal,16020,1485391803237, seqNum=0
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549) ~[na:na]
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388) ~[na:na]
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[na:na]
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:231) ~[na:na]
... 18 common frames omitted
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68130: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ip-172-40-1-51.ec2.internal,16020,1485391803237, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159) ~[na:na]
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65) ~[na:na]
... 3 common frames omitted
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to ip-172-40-1-51.ec2.internal/172.40.1.51:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to ip-172-40-1-51.ec2.internal/172.40.1.51:16020 is closing. Call id=9, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1258) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1229) ~[na:na]
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213) ~[na:na]
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287) ~[na:na]
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32741) ~[na:na]
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:373) ~[na:na]
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:200) ~[na:na]
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62) ~[na:na]
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) ~[na:na]
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364) ~[na:na]
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338) ~[na:na]
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126) ~[na:na]
... 4 common frames omitted
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to ip-172-40-1-51.ec2.internal/172.40.1.51:16020 is closing. Call id=9, waitTime=3
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846) ~[na:na]
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574) ~[na:na]
Any ideas on what I could be missing? Similar behaviour with ExecuteSQL. Thanks!
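One thing worth double-checking (a sketch, not a definitive diagnosis; the keytab path and principal are taken from the URL above, and the 'nifi' service account is an assumption) is that the user the NiFi process runs as can read the keytab and obtain a Kerberos ticket, since the connection pool authenticates as that process user rather than the shell user you test sqlline with:

```bash
# Run as the user the NiFi process runs as (often 'nifi')
sudo -u nifi klist -kt /etc/security/keytabs/hbase.headless.keytab   # list principals in the keytab
sudo -u nifi kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-dev@DEV.COM
sudo -u nifi klist                                                   # confirm a TGT was obtained
```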
Labels:
- Apache NiFi
- Apache Phoenix
11-25-2016
05:43 AM
@Sunile Manjee I followed the above and am getting: An error occurred while establishing the connection:
Long Message:
Remote driver error: RuntimeException: java.sql.SQLFeatureNotSupportedException -> SQLFeatureNotSupportedException: (null exception message)
Details:
Type: org.apache.calcite.avatica.AvaticaClientRuntimeException
Stack Trace:
AvaticaClientRuntimeException: Remote driver error: RuntimeException: java.sql.SQLFeatureNotSupportedException -> SQLFeatureNotSupportedException: (null exception message). Error -1 (00000) null
java.lang.RuntimeException: java.sql.SQLFeatureNotSupportedException
at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:681)
at org.apache.calcite.avatica.jdbc.JdbcMeta.connectionSync(JdbcMeta.java:671)
at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:314)
at org.apache.calcite.avatica.remote.Service$ConnectionSyncRequest.accept(Service.java:2001)
at org.apache.calcite.avatica.remote.Service$ConnectionSyncRequest.accept(Service.java:1977)
at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:95)
at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:124)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLFeatureNotSupportedException
at org.apache.phoenix.jdbc.PhoenixConnection.setCatalog(PhoenixConnection.java:799)
at org.apache.calcite.avatica.jdbc.JdbcMeta.apply(JdbcMeta.java:652)
at org.apache.calcite.avatica.jdbc.JdbcMeta.connectionSync(JdbcMeta.java:666)
... 15 more
at org.apache.calcite.avatica.remote.Service$ErrorResponse.toException(Service.java:2453)
at org.apache.calcite.avatica.remote.RemoteProtobufService._apply(RemoteProtobufService.java:61)
at org.apache.calcite.avatica.remote.ProtobufService.apply(ProtobufService.java:89)
at org.apache.calcite.avatica.remote.RemoteMeta$5.call(RemoteMeta.java:148)
at org.apache.calcite.avatica.remote.RemoteMeta$5.call(RemoteMeta.java:134)
at org.apache.calcite.avatica.AvaticaConnection.invokeWithRetries(AvaticaConnection.java:715)
at org.apache.calcite.avatica.remote.RemoteMeta.connectionSync(RemoteMeta.java:133)
at org.apache.calcite.avatica.AvaticaConnection.sync(AvaticaConnection.java:664)
at org.apache.calcite.avatica.AvaticaConnection.getAutoCommit(AvaticaConnection.java:181)
at com.onseven.dbvis.g.B.C.ā(Z:1315)
at com.onseven.dbvis.g.B.F$A.call(Z:1369)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Through the CLI, I am able to connect:
[cloudbreak@ip-172-40-1-169 bin]$ ./sqlline-thin.py
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF none none org.apache.phoenix.queryserver.client.Driver
Connecting to jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
Triple checked that I am loading the correct driver. Anything else I could be missing?
10-31-2016
04:40 PM
@Gerg Git No, I did not; I ended up using a different LDAP server, FreeIPA, which has been proven to integrate with Kerberos and Knox nicely. I was using OpenLDAP, Cloudbreak, and Amazon Linux servers on HDP 2.5. I suspect it's something related to that or the way I had installed Kerberos. What are you using?