Member since: 08-08-2018
Posts: 34
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1337 | 12-11-2019 06:37 AM |
12-11-2019
06:37 AM
Resolved: port 21050 was blocked by the VPN. After creating an SSH tunnel, the connection worked.
12-10-2019
10:35 AM
Hello,
I installed Impala/Kudu on HDP 2.6.2.2:
impalad version 2.7.0-cdh5-IMPALA_KUDU-cdh5 RELEASE (build 48f1ad385382cd90dbaed53b174965251d91d088)
With impala-shell I created the database "imp_kudu" with a single table, "test_table".
When I try to open a JDBC connection with the "ImpalaJDBC4.jar" driver using the URL:
jdbc:impala://myserver.com:21050/imp_kudu;AuthMech=0
I get the following error:
[Cloudera][ImpalaJDBCDriver](500164) Error initialized or created transport for authentication: Operation timed out (Connection timed out).
What kind of problem does this indicate and how can I resolve it?
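For context, a "Connection timed out" at this stage usually means nothing answered on that TCP port at all, which points at a firewall or VPN rather than at Impala itself. A minimal Python sketch for checking reachability from the client machine (the hostname below is the one from the post; the helper name is my own):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to (host, port) can be established.

    A timeout or refusal here reproduces the JDBC driver's
    "Connection timed out" symptom without involving the driver.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname from the post): port_open("myserver.com", 21050)
```

If this returns False from the client machine but True from a cluster node, the port is blocked somewhere in between — which matches the eventual resolution in this thread (the VPN was blocking 21050 until an SSH tunnel was set up).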
08-22-2019
11:07 AM
Thanks, I didn't realize I'm not actually logged in at the moment.
08-21-2019
06:30 AM
"it will be great to mark this thread as Solved" — I'd be happy to, but I don't see that available under "options"; I can only see "mark as read".
08-19-2019
11:58 AM
--producer-config did the trick for kafka-console-producer.sh... or changing "max.request.size" directly in the producer code. I didn't have to modify consumer settings.
08-13-2019
03:37 PM
I am using HDP-2.6.5.0 with Kafka 1.0.0. I have to process large (16 MB) messages, so I set the following from the Ambari/Kafka configs screen and restarted the Kafka services:
message.max.bytes=18874368
replica.fetch.max.bytes=18874368
socket.request.max.bytes=18874368
When I try to send a 16 MB message:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-ip>:6667 --topic test < ./big.txt
I still get the same error:
ERROR Error when sending message to topic test with key: null, value: 16777239 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordTooLargeException: The message is 16777327 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
I tried to set max.request.size in the producer.properties file, but I still get the same error. What am I missing? Thank you,
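A detail worth noting in the error text: RecordTooLargeException is raised by the producer client itself, so the three broker-side settings above cannot fix it on their own. The arithmetic from the message, as a small Python sketch (the 1 MiB figure is Kafka's documented default for the producer-side max.request.size):

```python
# Numbers taken from the error message in the post.
DEFAULT_MAX_REQUEST_SIZE = 1_048_576   # producer default max.request.size (1 MiB)
serialized_size = 16_777_327           # "The message is 16777327 bytes when serialized"

# The serialized record exceeds the producer-side limit, so the send is
# rejected client-side before the broker settings are ever consulted.
assert serialized_size > DEFAULT_MAX_REQUEST_SIZE

# A producer-side override large enough for the payload plus record overhead
# (same 18 MiB value the post already uses for the broker settings):
producer_override = {"max.request.size": 18_874_368}
assert producer_override["max.request.size"] > serialized_size
```

As the follow-up post in this thread confirms, passing such a properties file to kafka-console-producer.sh via --producer-config (or setting max.request.size in producer code) resolves it; editing producer.properties alone does not help, since the console producer only reads a properties file it is explicitly pointed at.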
05-06-2019
10:01 PM
Hello, I see a "Permission denied" problem in PutHDFS when writing the file "531fb09e-5599-48e2-b627-e558d1b6ba70-list":
org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=016a1008-69b2-1e66-6220-26211a37e1a5]: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/data/clustering/temp/.531fb09e-5599-48e2-b627-e558d1b6ba70-list":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:325)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:246)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1950)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1934)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1917)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2763)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2698)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2582)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
It looks to me like "user=root" is the problem. How can I configure PutHDFS to execute as "user=hdfs"? Thank you,
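The key part of the exception is the inode suffix ":hdfs:hdfs:drwxr-xr-x": the directory is owned by hdfs, group hdfs, and "other" users have no write bit, so user root (neither the owner nor in the group) is denied WRITE. A minimal Python sketch of that permission check:

```python
# Permission string from the AccessControlException: drwxr-xr-x
mode = "rwxr-xr-x"                              # owner / group / other triplets
owner, group, other = mode[0:3], mode[3:6], mode[6:9]

# "root" is neither the owner (hdfs) nor in the group (hdfs),
# so only the "other" bits apply -- and they lack "w".
assert "w" in owner and "w" not in other
```

Common remedies (outside this sketch, and depending on your authentication setup): make the HDFS identity that NiFi presents match the directory owner (for simple auth this is the user the NiFi process runs as; for Kerberos it is the principal configured on PutHDFS), or relax ownership on the target path, e.g. with hdfs dfs -chown / -chmod, so the NiFi user may write there.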
Labels: Apache Hadoop, Apache NiFi
05-03-2019
04:23 PM
How can I transform an array of maps into an array of values: ["one", "two", "three"]?
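In plain Python terms (outside NiFi), the transformation would look like the sketch below; the input shape is assumed from the related post in this thread, and within NiFi itself this would typically be a JoltTransformJSON spec or a scripted processor:

```python
import json

# Assumed input: an array of single-key maps, as in the earlier post.
records = json.loads('[{"id": "one"}, {"id": "two"}, {"id": "three"}]')

# Array of maps -> array of the "id" values.
values = [r["id"] for r in records]
print(values)  # ['one', 'two', 'three']
```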
05-03-2019
02:30 PM
This resulted in the error: "message.body is invalid when validated against (.*) because message.body is not supported property"
05-02-2019
07:36 PM
I have a JSON array like the following:
[{"id":"one"}, {"id":"two"}]
Any idea how I could "package" it into a JSON "root" like the following:
{ [{"id":"one"}, {"id":"two"}] }
I am trying to use JoltTransformJSON but couldn't find any clues on how to approach an operation like this... Thank you
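One caveat: { [ ... ] } is not valid JSON, because an object's members must be key/value pairs, so the array has to be wrapped under some key. A minimal Python sketch (the key name "items" is an arbitrary choice, not from the post):

```python
import json

arr = json.loads('[{"id": "one"}, {"id": "two"}]')

# Wrap the array under a root object; "items" is a hypothetical key name.
wrapped = {"items": arr}
print(json.dumps(wrapped))  # {"items": [{"id": "one"}, {"id": "two"}]}
```

A Jolt spec doing the same wrapping would shift the whole input under that key; the important part is that the root must be an object with at least one key, whatever name you choose.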
Labels: Apache NiFi