Member since 06-24-2016 · 5 Posts · 0 Kudos Received · 0 Solutions
10-07-2024 10:12 PM
Hello: How can I use HAProxy to connect to Kafka with Kerberos authentication? I have three Kafka brokers and put HAProxy in front of them, but Kerberos authentication fails.

My haproxy.cfg:

listen kafka
  bind *:6677
  mode tcp
  balance roundrobin
  server kafka1 kafka-1.kafka.net:6668 check
  server kafka2 kafka-2.kafka.net:6669 check
  server kafka3 kafka-3.kafka.net:6666 check

I also modified kafka1's server.properties:

advertised.listeners=INTERNAL://:6667,LB://gateway.kafka.net:6668
listeners=INTERNAL://:6667,LB://:6668
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,LB:SASL_PLAINTEXT
inter.broker.listener.name=INTERNAL
listener.name.LB.gssapi.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required doNotPrompt=true useKeyTab=true storeKey=true keyTab="/etc/security/keytabs/kafka.service.keytab" principal="kafka/gateway.kafka.net@KAFKA.NET";

kafka2's server.properties is identical except for the LB port:

advertised.listeners=INTERNAL://:6667,LB://gateway.kafka.net:6669
listeners=INTERNAL://:6667,LB://:6669

and kafka3's likewise:

advertised.listeners=INTERNAL://:6667,LB://gateway.kafka.net:6666
listeners=INTERNAL://:6667,LB://:6666

Then I run the command:

/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --topic my-topic --broker-list gateway.kafka.net:6677 --producer-property security.protocol=SASL_PLAINTEXT

and get the error:

[2024-10-08 20:07:58,330] ERROR [Producer clientId=console-producer] Connection to node -1 failed authentication due to: Authentication failed due to invalid credentials with SASL mechanism GSSAPI (org.apache.kafka.clients.NetworkClient)
[2024-10-08 20:07:58,330] ERROR Error when sending message to topic my-topic5 with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
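For comparison, here is a client-side setup that has worked in similar GSSAPI-behind-a-load-balancer configurations. The file path and client principal below are assumptions, not taken from your environment. The key point is that with GSSAPI the client builds the service principal from the hostname it connects to, so every broker keytab must contain kafka/gateway.kafka.net (as yours do), and the client side needs its own JAAS login configuration plus the Kerberos service name:

```
# client_jaas.conf (hypothetical path) — client credentials for GSSAPI
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/client.keytab"
  principal="client@KAFKA.NET";
};
```

Then pass it to the console producer via KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/client_jaas.conf" and add --producer-property sasl.kerberos.service.name=kafka alongside security.protocol=SASL_PLAINTEXT. If the client JAAS configuration is missing or points at the wrong principal, the broker rejects the handshake with exactly the "invalid credentials with SASL mechanism GSSAPI" error shown above.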
12-29-2023 12:54 AM
Hi: I use HBase 2.1.6 and enabled the MOB feature on my tables. Then I started loading data into a table, and after a while all 4 RegionServers went down, with hs_err_pidxxx.log error logs generated in /var/log/hbase/. After restarting HBase, the RegionServers still go down and cannot start again. I have no idea how to read the hs_err_pidxxx.log:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007efe5ae4ae40, pid=11890, tid=0x00007efe585d5700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_241-b07) (build 1.8.0_241-b07)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.241-b07 mixed mode linux-amd64 )
# Problematic frame:
# V [libjvm.so+0x649e40] void G1ParScanClosure::do_oop_nv<oopDesc*>(oopDesc**)+0x30
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x00007efe5405d800): GCTaskThread [stack: 0x00007efe584d5000,0x00007efe585d6000] [id=12112]
siginfo: si_signo: 11 (SIGSEGV), si_code: 2 (SEGV_ACCERR), si_addr: 0x00007efe53916e78
Registers:
RAX=0x00000000ed02a6a0, RBX=0x00007efdb1724bc0, RCX=0x00000000000000ed, RDX=0x00007efe54037810
RSP=0x00007efe585d45e0, RBP=0x00007efe585d45f0, RSI=0x00007efe53916d8b, RDI=0x00007efe585d4dc0
R8 =0x0000000000000000, R9 =0x00000000000000a8, R10=0x0000000000000001, R11=0x00007efe5b9733b0
R12=0x00007efdb1724ba8, R13=0x00007ef3ac13f2c8, R14=0x00007ef3ac13f048, R15=0x00007efdb1724bc0
RIP=0x00007efe5ae4ae40, EFLAGS=0x0000000000010206, CSGSFS=0x002b000000000033, ERR=0x0000000000000004
TRAPNO=0x000000000000000e
Top of Stack: (sp=0x00007efe585d45e0)
0x00007efe585d45e0: 00007efe585d4dc0 00007efdb1724ba8
0x00007efe585d45f0: 00007efe585d4640 00007efe5ae48427
0x00007efe585d4600: 00007ef3ac13f2c8 00007efdb1724b98
0x00007efe585d4610: 00007efe585d4dc0 00007efe21ffbca8
0x00007efe585d4620: 00007efe585d4dc0 00007efdb1724b98
0x00007efe585d4630: 00007ef3ac13f048 00007efe585d4c50
0x00007efe585d4640: 00007efe585d4680 00007efe5ae505ff
0x00007efe585d4650: 00007efe585d4670 00007efe21ffbca8
0x00007efe585d4660: 00007efdb1724b98 0000000000000016
................
Register to memory mapping:
RAX=0x00000000ed02a6a0 is an unknown value
RBX=0x00007efdb1724bc0 is pointing into object: 0x00007efdb1724b98
java.lang.Class
- klass: 'java/lang/Class'
RCX=0x00000000000000ed is an unknown value
RDX=0x00007efe54037810 is an unknown value
RSP=0x00007efe585d45e0 is an unknown value
RBP=0x00007efe585d45f0 is an unknown value
RSI=0x00007efe53916d8b is an unknown value
RDI=0x00007efe585d4dc0 is an unknown value
R8 =0x0000000000000000 is an unknown value
R9 =0x00000000000000a8 is an unknown value
R10=0x0000000000000001 is an unknown value
R11=0x00007efe5b9733b0: <offset 0x1823b0> in /lib64/libc.so.6 at 0x00007efe5b7f1000
R12=0x00007efdb1724ba8 is pointing into object: 0x00007efdb1724b98
java.lang.Class
- klass: 'java/lang/Class'
R13=0x00007ef3ac13f2c8 is pointing into metadata
R14=0x00007ef3ac13f048 is pointing into metadata
R15=0x00007efdb1724bc0 is pointing into object: 0x00007efdb1724b98
java.lang.Class
- klass: 'java/lang/Class'
Stack: [0x00007efe584d5000,0x00007efe585d6000], sp=0x00007efe585d45e0, free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x649e40] void G1ParScanClosure::do_oop_nv<oopDesc*>(oopDesc**)+0x30
V [libjvm.so+0x647427] InstanceKlass::oop_oop_iterate_backwards_nv(oopDesc*, G1ParScanClosure*)+0x167
V [libjvm.so+0x64f5ff] InstanceMirrorKlass::oop_oop_iterate_backwards_nv(oopDesc*, G1ParScanClosure*)+0x1f
V [libjvm.so+0x5b14e9] G1ParScanThreadState::copy_to_survivor_space(InCSetState, oopDesc*, markOopDesc*)+0x569
V [libjvm.so+0x5b1e53] G1ParScanThreadState::trim_queue()+0x643
V [libjvm.so+0x58aefb] G1ParEvacuateFollowersClosure::do_void()+0x26b
V [libjvm.so+0x59b51c] G1ParTask::work(unsigned int)+0x44c
V [libjvm.so+0xaf9918] GangWorker::loop()+0xd8
V [libjvm.so+0x90f542] java_start(Thread*)+0x102
................
0x00007efe55d69800 JavaThread "C2 CompilerThread0" daemon [_thread_blocked, id=12233, stack(0x00007ef3a5424000,0x00007ef3a5525000)]
0x00007efe55d68000 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=12232, stack(0x00007ef3a5525000,0x00007ef3a5626000)]
0x00007efe55d63000 JavaThread "Surrogate Locker Thread (Concurrent GC)" daemon [_thread_blocked, id=12231, stack(0x00007ef3a5626000,0x00007ef3a5727000)]
0x00007efe55d35000 JavaThread "Finalizer" daemon [_thread_blocked, id=12227, stack(0x00007ef3abc4e000,0x00007ef3abd4f000)]
0x00007efe55d30000 JavaThread "Reference Handler" daemon [_thread_blocked, id=12226, stack(0x00007ef3abd4f000,0x00007ef3abe50000)]
0x00007efe54023000 JavaThread "main" [_thread_blocked, id=12093, stack(0x00007efe5c2f7000,0x00007efe5c3f8000)]
Other Threads:
0x00007efe55d26800 VMThread [stack: 0x00007ef3abe50000,0x00007ef3abf51000] [id=12225]
0x00007efe55dad800 WatcherThread [stack: 0x00007ef3a4111000,0x00007ef3a4212000] [id=12254]
=>0x00007efe5405d800 (exited) GCTaskThread [stack: 0x00007efe584d5000,0x00007efe585d6000] [id=12112]
VM state:at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: ([mutex/lock_event])
[0x00007efe54021810] Threads_lock - owner thread: 0x00007efe55d26800
[0x00007efe54021d10] Heap_lock - owner thread: 0x00000000021c1800
Heap:
garbage-first heap total 41943040K, used 2107094K [0x00007ef435000000, 0x00007ef436005000, 0x00007efe35000000)
region size 16384K, 128 young (2097152K), 4 survivors (65536K)
Metaspace used 96765K, capacity 99604K, committed 99840K, reserved 100352K
Heap Regions: (Y=young(eden), SU=young(survivor), HS=humongous(starts), HC=humongous(continues), CS=collection set, F=free, TS=gc time stamp, PTAMS=previous top-at-mark-start, NTAMS=next top-at-mark-start)
AC 0 HS TS 0 PTAMS 0x00007ef435800018 NTAMS 0x00007ef435800018 space 16384K, 50% used [0x00007ef435000000, 0x00007ef436000000)
AC 0 HS TS 0 PTAMS 0x00007ef436800018 NTAMS 0x00007ef436800018 space 16384K, 50% used [0x00007ef436000000, 0x00007ef437000000)
AC 0 F TS 0 PTAMS 0x00007ef437000000 NTAMS 0x00007ef437000000 space 16384K, 0% used [0x00007ef437000000, 0x00007ef438000000)
AC 0 F TS 0 PTAMS 0x00007ef438000000 NTAMS 0x00007ef438000000 space 16384K, 0% used [0x00007ef438000000, 0x00007ef439000000)
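The crash log itself gives one actionable hint: core dumps are disabled, so there is nothing beyond the hs_err file to analyze. A minimal first step (assuming the RegionServer is started from the same shell session or init script where you set the limit) is to raise the core-file limit before the JVM starts:

```shell
# The hs_err log says: "Core dumps have been disabled. To enable core
# dumping, try 'ulimit -c unlimited' before starting Java again."
ulimit -c unlimited   # raise the soft core-file size limit
ulimit -c             # verify the new limit; should print "unlimited"
```

With a core file you can open the SIGSEGV in a debugger. A crash inside a G1 GC worker thread like this one often indicates heap corruption or a JVM bug rather than an application error, so trying a newer JDK 8 update than 1.8.0_241, or temporarily disabling MOB on a test table to confirm the correlation, are both worth considering.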
Labels: Apache HBase
03-16-2022 05:50 PM
I tried changing the composite-user-group-provider to the file-user-group-provider, and it worked! My authorizers.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Initial User Identity 1">CN=nifi, OU=NIFI</property>
<property name="Initial User Identity 2">nifi</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">file-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">nifi</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">CN=nifi, OU=NIFI</property>
<property name="Node Identity 2">nifi</property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
But I still have a question: can the ldap-user-group-provider or composite-user-group-provider really not be used in a secure cluster?
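On the remaining question: LDAP providers can be used in a secure cluster, but the node identities must come from a *configurable* provider. The usual pattern is CompositeConfigurableUserGroupProvider, which wraps the file provider (holding the node and initial-user identities) and layers the LDAP provider on top. A sketch, assuming the file-user-group-provider and ldap-user-group-provider elements are defined elsewhere in authorizers.xml as in the post below:

```
<!-- Sketch: combine file-based node identities with LDAP users/groups -->
<userGroupProvider>
    <identifier>composite-configurable-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.CompositeConfigurableUserGroupProvider</class>
    <property name="Configurable User Group Provider">file-user-group-provider</property>
    <property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>
```

Note that the plain CompositeUserGroupProvider does not support a "Configurable User Group Provider" property at all, only numbered "User Group Provider N" entries, so a configuration that hands it the file provider under that property name silently drops the node identity and produces exactly the "Unable to locate node ... to seed policies" failure.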
03-16-2022 12:10 AM
I need help with Apache NiFi secure cluster configuration. My goal is to create a secure NiFi cluster and use LDAP to manage login accounts and policies. At first I used only the ldap-user-group-provider, but it did not work. The error message in the Web UI was:

Insufficient Permissions
Untrusted proxy CN=nifi, OU=NIFI

Then I modified my authorizers.xml to use the composite-user-group-provider, following the referenced blog posts. The error message changed to:

Unable to locate node CN=nifi, OU=NIFI to seed policies

My deployment steps: I used the NiFi toolkit to generate certificates with the command:

./nifi-toolkit-1.15.3/bin/tls-toolkit.sh standalone -C "CN=nifi, OU=NIFI" -n 'nifi' -o /root/target

My authorizers.xml:
<authorizers>
<userGroupProvider>
<identifier>file-user-group-provider</identifier>
<class>org.apache.nifi.authorization.FileUserGroupProvider</class>
<property name="Users File">./conf/users.xml</property>
<property name="Legacy Authorized Users File"/>
<property name="Initial User Identity 1">CN=nifi, OU=NIFI</property>
</userGroupProvider>
<userGroupProvider>
<identifier>ldap-user-group-provider</identifier>
<class>org.apache.nifi.ldap.tenants.LdapUserGroupProvider</class>
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN">cn=Manager,dc=nifi,dc=data</property>
<property name="Manager Password">xxxx</property>
<property name="TLS - Keystore"/>
<property name="TLS - Keystore Password"/>
<property name="TLS - Keystore Type"/>
<property name="TLS - Truststore"/>
<property name="TLS - Truststore Password"/>
<property name="TLS - Truststore Type"/>
<property name="TLS - Client Auth"/>
<property name="TLS - Protocol"/>
<property name="TLS - Shutdown Gracefully"/>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://ldap:789</property>
<property name="Page Size"/>
<property name="Sync Interval">1 mins</property>
<property name="Group Membership - Enforce Case Sensitivity">false</property>
<property name="User Search Base">ou=users,dc=nifi,dc=data</property>
<property name="User Object Class">person</property>
<property name="User Search Scope">ONE_LEVEL</property>
<property name="User Search Filter"/>
<property name="User Identity Attribute">uid</property>
<property name="User Group Name Attribute"/>
<property name="User Group Name Attribute - Referenced Group Attribute"/>
<property name="Group Search Base"/>
<property name="Group Object Class">group</property>
<property name="Group Search Scope">ONE_LEVEL</property>
<property name="Group Search Filter"/>
<property name="Group Name Attribute"/>
<property name="Group Member Attribute"/>
<property name="Group Member Attribute - Referenced User Attribute"/>
</userGroupProvider>
<userGroupProvider>
<identifier>composite-user-group-provider</identifier>
<class>org.apache.nifi.authorization.CompositeUserGroupProvider</class>
<property name="Configurable User Group Provider">file-user-group-provider</property>
<property name="User Group Provider 1">ldap-user-group-provider</property>
</userGroupProvider>
<accessPolicyProvider>
<identifier>file-access-policy-provider</identifier>
<class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
<property name="User Group Provider">composite-user-group-provider</property>
<property name="Authorizations File">./conf/authorizations.xml</property>
<property name="Initial Admin Identity">nifi</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">CN=nifi, OU=NIFI</property>
</accessPolicyProvider>
<authorizer>
<identifier>managed-authorizer</identifier>
<class>org.apache.nifi.authorization.StandardManagedAuthorizer</class>
<property name="Access Policy Provider">file-access-policy-provider</property>
</authorizer>
</authorizers>
Before restarting the NiFi service, I had already deleted the authorizations.xml and users.xml files. During the restart, I found that users.xml was regenerated with this content:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="59486998-e3ac-3150-a4bc-c00e5a9959ba"
identity="CN=nifi, OU=NIFI"/>
</users>
</tenants>
But NiFi failed to start, and the error message is: Unable to locate node CN=nifi, OU=NIFI to seed policies. The NiFi version is 1.15.3. Can anyone tell me what's wrong? Thanks.
Labels: Apache NiFi
06-24-2016 07:14 AM
Hi: I followed your steps to set up Oozie HA in a Kerberos environment, but my Ambari GUI shows two alerts about Oozie Server Status: <pre> Execution of 'source /usr/hdp/current/oozie-server/conf/oozie-env.sh ; oozie admin -oozie http://oozie-server1:11000/oozie -status' returned 255. Error: IO_ERROR : java.io.IOException: Error while connecting Oozie server. No of retries = 1. Exception = Could not authenticate, Authentication failed, status: 403, message: Forbidden </pre> When I run the command 'source /usr/hdp/current/oozie-server/conf/oozie-env.sh ; oozie admin -oozie http://oozie-server1:11000/oozie -status' on the physical node, it fails. But if I change the Oozie server to my load balancer hostname, 'source /usr/hdp/current/oozie-server/conf/oozie-env.sh ; oozie admin -oozie http://loadbalancer.net:11000/oozie -status' displays 'System mode: NORMAL', which I think is correct. Have you encountered this issue? Why does Ambari not pick up my load balancer hostname, and instead still use the original Oozie server node to check the service? Thanks.
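A couple of things worth checking, hedged as guesses about your setup rather than confirmed Ambari behavior. The 403 Forbidden on the physical host is consistent with Kerberos SPNEGO: the client requests a ticket for HTTP/<hostname-it-connects-to>, so if the HTTP service principal was created only for the load balancer hostname, direct connections to oozie-server1 will fail authentication while the load balancer URL works. Separately, in an Oozie HA setup the oozie.base.url property should point at the load balancer, for example:

```
<!-- oozie-site.xml snippet: point clients at the load balancer,
     not an individual Oozie server (hostname is illustrative) -->
<property>
  <name>oozie.base.url</name>
  <value>http://loadbalancer.net:11000/oozie</value>
</property>
```

If Ambari's alert script is still building its check URL from the individual host, that may simply be how the stock alert is defined per-host; the alert firing does not necessarily mean the HA service itself is broken, as your successful load-balancer check shows.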