Member since: 10-03-2022
Posts: 17
Kudos Received: 5
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 316 | 08-22-2024 02:15 AM
 | 861 | 05-27-2024 05:11 AM
 | 969 | 11-20-2023 01:12 AM
08-27-2024
08:44 AM
I deleted the open transactions from the Oracle DB. Unfortunately, after restarting Hive I still have the same problem: there are no error messages in the logs and the tables are not locked.

INFO org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: Starting cleaning for id:5365402,dbname:XXXX,tableName:XXXX,partName:schema_sorgente=XXXX,state:,type:MAJOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:826,errorMessage:null,workerId: null,initiatorId: null
2024-08-27 14:26:53,877 WARN org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: id=5365402 Remained 21 obsolete directories from hdfs://XXXX. [base_0000201_v1772045,base_0000014_v1403023,delta_0000002_0000002_0000,delete_delta_0000003_0000003_0000,delta_0000003_0000003_0000,delta_0000004_0000004_0000,delete_delta_0000007_0000007_0000,delta_0000007_0000007_0000,delta_0000008_0000008_0000,delete_delta_0000011_0000011_0000,delta_0000011_0000011_0000,delta_0000012_0000012_0000,delete_delta_0000013_0000013_0000,delta_0000013_0000013_0000,delta_0000014_0000014_0000,delete_delta_0000200_0000200_0000,delta_0000200_0000200_0000,delta_0000201_0000201_0000,delete_delta_0000498_0000498_0000,delta_0000498_0000498_0000,delta_0000499_0000499_0000]
2024-08-27 14:26:53,877 WARN org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: No files were removed. Leaving queue entry id:5365402,dbname:XXXX,tableName:XXXX,partName:schema_sorgente=XXXX,state:,type:MAJOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:826,errorMessage:null,workerId: null,initiatorId: null in ready for cleaning state.
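In case it helps, this is the kind of check I ran directly against the metastore backing database. A minimal sketch, assuming the standard Hive 3 metastore schema on Oracle; the connect string is a placeholder:

# Sketch only: adjust the sqlplus connect string for your metastore DB.
sqlplus -s hive@//oracle-host:1521/HIVEDB <<'EOF'
-- Transactions still open ('o') or aborted ('a') after the manual cleanup:
SELECT TXN_ID, TXN_STATE, TXN_STARTED, TXN_USER, TXN_HOST
FROM TXNS WHERE TXN_STATE IN ('o', 'a');
-- Entries here pin obsolete deltas: the Cleaner skips files while any
-- open transaction predates the compaction's highest write id.
SELECT MHL_TXNID, MHL_MIN_OPEN_TXNID FROM MIN_HISTORY_LEVEL;
-- The stuck compaction queue entry itself ('r' = ready for cleaning):
SELECT CQ_ID, CQ_DATABASE, CQ_TABLE, CQ_PARTITION, CQ_STATE
FROM COMPACTION_QUEUE WHERE CQ_STATE = 'r';
EOF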
08-22-2024
06:29 AM
Hi, when I run the SHOW TRANSACTIONS command I do in fact see open transactions dating back to June 13th. I tried running ABORT TRANSACTIONS <id>, but I receive the following error:

Error while compiling statement: FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.ddl.DDLTask. org.apache.thrift.TApplicationException: Internal error processing abort_txns
INFO : Completed compiling command(queryId=hive_20240822152836_9e17b9ba-4e7b-44d1-a710-8e086edd8da0); Time taken: 0.001 seconds
INFO : Executing command(queryId=hive_20240822152836_9e17b9ba-4e7b-44d1-a710-8e086edd8da0): abort transactions 13422
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : Failed
org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing abort_txns
    at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5549) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.ddl.process.abort.AbortTransactionsOperation.execute(AbortTransactionsOperation.java:35) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.ddl.DDLTask.execute(DDLTask.java:82) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:785) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:524) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:518) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:234) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
    at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.1.1.7.1.9.4-4.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:354) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: org.apache.thrift.TApplicationException: Internal error processing abort_txns
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_abort_txns(ThriftHiveMetastore.java:5929) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.abort_txns(ThriftHiveMetastore.java:5916) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.abortTxns(HiveMetaStoreClient.java:3445) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3759) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]
    at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5546) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    ... 26 more
ERROR : DDLTask failed, DDL Operation: class org.apache.hadoop.hive.ql.ddl.process.abort.AbortTransactionsOperation
org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: Internal error processing abort_txns
    at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5549) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.ddl.process.abort.AbortTransactionsOperation.execute(AbortTransactionsOperation.java:35) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.ddl.DDLTask.execute(DDLTask.java:82) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:785) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:524) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:518) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:234) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:334) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
    at javax.security.auth.Subject.doAs(Subject.java:423) ~[?:?]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) ~[hadoop-common-3.1.1.7.1.9.4-4.jar:?]
    at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:354) ~[hive-service-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: org.apache.thrift.TApplicationException: Internal error processing abort_txns
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_abort_txns(ThriftHiveMetastore.java:5929) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.abort_txns(ThriftHiveMetastore.java:5916) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.abortTxns(HiveMetaStoreClient.java:3445) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:213) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:3759) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    at com.sun.proxy.$Proxy55.abortTxns(Unknown Source) ~[?:?]
    at org.apache.hadoop.hive.ql.metadata.Hive.abortTransactions(Hive.java:5546) ~[hive-exec-3.1.3000.7.1.9.4-4.jar:3.1.3000.7.1.9.4-4]
    ... 26 more
ERROR : FAILED: Execution Error, return code 40000 from org.apache.hadoop.hive.ql.ddl.DDLTask. org.apache.thrift.TApplicationException: Internal error processing abort_txns
INFO : Completed executing command(queryId=hive_20240822152836_9e17b9ba-4e7b-44d1-a710-8e086edd8da0); Time taken: 1.018 seconds
INFO : OK

Any suggestion? Thanks, Lorenzo
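For reference, this is roughly how I issue the commands (a sketch; the JDBC URL and realm are placeholders for my HiveServer2 connection):

# Sketch only: replace the JDBC URL with your own HiveServer2 connection.
beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@REALM" -e "
SHOW TRANSACTIONS;
ABORT TRANSACTIONS 13422;
"
# "Internal error processing abort_txns" is raised on the server side, so
# the underlying exception should appear in the Hive Metastore log rather
# than in the HiveServer2 output shown above.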
08-22-2024
02:15 AM
Hi all, I solved it; emails are now being sent using this configuration (screenshot in the original post). Lorenzo
08-22-2024
02:11 AM
Hi all, in my test cluster I am noticing a slowdown in the execution of ACID queries. Looking into it in detail, I noticed that compactions remain stuck at "ready for cleaning" and there are many delta files. I also tried to launch the compaction manually (see the sketch below), without any result. hive.metastore.housekeeping.threads.on is set to true on only one Hive Metastore host. These are the table properties:

bucketing_version: 2
transactional: true
transactional_properties: default
transient_lastDdlTime: 1720453037

In the development cluster, with an identical configuration, I do not have this problem. Do you have any suggestions? I'm running CDP 7.1.9. Thanks, Lorenzo
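The manual compaction attempt looked roughly like this (a sketch; database and table names are placeholders, while the partition key comes from the Cleaner log):

# Placeholders for db/table; $HIVE_JDBC_URL is my beeline connection.
beeline -u "$HIVE_JDBC_URL" -e "
ALTER TABLE mydb.mytable PARTITION (schema_sorgente='XXXX') COMPACT 'major';
SHOW COMPACTIONS;  -- the stuck entries remain in 'ready for cleaning'
"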
06-30-2024
05:50 AM
1 Kudo
Hi all, I am trying to send emails from NiFi using the PutEmail processor. Below is my configuration (screenshot in the original post). When I start the processor I get this error:

Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to java.lang.NullPointerException: java.lang.NullPointerException

Do you have any advice regarding the configuration? Thanks
Labels: Apache NiFi, Cloudera DataFlow (CDF)
05-27-2024
05:11 AM
2 Kudos
@MattWho To authenticate to the NiFi web UI I use my LDAP credentials (myuser). For Kerberos authentication via shell I use myuser@REALM. After setting the following parameters in NiFi:

nifi.security.identity.mapping.pattern.kerb=^(.*?)(?:@.*?)$
nifi.security.identity.mapping.value.kerb=$1
nifi.security.identity.mapping.transform.kerb=NONE

the token obtained via Kerberos now works and I no longer get permission errors. Thanks! Lorenzo
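For anyone landing here later: the pattern simply strips the Kerberos realm, so the principal resolves to the same identity string as the LDAP login. An illustrative one-liner showing the effect (not how NiFi applies it internally):

# The mapping keeps everything before the '@', dropping the realm.
echo 'myuser@REALM' | sed 's/@.*$//'   # prints: myuser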
05-08-2024
05:52 AM
1 Kudo
I temporarily solved it by eliminating the dynamic child creation.
05-08-2024
05:48 AM
Hi everyone, I'm trying to use the REST API in a Cloudera cluster with SSL and Kerberos. I am testing it by authenticating with a bearer token to gain access to the resource. Below is what I use, and it works:

curl 'https://nifi-node:8443/nifi-api/access/token' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' --data 'username=myuserad&password=mypasswordad' --compressed --cacert /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem

curl -H 'Authorization: Bearer <generated token>' -H 'Content-Type: application/json' -X PUT -d '{"id":"****","state":"RUNNING"}' https://nifi-node:8443/nifi-api/flow/process-groups/**** --cacert /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem

To avoid entering the password in clear text as in curl no. 1, I am testing token generation via Kerberos:

curl -X POST --negotiate -u : https://nifi-node:8443/nifi-api/access/kerberos --cacert /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem

In this mode the token is generated correctly, but when I try to execute API no. 2 I receive the following error:

o.a.n.w.a.c.AccessDeniedExceptionMapper identity[myaduser], groups[] does not have permission to access the requested resource. Unable to view the user interface. Returning Forbidden response.

Do you have any advice?
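For completeness, the full Kerberos flow I am testing looks roughly like this (a sketch; the token capture and the 'root' process-group alias are my additions, while the host and certificate path come from the commands above):

# Sketch of the Kerberos-based flow.
kinit myuser@REALM

# 1) Obtain a JWT via SPNEGO.
TOKEN=$(curl -s -X POST --negotiate -u : \
  --cacert /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem \
  https://nifi-node:8443/nifi-api/access/kerberos)

# 2) Use it. A 403 here means the token's identity (e.g. myuser@REALM)
#    does not match the identity granted in the NiFi policies -- the
#    identity-mapping properties in my 05-27-2024 reply above fixed this.
curl -H "Authorization: Bearer $TOKEN" \
  --cacert /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem \
  https://nifi-node:8443/nifi-api/flow/process-groups/root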
02-07-2024
02:09 AM
Hello, the Ranger version is 2.1.0 and there are no error logs. The ranger-ugsync-site.xml file contains:

<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property><name>ranger.usersync.cookie.enabled</name><value>true</value></property>
  <property><name>ranger.usersync.enabled</name><value>true</value></property>
  <property><name>ranger.usersync.filesource.text.delimiter</name><value>,</value></property>
  <property><name>ranger.usersync.group.memberattributename</name><value>member</value></property>
  <property><name>ranger.usersync.group.nameattribute</name><value>cn</value></property>
  <property><name>ranger.usersync.group.objectclass</name><value>group</value></property>
  <property><name>ranger.usersync.group.searchbase</name><value>OU=CLOUDERA,OU=APPLICATION GROUPS,OU=GRUPPI,DC=test,DC=test</value></property>
  <property><name>ranger.usersync.group.searchscope</name><value>sub</value></property>
  <property><name>ranger.usersync.keystore.password</name><value>/var/run/cloudera-scm-agent/process/1546329977-ranger-RANGER_USERSYNC/altscript.sh sec-0-ranger.usersync.keystore.password</value></property>
  <property><name>ranger.usersync.ldap.binddn</name><value>CN=clouderabind,OU=CLOUDERA,OU=USER DI SERVIZIO,OU=UTENTI,DC=test,DC=test</value></property>
  <property><name>ranger.usersync.ldap.deltasync</name><value>false</value></property>
  <property><name>ranger.usersync.ldap.grouphierarchylevels</name><value>0</value></property>
  <property><name>ranger.usersync.ldap.groupname.caseconversion</name><value>lower</value></property>
  <property><name>ranger.usersync.ldap.ldapbindpassword</name><value>/var/run/cloudera-scm-agent/process/1546329977-ranger-RANGER_USERSYNC/altscript.sh sec-0-ranger.usersync.ldap.ldapbindpassword</value></property>
  <property><name>ranger.usersync.ldap.referral</name><value>ignore</value></property>
  <property><name>ranger.usersync.ldap.starttls</name><value>false</value></property>
  <property><name>ranger.usersync.ldap.url</name><value>ldap://test-dc08.test.test:389</value></property>
  <property><name>ranger.usersync.ldap.user.nameattribute</name><value>sAMAccountName</value></property>
  <property><name>ranger.usersync.ldap.user.objectclass</name><value>user</value></property>
  <property><name>ranger.usersync.ldap.user.searchbase</name><value>OU=UTENTI,DC=test,DC=test</value></property>
  <property><name>ranger.usersync.ldap.user.searchscope</name><value>sub</value></property>
  <property><name>ranger.usersync.ldap.username.caseconversion</name><value>lower</value></property>
  <property><name>ranger.usersync.logdir</name><value>/var/log/ranger/usersync</value></property>
  <property><name>ranger.usersync.metrics.enabled</name><value>true</value></property>
  <property><name>ranger.usersync.metrics.filename</name><value>metrics.json</value></property>
  <property><name>ranger.usersync.metrics.filepath</name><value>/var/log/ranger/metrics-usersync</value></property>
  <property><name>ranger.usersync.metrics.frequencytimeinmillis</name><value>60000</value></property>
  <property><name>ranger.usersync.pagedresultsenabled</name><value>true</value></property>
  <property><name>ranger.usersync.pagedresultssize</name><value>500</value></property>
  <property><name>ranger.usersync.policymanager.maxrecordsperapicall</name><value>1000</value></property>
  <property><name>ranger.usersync.policymgr.username</name><value>rangerusersync</value></property>
  <property><name>ranger.usersync.port</name><value>5151</value></property>
  <property><name>ranger.usersync.role.assignment.list.delimiter</name><value>&amp;</value></property>
  <property><name>ranger.usersync.sleeptimeinmillisbetweensynccycle</name><value>60000</value></property>
  <property><name>ranger.usersync.source.impl.class</name><value>org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder</value></property>
  <property><name>ranger.usersync.truststore.file</name><value>/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks</value></property>
  <property><name>ranger.usersync.truststore.password</name><value>/var/run/cloudera-scm-agent/process/1546329977-ranger-RANGER_USERSYNC/altscript.sh sec-0-ranger.usersync.truststore.password</value></property>
  <property><name>ranger.usersync.unix.backend</name><value>passwd</value></property>
  <property><name>ranger.usersync.unix.minUserId</name><value>500</value></property>
  <property><name>ranger.usersync.user.searchenabled</name><value>true</value></property>
  <property><name>ranger.usersync.username.groupname.assignment.list.delimiter</name><value>,</value></property>
  <property><name>ranger.usersync.users.groups.assignment.list.delimiter</name><value>:</value></property>
  <property><name>ranger.usersync.kerberos.keytab</name><value>/var/run/cloudera-scm-agent/process/1546329977-ranger-RANGER_USERSYNC/ranger.keytab</value></property>
  <property><name>ranger.usersync.policymanager.baseURL</name><value>https://test-clmaster03.test.test:6182</value></property>
  <property><name>ranger.usersync.credstore.filename</name><value>/var/run/cloudera-scm-agent/process/1546329977-ranger-RANGER_USERSYNC/conf/rangerusersync.jceks</value></property>
  <property><name>ranger.usersync.policymgr.keystore</name><value>/var/run/cloudera-scm-agent/process/1546329977-ranger-RANGER_USERSYNC/conf/rangerusersync.jceks</value></property>
  <property><name>ranger.usersync.keystore.file</name><value>/var/run/cloudera-scm-agent/process/1546329977-ranger-RANGER_USERSYNC/conf/unixauthservice.jks</value></property>
  <property><name>ranger.usersync.policymanager.mockrun</name><value>false</value></property>
  <property><name>ranger.usersync.passwordvalidator.path</name><value>/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/ranger-usersync/native/pamCredValidator.uexe</value></property>
  <property><name>ranger.usersync.sink.impl.class</name><value>org.apache.ranger.unixusersync.process.PolicyMgrUserGroupBuilder</value></property>
  <property><name>ranger.usersync.ssl</name><value>true</value></property>
  <property><name>ranger.usersync.unix.group.file</name><value>/etc/group</value></property>
  <property><name>ranger.usersync.unix.password.file</name><value>/etc/passwd</value></property>
  <property><name>ranger.usersync.ldap.bindalias</name><value>ranger.usersync.ldap.bindalias</value></property>
  <property><name>ranger.usersync.policymgr.alias</name><value>ranger.usersync.policymgr.password</value></property>
  <property><name>ranger.keystore.file.type</name><value>jks</value></property>
  <property><name>ranger.truststore.file.type</name><value>jks</value></property>
  <property><name>xasecure.policymgr.clientssl.keystore.type</name><value>jks</value></property>
  <property><name>xasecure.policymgr.clientssl.truststore.type</name><value>jks</value></property>
  <property><name>ranger.usersync.kerberos.principal</name><value>rangerusersync/_HOST@test.test</value></property>
</configuration>
The effective search bases are:

ranger.usersync.ldap.user.searchbase = OU=utenti,DC=test,DC=test
ranger.usersync.group.searchbase = OU=Cloudera,OU=Application Groups,OU=Gruppi,DC=test,DC=test

Thanks in advance.
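If it helps to reproduce, a minimal ldapsearch with the same bind DN and group search base should confirm whether the groups are visible at all (a sketch; -W prompts for the bind password):

# Host, bind DN, and base are taken from the config above.
ldapsearch -x -H ldap://test-dc08.test.test:389 \
  -D 'CN=clouderabind,OU=CLOUDERA,OU=USER DI SERVIZIO,OU=UTENTI,DC=test,DC=test' \
  -W -b 'OU=CLOUDERA,OU=APPLICATION GROUPS,OU=GRUPPI,DC=test,DC=test' \
  '(objectClass=group)' cn member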