Member since: 07-01-2016
Posts: 15
Kudos Received: 3
Solutions: 0
01-30-2017 01:29 PM
Hi, I have SSDs and want to use them for temporary MapReduce files. I added the path to "mapreduce.cluster.local.dir" under "Custom mapred-site", but how can I verify that MapReduce actually uses this directory? I turned on the debug log, but it contains no indication of whether the directory is used.
env: HDP 2.5.0.0, Ambari 2.4.2.0, OS RHEL 6.7
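One way to check (a sketch, not HDP-specific guidance): read the value back from the mapred-site.xml that Ambari deploys to the worker hosts, then watch the SSD path while a job runs. The sample file below stands in for /etc/hadoop/conf/mapred-site.xml, and the SSD path is hypothetical; on a real NodeManager host point CONF at the deployed file instead.

```shell
# Sample file standing in for /etc/hadoop/conf/mapred-site.xml (hypothetical
# SSD path); on a real host set CONF=/etc/hadoop/conf/mapred-site.xml instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/ssd01/hadoop/mapred/local</value>
</property>
EOF

# Extract the configured local dir:
grep -A1 '<name>mapreduce.cluster.local.dir</name>' "$CONF" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'

# While a MapReduce job runs, intermediate spill files should then appear
# under that path, e.g.: watch -n2 'ls -lR /ssd01/hadoop/mapred/local'
```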
Sergey.
11-10-2016 05:43 AM
1 Kudo
Hi! I start Spark 2 with the option SPARK_MAJOR_VERSION=2 pyspark --master yarn --verbose. Spark starts, but when I run a query through the SQL context I get the error below, even though the column definitely exists in the table. I do not see what the problem is.
SPARK_MAJOR_VERSION=2 pyspark --master yarn --verbose
SPARK_MAJOR_VERSION is set to 2, using Spark2
Python 2.7.12 |Anaconda custom (64-bit)| (default, Jul 2 2016, 17:42:40)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Using properties file: /usr/hdp/current/spark2-historyserver/conf/spark-defaults.conf
Adding default property: spark.history.kerberos.keytab=none
Adding default property: spark.history.fs.logDirectory=hdfs:///spark2-history/
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
Adding default property: spark.yarn.queue=default
Adding default property: spark.yarn.historyServer.address=en-002.msk.mts.ru:18081
Adding default property: spark.history.kerberos.principal=none
Adding default property: spark.history.provider=org.apache.spark.deploy.history.FsHistoryProvider
Adding default property: spark.executor.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
Adding default property: spark.eventLog.dir=hdfs:///spark2-history/
Adding default property: spark.history.ui.port=18081
Parsed arguments:
master yarn
deployMode null
executorMemory null
executorCores null
totalExecutorCores null
propertiesFile /usr/hdp/current/spark2-historyserver/conf/spark-defaults.conf
driverMemory null
driverCores null
driverExtraClassPath null
driverExtraLibraryPath /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
driverExtraJavaOptions null
supervise false
queue null
numExecutors null
files null
pyFiles null
archives null
mainClass null
primaryResource pyspark-shell
name PySparkShell
childArgs []
jars null
packages null
packagesExclusions null
repositories null
verbose true
Spark properties used, including those specified through
--conf and those from the properties file /usr/hdp/current/spark2-historyserver/conf/spark-defaults.conf:
spark.yarn.queue -> default
spark.history.kerberos.principal -> none
spark.executor.extraLibraryPath -> /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
spark.driver.extraLibraryPath -> /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
spark.eventLog.enabled -> true
spark.yarn.historyServer.address -> en-002.msk.mts.ru:18081
spark.history.ui.port -> 18081
spark.history.provider -> org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory -> hdfs:///spark2-history/
spark.history.kerberos.keytab -> none
spark.eventLog.dir -> hdfs:///spark2-history/
Main class:
org.apache.spark.api.python.PythonGatewayServer
Arguments:
System properties:
spark.yarn.queue -> default
spark.history.kerberos.principal -> none
spark.executor.extraLibraryPath -> /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
spark.driver.extraLibraryPath -> /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
spark.yarn.historyServer.address -> en-002.msk.mts.ru:18081
spark.eventLog.enabled -> true
spark.history.ui.port -> 18081
SPARK_SUBMIT -> true
spark.history.provider -> org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory -> hdfs:///spark2-history/
spark.app.name -> PySparkShell
spark.history.kerberos.keytab -> none
spark.submit.deployMode -> client
spark.eventLog.dir -> hdfs:///spark2-history/
spark.master -> yarn
spark.yarn.isPython -> true
Classpath elements:
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 2.0.0.2.5.0.0-1245
/_/
Using Python version 2.7.12 (default, Jul 2 2016 17:42:40)
SparkSession available as 'spark'.
>>>
>>>
>>> ds = sqlContext.table('default.geo').limit(100000)
>>> ds.groupby('id').count().show(10)
[Stage 0:==========================================> (5 + 2) / 7]16/11/09 18:11:56 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 7, wn-019): java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; aborting job
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.5.0.0-1245/spark2/python/pyspark/sql/dataframe.py", line 287, in show
print(self._jdf.showString(n, truncate))
File "/usr/hdp/2.5.0.0-1245/spark2/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
File "/usr/hdp/2.5.0.0-1245/spark2/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/usr/hdp/2.5.0.0-1245/spark2/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o45.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 10, wn-029): java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
My env: OS RHEL 6.5, HDP 2.5.0.0, Spark 2.0, Python 2.7 (Anaconda).
10-14-2016 08:41 AM
Hi, is it possible to enable Hadoop federation between two clusters when NameNode HA is enabled on both? If so, how is it done?
Cluster 1: dfs.nameservices = cluster1ha
Cluster 2: dfs.nameservices = cluster2ha
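For what it's worth, a common pattern for reaching both HA clusters from one client (ViewFs-based federation builds on the same settings) is to list both nameservices in the client's hdfs-site.xml and copy the remote cluster's HA entries; a sketch using the service names from the question and placeholder hosts:

```xml
<!-- Client-side hdfs-site.xml sketch; host names are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>cluster1ha,cluster2ha</value>
</property>
<property>
  <name>dfs.ha.namenodes.cluster2ha</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster2ha.nn1</name>
  <value>nn1.cluster2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster2ha.nn2</name>
  <value>nn2.cluster2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.cluster2ha</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```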
10-10-2016 07:36 AM
Hi @David Streever. The XXX user exists on both NameNodes and the edge node, but it was not in the "hadoop" group.
<property>
<name>security.client.protocol.acl</name>
<value>*</value>
</property>
10-07-2016 02:22 PM
1 Kudo
I run the following as user XXX:
cat DDL.sql | hive
but it fails with the error below. Where does user YYY come from? It does not exist in my environment.
[XXX@host-001 db]$ cat DDL.sql | hive
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=YYY, access=WRITE, inode="/user/XXX/databases/data_tracing_rep/db_report/filedate=2016-09-07/r=60":XXX:hdfs:drwxr-xr-x
- Tags:
- Data Processing
- Hive
10-06-2016 01:15 PM
1 Kudo
Hi, is it possible to use LDAP with Ambari without user synchronization, i.e. use LDAP only for password authentication?
07-07-2016 10:38 AM
Still not working. Config and usersync log:
07 Jul 2016 13:31:17 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder created
07 Jul 2016 13:31:17 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.username.regex
07 Jul 2016 13:31:17 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.groupname.regex
07 Jul 2016 13:31:17 INFO UserGroupSync [UnixUserSyncThread] - initializing source: org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder
07 Jul 2016 13:31:17 INFO UserGroupSync [UnixUserSyncThread] - Begin: initial load of user/group from source==>sink
07 Jul 2016 13:31:17 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder updateSink started
07 Jul 2016 13:31:17 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization started
07 Jul 2016 13:31:17 DEBUG AbstractJavaKeyStoreProvider [UnixUserSyncThread] - backing jks path initialized to file:/usr/hdp/current/ranger-usersync/conf/ugsync.jceks
07 Jul 2016 13:31:18 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization completed with -- ldapUrl: ldap://xxx.389, ldapBindDn: uid=nssproxy,ou=People,dc=xxx,dc=ru, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: dc=xxx,dc=ru, userSearchBase: ou=People,dc=xxx,dc=ru, userSearchScope: 2, userObjectClass: top, userSearchFilter: memberof=CN=soft,OU=Group, DC=xxx,DC=ru, extendedUserSearchFilter: (&(objectclass=top)(memberof=CN=soft,OU=Group, DC=xxx,DC=ru)), userNameAttribute: cn, userSearchAttributes: [cn, memberof, ismemberof], userGroupNameAttributeSet: [memberof, ismemberof], pagedResultsEnabled: true, pagedResultsSize: 10000, groupSearchEnabled: true, groupSearchBase: ou=Group,dc=xxx,dc=ru, groupSearchScope: 2, groupObjectClass: posixGroup, groupSearchFilter: cn=soft, extendedGroupSearchFilter: (&(objectclass=posixGroup)(cn=soft)(memberUid={0})), extendedAllGroupsSearchFilter: (&(objectclass=posixGroup)(cn=soft)), groupMemberAttributeName: memberUid, groupNameAttribute: cn, groupUserMapSyncEnabled: true, ldapReferral: follow
07 Jul 2016 13:31:51 DEBUG LdapUserGroupBuilder [UnixUserSyncThread] - No controls were sent from the server
07 Jul 2016 13:31:51 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder.updateSink() completed with user count: 0
07 Jul 2016 13:31:51 DEBUG LdapUserGroupBuilder [UnixUserSyncThread] - Total No. of users saved = 0
07 Jul 2016 13:31:51 INFO LdapUserGroupBuilder [UnixUserSyncThread] - groupSearch is enabled, would search for groups and compute memberships
07 Jul 2016 13:31:51 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization started
07 Jul 2016 13:31:51 DEBUG AbstractJavaKeyStoreProvider [UnixUserSyncThread] - backing jks path initialized to file:/usr/hdp/current/ranger-usersync/conf/ugsync.jceks
07 Jul 2016 13:31:51 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization completed with -- ldapUrl: ldap://xxx.389, ldapBindDn: uid=nssproxy,ou=People,dc=xxx,dc=ru, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: dc=xxx,dc=ru, userSearchBase: ou=People,dc=xxx,dc=ru, userSearchScope: 2, userObjectClass: top, userSearchFilter: memberof=CN=soft,OU=Group, DC=xxx,DC=ru, extendedUserSearchFilter: (&(objectclass=top)(memberof=CN=soft,OU=Group, DC=xxx,DC=ru)), userNameAttribute: cn, userSearchAttributes: [cn, memberof, ismemberof], userGroupNameAttributeSet: [memberof, ismemberof], pagedResultsEnabled: true, pagedResultsSize: 10000, groupSearchEnabled: true, groupSearchBase: ou=Group,dc=xxx,dc=ru, groupSearchScope: 2, groupObjectClass: posixGroup, groupSearchFilter: cn=soft, extendedGroupSearchFilter: (&(objectclass=posixGroup)(cn=soft)(memberUid={0})), extendedAllGroupsSearchFilter: (&(objectclass=posixGroup)(cn=soft)), groupMemberAttributeName: memberUid, groupNameAttribute: cn, groupUserMapSyncEnabled: true, ldapReferral: follow
07 Jul 2016 13:31:51 INFO UserGroupSync [UnixUserSyncThread] - End: initial load of user/group from source==>sink
07 Jul 2016 13:31:51 INFO UserGroupSync [UnixUserSyncThread] - Done initializing user/group source and sink
07 Jul 2016 13:31:51 DEBUG UserGroupSync [UnixUserSyncThread] - Sleeping for [3600000] milliSeconds
07-07-2016 07:52 AM
Hi, thank you, but it does not work. The structure of my LDAP is:
CN=soft,OU=Group,dc=xxx,dc=ru - the group entry holds the usernames in its memberUid attribute
CN=<users>,OU=People,dc=xxx,dc=ru - in the user entries the username is the uid attribute
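In LDIF terms, the structure described would look roughly like this (entries and names are hypothetical, kept minimal for illustration):

```ldif
dn: cn=soft,ou=Group,dc=xxx,dc=ru
objectClass: posixGroup
cn: soft
memberUid: someuser

dn: uid=someuser,ou=People,dc=xxx,dc=ru
objectClass: person
uid: someuser
cn: Some User
```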
07-05-2016 10:48 AM
@deepak sharma Yes, that way it works and Ranger synchronizes all users, but I want only the users from one specific group.
07-05-2016 10:24 AM
@deepak sharma
07-05-2016 09:36 AM
Hi @Sagar Shimpi. HDP 2.4.2.0, Ambari 2.2.2.0. Usersync log:
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder created
05 Jul 2016 12:34:29 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.username.regex
05 Jul 2016 12:34:29 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.groupname.regex
05 Jul 2016 12:34:29 INFO UserGroupSync [UnixUserSyncThread] - initializing source: org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder
05 Jul 2016 12:34:29 INFO UserGroupSync [UnixUserSyncThread] - Begin: initial load of user/group from source==>sink
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder updateSink started
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization started
05 Jul 2016 12:34:29 DEBUG AbstractJavaKeyStoreProvider [UnixUserSyncThread] - backing jks path initialized to file:/usr/hdp/current/ranger-usersync/conf/ugsync.jceks
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization completed with -- ldapUrl: ldap://xxx.389, ldapBindDn: uid=nssproxy,ou=People,dc=mts,dc=ru, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: dc=mts,dc=ru, userSearchBase: dc=mts,dc=ru, userSearchScope: 2, userObjectClass: person, userSearchFilter: uid={0}, extendedUserSearchFilter: (&(objectclass=person)(uid={0})), userNameAttribute: uid, userSearchAttributes: [uid], userGroupNameAttributeSet: [uid], pagedResultsEnabled: true, pagedResultsSize: 10000, groupSearchEnabled: true, groupSearchBase: ou=Group,dc=mts,dc=ru, groupSearchScope: 2, groupObjectClass: posixGroup, groupSearchFilter: cn=soft, extendedGroupSearchFilter: (&(objectclass=posixGroup)(cn=soft)(memberUid={0})), extendedAllGroupsSearchFilter: (&(objectclass=posixGroup)(cn=soft)), groupMemberAttributeName: memberUid, groupNameAttribute: cn, groupUserMapSyncEnabled: true, ldapReferral: follow
05 Jul 2016 12:34:29 DEBUG LdapUserGroupBuilder [UnixUserSyncThread] - No controls were sent from the server
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder.updateSink() completed with user count: 0
05 Jul 2016 12:34:29 DEBUG LdapUserGroupBuilder [UnixUserSyncThread] - Total No. of users saved = 0
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - groupSearch is enabled, would search for groups and compute memberships
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization started
05 Jul 2016 12:34:29 DEBUG AbstractJavaKeyStoreProvider [UnixUserSyncThread] - backing jks path initialized to file:/usr/hdp/current/ranger-usersync/conf/ugsync.jceks
05 Jul 2016 12:34:29 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization completed with -- ldapUrl: ldap://xxx.389, ldapBindDn: uid=nssproxy,ou=People,dc=mts,dc=ru, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: dc=mts,dc=ru, userSearchBase: dc=mts,dc=ru, userSearchScope: 2, userObjectClass: person, userSearchFilter: uid={0}, extendedUserSearchFilter: (&(objectclass=person)(uid={0})), userNameAttribute: uid, userSearchAttributes: [uid], userGroupNameAttributeSet: [uid], pagedResultsEnabled: true, pagedResultsSize: 10000, groupSearchEnabled: true, groupSearchBase: ou=Group,dc=mts,dc=ru, groupSearchScope: 2, groupObjectClass: posixGroup, groupSearchFilter: cn=soft, extendedGroupSearchFilter: (&(objectclass=posixGroup)(cn=soft)(memberUid={0})), extendedAllGroupsSearchFilter: (&(objectclass=posixGroup)(cn=soft)), groupMemberAttributeName: memberUid, groupNameAttribute: cn, groupUserMapSyncEnabled: true, ldapReferral: follow
05 Jul 2016 12:34:29 INFO UserGroupSync [UnixUserSyncThread] - End: initial load of user/group from source==>sink
05 Jul 2016 12:34:29 INFO UserGroupSync [UnixUserSyncThread] - Done initializing user/group source and sink
05 Jul 2016 12:34:29 DEBUG UserGroupSync [UnixUserSyncThread] - Sleeping for [3600000] milliSeconds
07-04-2016 11:49 AM
@Sagar Shimpi Thanks. I configured it following that link, but synchronization still does not work.
07-04-2016 11:32 AM
Hi, I want Ranger to work with LDAP. I have set it up, but the user sync finds no users. I do not want to take all users from LDAP, only those from one group, "soft". Usersync log:
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder created
04 Jul 2016 14:28:55 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.username.regex
04 Jul 2016 14:28:55 INFO AbstractMapper [UnixUserSyncThread] - Initializing for ranger.usersync.mapping.groupname.regex
04 Jul 2016 14:28:55 INFO UserGroupSync [UnixUserSyncThread] - initializing source: org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder
04 Jul 2016 14:28:55 INFO UserGroupSync [UnixUserSyncThread] - Begin: initial load of user/group from source==>sink
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder updateSink started
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization started
04 Jul 2016 14:28:55 DEBUG AbstractJavaKeyStoreProvider [UnixUserSyncThread] - backing jks path initialized to file:/usr/hdp/current/ranger-usersync/conf/ugsync.jceks
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization completed with -- ldapUrl: ldap://xxx:389, ldapBindDn: uid=proxy,ou=People,dc=xxx,dc=ru, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: dc=xxx,dc=ru, userSearchBase: dc=xxx,dc=ru, userSearchScope: 2, userObjectClass: person, userSearchFilter: uid={0}, extendedUserSearchFilter: (&(objectclass=person)(uid={0})), userNameAttribute: uid, userSearchAttributes: [uid], userGroupNameAttributeSet: [uid], pagedResultsEnabled: true, pagedResultsSize: 10000, groupSearchEnabled: true, groupSearchBase: ou=Group,dc=xxx,dc=ru, groupSearchScope: 2, groupObjectClass: posixGroup, groupSearchFilter: cn=soft, extendedGroupSearchFilter: (&(objectclass=posixGroup)(cn=soft)(memberUid={0})), extendedAllGroupsSearchFilter: (&(objectclass=posixGroup)(cn=soft)), groupMemberAttributeName: memberUid, groupNameAttribute: cn, groupUserMapSyncEnabled: true, ldapReferral: follow
04 Jul 2016 14:28:55 DEBUG LdapUserGroupBuilder [UnixUserSyncThread] - No controls were sent from the server
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder.updateSink() completed with user count: 0
04 Jul 2016 14:28:55 DEBUG LdapUserGroupBuilder [UnixUserSyncThread] - Total No. of users saved = 0
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - groupSearch is enabled, would search for groups and compute memberships
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization started
04 Jul 2016 14:28:55 DEBUG AbstractJavaKeyStoreProvider [UnixUserSyncThread] - backing jks path initialized to file:/usr/hdp/current/ranger-usersync/conf/ugsync.jceks
04 Jul 2016 14:28:55 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LdapUserGroupBuilder initialization completed with -- ldapUrl: ldap://xxx:389, ldapBindDn: uid=nssproxy,ou=People,dc=xxx,dc=ru, ldapBindPassword: ***** , ldapAuthenticationMechanism: simple, searchBase: dc=xxx,dc=ru, userSearchBase: dc=xxx,dc=ru, userSearchScope: 2, userObjectClass: person, userSearchFilter: uid={0}, extendedUserSearchFilter: (&(objectclass=person)(uid={0})), userNameAttribute: uid, userSearchAttributes: [uid], userGroupNameAttributeSet: [uid], pagedResultsEnabled: true, pagedResultsSize: 10000, groupSearchEnabled: true, groupSearchBase: ou=Group,dc=xxx,dc=ru, groupSearchScope: 2, groupObjectClass: posixGroup, groupSearchFilter: cn=soft, extendedGroupSearchFilter: (&(objectclass=posixGroup)(cn=soft)(memberUid={0})), extendedAllGroupsSearchFilter: (&(objectclass=posixGroup)(cn=soft)), groupMemberAttributeName: memberUid, groupNameAttribute: cn, groupUserMapSyncEnabled: true, ldapReferral: follow
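For reference, restricting sync to one posixGroup usually hinges on the usersync properties below (real Ranger property names; the values mirror the log above and are placeholders). Note that a memberof-based user filter only works if the directory exposes a memberOf attribute on user entries; with plain OpenLDAP posixGroup/memberUid, membership is resolved from the group side instead:

```properties
ranger.usersync.ldap.user.searchbase=ou=People,dc=xxx,dc=ru
ranger.usersync.ldap.user.objectclass=person
ranger.usersync.group.searchenabled=true
ranger.usersync.group.searchbase=ou=Group,dc=xxx,dc=ru
ranger.usersync.group.objectclass=posixGroup
ranger.usersync.group.searchfilter=cn=soft
ranger.usersync.group.memberattributename=memberUid
ranger.usersync.group.usermapsyncenabled=true
```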