Member since: 03-21-2016
Posts: 38
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Views | Posted
---|---
12048 | 01-29-2017 09:04 AM
917 | 01-04-2017 08:19 AM
556 | 10-12-2016 05:36 AM
907 | 10-12-2016 05:26 AM
10-04-2018
06:27 AM
I changed the default Hive from 1.2 to 2.1 in hive-env.sh, and then it worked properly.
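A minimal sketch of the kind of change, assuming the standard HDP 2.6 layout where the Hive 2.1 client lives under /usr/hdp/current/hive-server2-hive2 (the path and the exact variables in hive-env.sh are assumptions, not taken from the post):
# hive-env.sh: point the client at the Hive 2.1 install instead of the default Hive 1.2
export HIVE_HOME=/usr/hdp/current/hive-server2-hive2
export HIVE_CONF_DIR=/usr/hdp/current/hive-server2-hive2/conf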
06-20-2017
06:31 PM
Team, I am using HDP-2.6 with Kerberos, and both Hive 1.2 and Hive 2.1 are enabled. I have installed interactive HiveServer2 and it is running properly. I am facing an issue with Beeline. I have Spark2 installed and the Spark Thrift Server running. When I try to connect from the Hive 1.2 Beeline, it works properly:
Beeline version 1.2.1000.2.6.0.3-8 by Apache Hive
beeline> !connect jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
Connecting to jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
Enter username for jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: rahul
Enter password for jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: ***********
Connected to: Spark SQL (version 2.1.0.2.6.0.3-8)
Driver: Hive JDBC (version 1.2.1000.2.6.0.3-8)
Transaction isolation: TRANSACTION_REPEATABLE_READ
But when I try to connect from the Hive 2.1 Beeline, it doesn't work. Can you please point out where I am going wrong?
Beeline version 2.1.0.2.6.0.3-8 by Apache Hive
beeline> !connect jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
Connecting to jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
17/06/20 22:36:22 [main]: ERROR jdbc.HiveConnection: Error opening session
org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:168) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:155) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:578) [hive-jdbc-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:188) [hive-jdbc-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) [hive-jdbc-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at java.sql.DriverManager.getConnection(DriverManager.java:664) [?:1.8.0_121]
at java.sql.DriverManager.getConnection(DriverManager.java:208) [?:1.8.0_121]
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:209) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.Commands.connect(Commands.java:1509) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.Commands.connect(Commands.java:1404) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1104) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1143) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:976) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:886) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:502) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:485) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at org.apache.hadoop.util.RunJar.run(RunJar.java:233) [hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:148) [hadoop-common-2.7.3.2.6.0.3-8.jar:?]
17/06/20 22:36:23 [main]: WARN jdbc.HiveConnection: Failed to connect to hdp-qa7-n1.example.com:10016
Error: Could not open client transport with JDBC Uri: jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: Could not establish connection to jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) (state=08S01,code=0)
Labels:
- Apache Hive
- Apache Spark
06-01-2017
10:17 AM
@English Rose
Yes, I confirm that the settings I mentioned are working properly.
01-30-2017
10:42 AM
@David Streever @Daniel Kozlowski Since my NameNodes are in HA mode, I have to provide the HA name in the xasecure.audit.destination.hdfs property as hdfs://cluster-nameservice:8020/ranger/audit. I also added a new property, xasecure.audit.destination.hdfs.batch.filespool.archive.dir=/var/log/hadoop/hdfs/audit/hdfs/spool/archive. Now the logs are coming into the archive folder, but the files are very big.
[root@meybgdlpmst3] # ls -lh /var/log/hadoop/hdfs/audit/hdfs/spool/archive
total 14G
-rw-r--r-- 1 hdfs hadoop 6.0G Jan 29 03:44 spool_hdfs_20170128-0344.33.log
-rw-r--r-- 1 hdfs hadoop 7.9G Jan 30 03:44 spool_hdfs_20170129-0344.37.log
Now, is there any property so that these log files get compressed automatically? With the logs growing every day at this rate, the disk will fill up one day. Thanks, Rahul
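Putting the two audit settings from this post together, the ranger-hdfs-audit entries would look roughly like this (the values are taken verbatim from the post; the directory key is written here as xasecure.audit.destination.hdfs.dir, the usual Ranger audit key, and should be treated as an assumption):
xasecure.audit.destination.hdfs.dir=hdfs://cluster-nameservice:8020/ranger/audit
xasecure.audit.destination.hdfs.batch.filespool.archive.dir=/var/log/hadoop/hdfs/audit/hdfs/spool/archive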
01-29-2017
09:04 AM
Hi Team and @Sagar Shimpi, the steps below helped me resolve the issue.
1) As I am using HDP-2.4.2, I needed to download the jar from http://www.apache.org/dyn/closer.cgi/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.tar.gz
2) Extract the tar file and copy apache-log4j-extras-1.2.17.jar to ALL the cluster nodes, into /usr/hdp/<version>/hadoop-hdfs/lib. Note: apache-log4j-extras-1.2.17.jar can also be found in /usr/hdp/<version>/hive/lib; I found that later. A distribution sketch is shown at the end of this post.
3) Then edit the Advanced hdfs-log4j property from Ambari and replace the default hdfs-audit log4j properties with:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.maxIndex=30
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize=16106127360
## The figure 16106127360 is in bytes which is equal to 15GB ##
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit-%i.log.gz
The resulting hdfs-audit log files in .gz are:
-rw-r--r-- 1 hdfs hadoop 384M Jan 28 23:51 hdfs-audit-2.log.gz
-rw-r--r-- 1 hdfs hadoop 347M Jan 29 07:40 hdfs-audit-1.log.gz
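For step 2, a minimal distribution sketch, assuming a plain-text file cluster_hosts.txt that lists every cluster node (the file name is an assumption; <version> is the HDP version placeholder from the step above):
# copy the log4j-extras jar to every node's hadoop-hdfs lib directory
for h in $(cat cluster_hosts.txt); do
  scp apache-log4j-extras-1.2.17.jar "$h":/usr/hdp/<version>/hadoop-hdfs/lib/
done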
01-27-2017
07:17 AM
Hi Team, The hdfs audit spool log files land directly in the /var/log/hadoop/hdfs/audit/hdfs/spool directory.
[root@meybgdlpmst3] # pwd
/var/log/hadoop/hdfs/audit/hdfs/spool
[root@meybgdlpmst3(172.23.34.6)] # ls -lh
total 20G
drwxr-xr-x 2 hdfs hadoop 4.0K Jan 7 06:57 archive
-rw-r--r-- 1 hdfs hadoop 23K Jan 26 14:30 index_batch_batch.hdfs_hdfs_closed.json
-rw-r--r-- 1 hdfs hadoop 6.1K Jan 27 11:05 index_batch_batch.hdfs_hdfs.json
-rw-r--r-- 1 hdfs hadoop 7.8G Jan 25 03:43 spool_hdfs_20170124-0343.41.log
-rw-r--r-- 1 hdfs hadoop 6.6G Jan 26 03:43 spool_hdfs_20170125-0343.43.log
-rw-r--r-- 1 hdfs hadoop 3.9G Jan 27 03:44 spool_hdfs_20170126-0344.05.log
-rw-r--r-- 1 hdfs hadoop 1.6G Jan 27 11:05 spool_hdfs_20170127-0344.22.log
[root@meybgdlpmst3] # ll archive/
total 0
Attached is a screenshot of the above spool directories configured under the ranger-hdfs-audit section, but the log files still do not move into the archive folder and hence consume too much disk space. Does any additional configuration need to be done? Any help will be highly appreciated. Thanks, Rahul
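For reference, the filespool settings in question usually look like the following; the property keys are the standard Ranger audit filespool names and are an assumption here, while the directory values come from the listing above and the 01-30-2017 follow-up:
xasecure.audit.destination.hdfs.batch.filespool.dir=/var/log/hadoop/hdfs/audit/hdfs/spool
xasecure.audit.destination.hdfs.batch.filespool.archive.dir=/var/log/hadoop/hdfs/audit/hdfs/spool/archive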
Tags:
- Hadoop Core
- HDFS
Labels:
- Apache Hadoop
01-20-2017
09:39 AM
@Sagar Shimpi I checked the above links, but I didn't find how to compress and zip the log file automatically once it reaches the specified MaxFileSize. I need to compress the log files and keep them for up to 30 days, after which they should be deleted automatically. So what additional properties do I need to add to produce .gz files for the hdfs-audit logs? At present my property is set as:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=1GB
log4j.appender.DRFAAUDIT.MaxBackupIndex=30
01-19-2017
02:40 PM
Hi Team, I want to rotate and archive (as .gz) the hdfs-audit log files based on size, but after reaching 350KB the file is not getting archived. The properties I have set in hdfs-log4j are:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
#log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=hdfs-audit-%d{yyyy-MM}.gz
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=350KB
log4j.appender.DRFAAUDIT.MaxBackupIndex=9
Any help will be highly appreciated.
Labels:
- Apache Hadoop
01-04-2017
08:19 AM
Finally I was able to resolve the issue.
1. I changed the script extension from ambari_ldap_sync_all.sh to ambari_ldap_sync_all.exp.
2. I also changed to the absolute path of ambari-server, /usr/sbin/ambari-server, and added an exit statement at the end of the script.
#!/usr/bin/expect
spawn /usr/sbin/ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
exit
3. Finally, inside the crontab, I made the entry:
0 23 * * * /usr/bin/expect /opt/ambari_ldap_sync_all.exp
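A quick way to verify the setup before relying on cron, a sketch that uses only the paths from this post:
# run the expect script once by hand; it should walk through the Ambari login prompts
/usr/bin/expect /opt/ambari_ldap_sync_all.exp
# confirm the crontab entry is in place
crontab -l | grep ambari_ldap_sync_all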
12-27-2016
07:41 AM
@Jay SenSharma Yes, the expect package is already installed. Actually, my issue is that the script is not getting executed at 3pm even though it is configured in crontab. As I said, when I run the command manually as ./ambari_ldap_sync_all.sh, it works. Is there any alternative so that the script can get executed automatically from crontab?
12-27-2016
07:22 AM
Hi Team, I have used an ambari-ldap sync script, but I get the following error when I run the command below. One thing I noticed is that if I run the script manually as ./ambari_ldap_sync_all.sh, it does get executed. I have also shown my ambari-ldap sync script below. So the script is not getting executed from crontab with the 'sh' command.
[root@host1(172.23.34.4)] # sh ambari_ldap_sync_all.sh
ambari_ldap_sync_all.sh: line 3: spawn: command not found
couldn't read file "Enter Ambari Admin login:": no such file or directory
ambari_ldap_sync_all.sh: line 7: send: command not found
couldn't read file "Enter Ambari Admin password:": no such file or directory
ambari_ldap_sync_all.sh: line 11: send: command not found
couldn't read file "eof": no such file or directory
[root@host1(172.23.34.4)] # cat ambari_ldap_sync_all.sh
#!/usr/bin/expect
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
[root@host1(172.23.34.4)] # crontab -e
00 15 * * * /ambari_ldap_sync_all.sh
Can someone help me write the expect script correctly, if that is what is required?
Labels:
- Apache Ambari
12-23-2016
11:41 AM
@Neeraj Sabharwal Hi Neeraj, I have used your ambari-ldap sync script, but I get the following error when I run the command below. One thing I noticed is that if I run the script manually as ./ambari_ldap_sync_all.sh, it does get executed. I have also shown my ambari-ldap sync script below. So the script is not getting executed from crontab with the 'sh' command. Please help.
[root@host1(172.23.34.4)] # sh ambari_ldap_sync_all.sh
ambari_ldap_sync_all.sh: line 3: spawn: command not found
couldn't read file "Enter Ambari Admin login:": no such file or directory
ambari_ldap_sync_all.sh: line 7: send: command not found
couldn't read file "Enter Ambari Admin password:": no such file or directory
ambari_ldap_sync_all.sh: line 11: send: command not found
couldn't read file "eof": no such file or directory
[root@host1(172.23.34.4)] # cat ambari_ldap_sync_all.sh
#!/usr/bin/expect
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
[root@host1(172.23.34.4)] # crontab -e
00 15 * * * /ambari_ldap_sync_all.sh
11-27-2016
08:26 AM
@jss Thanks a lot. That solved my issue and I am not getting DN heapsize alerts anymore.
11-13-2016
12:05 PM
@jss I am using Oracle JDK 1.7, and the Hadoop components are also using JDK 1.7.
[root@mst1 hadoop]# su - hdfs
[hdfs@mst1 ~]$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@edge1 ~]# su - hdfs
[hdfs@edge1 ~]$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
The Java versions are correct, but jps is still not working on the edge node.
11-13-2016
09:09 AM
1 Kudo
Hi Team, Running the jps command on the edge node shows "process information unavailable", but when I run the same command on the master node, it shows all the processes clearly. Any help would be appreciated.
[root@edge1 ~]# jps
20712 -- process information unavailable
23022 -- process information unavailable
21612 -- process information unavailable
20888 -- process information unavailable
22085 -- process information unavailable
22650 -- process information unavailable
19360 -- process information unavailable
22768 -- process information unavailable
22324 -- process information unavailable
29348 -- process information unavailable
20591 -- process information unavailable
19212 -- process information unavailable
24930 Main
20433 -- process information unavailable
22509 -- process information unavailable
22878 -- process information unavailable
20692 -- process information unavailable
[root@edge1 ~]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@edge1 ~]# echo $JAVA_HOME
/usr/java/jdk1.7.0_67
[root@edge1 ~]# which jps
/usr/java/jdk1.7.0_67/bin/jps
[root@mst1 ~]# jps
5799 Elasticsearch
20959 RunJar
3493 SparkSubmit
15483 JobHistoryServer
14100 AmbariServer
21783 jar
31019 Jps
4617 EmbeddedServer
8597 HMaster
2859 Main
4415 JournalNode
12748 Bootstrap
6858 NameNode
30657 UnixAuthenticationService
9303 RunJar
10945 HistoryServer
4698 QuorumPeerMain
11951 ApplicationHistoryServer
19142 RunJar
5819 DFSZKFailoverController
16870 ResourceManager
10152 HRegionServer
[root@mst1 ~]# echo $JAVA_HOME
/usr/java/jdk1.7.0_67
[root@mst1 ~]# which java
/usr/java/jdk1.7.0_67/bin/java
[root@mst1 ~]# which jps
/usr/java/jdk1.7.0_67/bin/jps
[root@mst1 ~]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
Tags:
- Hadoop Core
- java
11-03-2016
04:47 AM
Hi Team, I am using HDP 2.4.2 and I have 39 DataNodes in the cluster. Initially the DataNode heap size was 1 GB by default, and then warning alerts started coming from every DataNode even though there was no ingestion or job running. So I increased the DN heap size to 2 GB, but it still sends alerts showing 60-70% heap usage, and sometimes 80-90%, even though the cluster is idle. Is there any calculation or formula for how much heap size I should give the DataNodes? Please help. Thanks, Rahul
Labels:
- Apache Hadoop
10-12-2016
05:36 AM
1 Kudo
@Constantin Stanca Hi Constantin, The issue was that a hadoop folder had previously been created under /usr/hdp. There should be only two folders under /usr/hdp, named 2.4.2.0-258 and current, and no additional folders. After removing the hadoop folder from /usr/hdp, the issue got resolved. Thanks, Rahul
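A quick check of the expected layout, using the version directory from this post (it will differ per HDP release):
# /usr/hdp should contain only the version directory and the current symlink
ls /usr/hdp
# expected: 2.4.2.0-258  current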
10-12-2016
05:26 AM
1 Kudo
@Deepak Sharma @Vipin Rathor Hi All, The users are in ou=staff,ou=lab,ou=users,dc=example,dc=com and the groups are in ou=Groups,dc=example,dc=com, and the users were syncing properly. In the group config, everything was correct. The issue was in the user search base, which I had initially set to ou=lab,ou=users,dc=example,dc=com. After I changed the user search base to ou=staff,ou=lab,ou=users,dc=example,dc=com, all my groups started to sync, and I can finally see my groups under the Groups tab in Ranger. Thank you all for all the help and ideas you provided. Thanks, Rahul
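For reference, the corresponding usersync settings would look roughly like this; the DN values come from the post, while the property keys are the usual Ranger usersync names and should be treated as an assumption:
ranger.usersync.ldap.user.searchbase=ou=staff,ou=lab,ou=users,dc=example,dc=com
ranger.usersync.group.searchbase=ou=Groups,dc=example,dc=com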
09-26-2016
11:46 AM
@Deepak Sharma Hi Deepak, Users that belong to groups other than those 4 synced groups are syncing properly. I don't have any issue with user sync; I have an issue only with group sync. Thanks, Rahul
09-26-2016
10:16 AM
Hi Team, I have configured the group config in Ambari as:
Group Member Attribute: member
Group Name Attribute: cn
Group Object Class: group
Group Search Base: ou=Groups,dc=example,dc=com
Group Search Filter: cn=*
ranger.usersync.ldap.referral: follow
I have done an ldapsearch for one group, bdg_itadmin_s, as shown below:
# bdg_itadmin_s, example, example.com
dn: CN=bdg_itadmin_s,OU=Groups,DC=example,DC=com
objectClass: top
objectClass: group
cn: bdg_itadmin_s
distinguishedName: CN=bdg_itadmin_s,OU=Groups,DC=example,DC=com
instanceType: 4
whenCreated: 20160926083435.0Z
whenChanged: 20160926083435.0Z
uSNCreated: 11545972
uSNChanged: 11545972
name: bdg_itadmin_s
objectGUID:: iTJZ3zcD9UK6Xi40sxRB3A==
objectSid:: AQUAAAAAAAUVAAAADqCFIi054a3apg99awsAAA==
sAMAccountName: bdg_itadmin_s
sAMAccountType: 268435456
groupType: -2147483646
objectCategory: CN=Group,CN=Schema,CN=Configuration,DC=example,DC=com
dSCorePropagationData: 16010101000000.0Z
# search result
search: 2
result: 0 Success
control: 1.2.840.113556.1.4.319 false MIQAAAAFAgEABAA=
pagedresults: cookie=
# numResponses: 2
# numEntries: 1
Also, we have 15 groups configured in AD; however, we are able to see only 4 groups in Ranger after restarting Ranger. I am attaching the screenshot for your kind review. Can you please help us here? Regards, Rahul
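A quick way to confirm what usersync should be able to see, a sketch assuming a simple bind (the AD host and bind DN are placeholders, not taken from the post; the search base and filter are the ones configured above):
ldapsearch -x -H ldap://<ad-host> -D "<bind-dn>" -W \
  -b "ou=Groups,dc=example,dc=com" "(cn=*)" cn member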
Labels:
- Apache Ranger
09-08-2016
09:32 PM
Hi Team, During installation from Ambari, the HDP clients were successfully installed on one of our DataNodes. However, it seems there is some error on the DataNode that we are not able to troubleshoot. The heartbeat response is being received, as can be seen in the ambari-agent.log file. I am attaching the screenshot for your kind review. Can you please help us here?
Thanks, Rahul
Labels:
- Apache Ambari
08-25-2016
09:53 AM
@Harini Yadav Hi Harini, Thanks for the answer. However, can you please help us raise this concern as a bug for the above subject so that it becomes available in the next release? Thanks, Rahul
08-25-2016
08:46 AM
1 Kudo
ldaps-new.jpg
Hi Team, I am trying to integrate Ranger with two HA Active Directory servers. How should I configure the two AD servers together from Ambari in Ranger, while integrating with AD, so that if one AD server goes down it will automatically fail over to the secondary AD server? Can you please let us know how this can be implemented? I have two AD servers in HA mode: the primary is 192.168.1.4 and the secondary is 192.168.1.8. I am also attaching a screenshot where only one AD server is configured. Thanks, Rahul
Labels:
- Apache Ambari
- Apache Ranger