Member since: 03-21-2016
Posts: 38
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5984 | 01-29-2017 09:04 AM |
 | 415 | 01-04-2017 08:19 AM |
 | 253 | 10-12-2016 05:36 AM |
 | 409 | 10-12-2016 05:26 AM |
10-04-2018
06:27 AM
I changed the default Hive from 1.2 to 2.1 in hive-env.sh, and then it worked properly.
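For illustration only, a minimal sketch of the kind of hive-env.sh change this refers to, assuming the HDP layout where the Hive 2.1 client lives in its own directory; the variable names and paths below are hypothetical and depend on your hive-env template and stack version:
# hypothetical example: point the client at the Hive 2.1 install instead of the Hive 1.2 one
export HIVE_HOME=/usr/hdp/current/hive2-client
export HIVE_CONF_DIR=/usr/hdp/current/hive2-client/conf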
06-20-2017
06:31 PM
Team, I am using HDP 2.6 and Kerberos, with both Hive 1.2 and Hive 2.1 enabled. I have installed Interactive HiveServer2 and it is running properly. I am facing an issue with beeline. I have Spark2 installed and the Spark Thrift Server running. When I connect from the Hive 1.2 beeline, it works properly:
Beeline version 1.2.1000.2.6.0.3-8 by Apache Hive
beeline> !connect jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
Connecting to jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
Enter username for jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: rahul
Enter password for jdbc:hive2://hdp-qa7-n1.example.com:10016/default;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: ***********
Connected to: Spark SQL (version 2.1.0.2.6.0.3-8)
Driver: Hive JDBC (version 1.2.1000.2.6.0.3-8)
Transaction isolation: TRANSACTION_REPEATABLE_READ
But when I connect from the Hive 2.1 beeline, it doesn't work. Can you please help me find where I am going wrong?
Beeline version 2.1.0.2.6.0.3-8 by Apache Hive
beeline> !connect jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
Connecting to jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM
17/06/20 22:36:22 [main]: ERROR jdbc.HiveConnection: Error opening session
org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default})
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:168) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:155) ~[hive-exec-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:578) [hive-jdbc-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:188) [hive-jdbc-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) [hive-jdbc-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at java.sql.DriverManager.getConnection(DriverManager.java:664) [?:1.8.0_121]
at java.sql.DriverManager.getConnection(DriverManager.java:208) [?:1.8.0_121]
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:209) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.Commands.connect(Commands.java:1509) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.Commands.connect(Commands.java:1404) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1104) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1143) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:976) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:886) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:502) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:485) [hive-beeline-2.1.0.2.6.0.3-8.jar:2.1.0.2.6.0.3-8]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
at org.apache.hadoop.util.RunJar.run(RunJar.java:233) [hadoop-common-2.7.3.2.6.0.3-8.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:148) [hadoop-common-2.7.3.2.6.0.3-8.jar:?]
17/06/20 22:36:23 [main]: WARN jdbc.HiveConnection: Failed to connect to hdp-qa7-n1.example.com:10016
Error: Could not open client transport with JDBC Uri: jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: Could not establish connection to jdbc:hive2://hdp-qa7-n1.example.com:10016/;principal=hive/hdp-qa7-n1.example.com@EXAMPLE.COM: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null, configuration:{use:database=default}) (state=08S01,code=0)
06-01-2017
10:17 AM
@English Rose
Yes, I confirm that the settings I mentioned are working properly.
04-11-2017
09:52 AM
Hi Team, The jobs hang in the ACCEPTED queue, and the ApplicationMaster consumes memory and CPU without executing anything. Do we need to do more tuning in YARN? I have attached a screenshot for your reference. Thanks, Rahul
01-30-2017
10:42 AM
@David Streever @Daniel Kozlowski Since my NameNodes are in HA mode, I have to provide the HA nameservice in the xasecure.audit.destination.hdfs property as hdfs://cluster-nameservice:8020/ranger/audit. I also added a new property, xasecure.audit.destination.hdfs.batch.filespool.archive.dir=/var/log/hadoop/hdfs/audit/hdfs/spool/archive. Now the logs are landing in the archive folder, but the files are very big.
[root@meybgdlpmst3] # ls -lh /var/log/hadoop/hdfs/audit/hdfs/spool/archive
total 14G
-rw-r--r-- 1 hdfs hadoop 6.0G Jan 29 03:44 spool_hdfs_20170128-0344.33.log
-rw-r--r-- 1 hdfs hadoop 7.9G Jan 30 03:44 spool_hdfs_20170129-0344.37.log
Now, is there any property so that these log files get compressed automatically? With the logs growing every day at this rate, the disk will fill up one day. Thanks, Rahul
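For what it's worth, one generic workaround (not a Ranger property — just a sketch assuming the archive path above and a one-day age threshold) is to gzip older archive files from a cron job or by hand:
# sketch: compress archived spool files older than one day (adjust the path and age to your needs)
find /var/log/hadoop/hdfs/audit/hdfs/spool/archive -name 'spool_hdfs_*.log' -mtime +1 -exec gzip {} \;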
01-29-2017
09:04 AM
Hi Team and @Sagar Shimpi, The steps below helped me resolve the issue.
1) As I am using HDP-2.4.2, I needed to download the jar from http://www.apache.org/dyn/closer.cgi/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.tar.gz
2) Extract the tar file and copy apache-log4j-extras-1.2.17.jar to ALL the cluster nodes under /usr/hdp/<version>/hadoop-hdfs/lib (see the sketch at the end of this post). Note: you can also find apache-log4j-extras-1.2.17.jar in the /usr/hdp/<version>/hive/lib folder; I found it there later.
3) Then edit the advanced hdfs-log4j property from Ambari and replace the default hdfs-audit log4j properties with:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.maxIndex=30
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize=16106127360
## The figure 16106127360 is in bytes which is equal to 15GB ##
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit-%i.log.gz
The resulting .gz hdfs-audit log files look like this:
-rw-r--r-- 1 hdfs hadoop 384M Jan 28 23:51 hdfs-audit-2.log.gz
-rw-r--r-- 1 hdfs hadoop 347M Jan 29 07:40 hdfs-audit-1.log.gz
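As a side note on step 2, a minimal sketch of distributing the jar to all nodes, assuming a hypothetical host list and the 2.4.2.0-258 build directory mentioned elsewhere in these posts; adjust both to your cluster:
# hypothetical host list; copy the log4j-extras jar into hadoop-hdfs/lib on every node
for h in node1 node2 node3; do
  scp /usr/hdp/2.4.2.0-258/hive/lib/apache-log4j-extras-1.2.17.jar \
      root@${h}:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/
done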
01-27-2017
07:17 AM
Hi Team, The HDFS audit spool log files land directly in the /var/log/hadoop/hdfs/audit/hdfs/spool directory.
[root@meybgdlpmst3] # pwd
/var/log/hadoop/hdfs/audit/hdfs/spool
[root@meybgdlpmst3(172.23.34.6)] # ls -lh
total 20G
drwxr-xr-x 2 hdfs hadoop 4.0K Jan 7 06:57 archive
-rw-r--r-- 1 hdfs hadoop 23K Jan 26 14:30 index_batch_batch.hdfs_hdfs_closed.json
-rw-r--r-- 1 hdfs hadoop 6.1K Jan 27 11:05 index_batch_batch.hdfs_hdfs.json
-rw-r--r-- 1 hdfs hadoop 7.8G Jan 25 03:43 spool_hdfs_20170124-0343.41.log
-rw-r--r-- 1 hdfs hadoop 6.6G Jan 26 03:43 spool_hdfs_20170125-0343.43.log
-rw-r--r-- 1 hdfs hadoop 3.9G Jan 27 03:44 spool_hdfs_20170126-0344.05.log
-rw-r--r-- 1 hdfs hadoop 1.6G Jan 27 11:05 spool_hdfs_20170127-0344.22.log
[root@meybgdlpmst3] # ll archive/
total 0
The spool directories above are configured under the ranger-hdfs-audit section (configuration attached), but the log files still do not move to the archive folder and hence consume too much disk space. Does any additional configuration need to be done? Any help will be highly appreciated. Thanks, Rahul
- Tags:
- Hadoop Core
- HDFS
01-20-2017
09:39 AM
@Sagar Shimpi I checked the links above, but I didn't find how to compress and zip the log file automatically when it reaches the specified MaxFileSize. I need to compress the log files and keep them for up to 30 days, after which they should get deleted automatically. So what additional properties do I need to add to produce .gz files for the hdfs-audit logs? At present my properties are set as:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=1GB
log4j.appender.DRFAAUDIT.MaxBackupIndex=30
01-19-2017
02:40 PM
Hi Team, I want to rotate and archive (as .gz) the hdfs-audit log files based on size, but after the file reaches 350 KB it is not getting archived. The properties I have set in hdfs-log4j are:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
#log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=hdfs-audit-%d{yyyy-MM}.gz
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=350KB
log4j.appender.DRFAAUDIT.MaxBackupIndex=9
Any help will be highly appreciated.
01-04-2017
08:19 AM
Finally I was able to resolve the issue.
1. I changed the script extension from ambari_ldap_sync_all.sh to ambari_ldap_sync_all.exp.
2. I also changed the path of ambari-server to the absolute path /usr/sbin/ambari-server and added an exit statement at the end of the script.
#!/usr/bin/expect
spawn /usr/sbin/ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
exit
3. Finally, inside the crontab, I made the entry:
0 23 * * * /usr/bin/expect /opt/ambari_ldap_sync_all.exp
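As a quick check, the sync can also be run once by hand with the same interpreter and path the crontab entry uses, rather than waiting for the scheduled run:
# run the sync once manually with the same interpreter the cron entry uses
/usr/bin/expect /opt/ambari_ldap_sync_all.exp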
12-27-2016
07:41 AM
@Jay SenSharma Yes, the expect package is already installed. Actually, my issue is that the script does not get executed at 3 PM even though it is configured in crontab. As I said, when I run the script manually as ./ambari_ldap_sync_all.sh, it works. So is there any alternative way to get the script executed automatically from crontab?
12-27-2016
07:22 AM
Hi Team, I have used an ambari-ldap sync script, but I get the following error when I run the command below. One thing I noticed is that if I run the script manually as ./ambari_ldap_sync_all.sh, it gets executed. I have also shown my ambari-ldap sync script below. So the script does not get executed from crontab with the 'sh' command.
[root@host1(172.23.34.4)] # sh ambari_ldap_sync_all.sh
ambari_ldap_sync_all.sh: line 3: spawn: command not found
couldn't read file "Enter Ambari Admin login:": no such file or directory
ambari_ldap_sync_all.sh: line 7: send: command not found
couldn't read file "Enter Ambari Admin password:": no such file or directory
ambari_ldap_sync_all.sh: line 11: send: command not found
couldn't read file "eof": no such file or directory
[root@host1(172.23.34.4)] # cat ambari_ldap_sync_all.sh
#!/usr/bin/expect
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
[root@host1(172.23.34.4)] # crontab -e
00 15 * * * /ambari_ldap_sync_all.sh
Can someone help me with how to write the expect script, if that is what is required?
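For illustration, the "spawn: command not found" errors above are what happens when the script is interpreted by sh, which does not know the expect commands; a minimal sketch of invoking it through the expect interpreter instead (same path as the crontab entry above), consistent with the accepted fix shown earlier in this feed:
# running it with 'sh' bypasses the #!/usr/bin/expect shebang, so spawn/expect/send are unknown;
# call the expect interpreter explicitly (or make the script executable and run it directly)
/usr/bin/expect /ambari_ldap_sync_all.sh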
12-23-2016
11:41 AM
@Neeraj Sabharwal Hi Neeraj, I have used your ambari-ldap sync script, but I get the following error when I run the command below. One thing I noticed is that if I run the script manually as ./ambari_ldap_sync_all.sh, it gets executed. I have also shown my ambari-ldap sync script below. So the script does not get executed from crontab with the 'sh' command. Please help.
[root@host1(172.23.34.4)] # sh ambari_ldap_sync_all.sh
ambari_ldap_sync_all.sh: line 3: spawn: command not found
couldn't read file "Enter Ambari Admin login:": no such file or directory
ambari_ldap_sync_all.sh: line 7: send: command not found
couldn't read file "Enter Ambari Admin password:": no such file or directory
ambari_ldap_sync_all.sh: line 11: send: command not found
couldn't read file "eof": no such file or directory
[root@host1(172.23.34.4)] # cat ambari_ldap_sync_all.sh
#!/usr/bin/expect
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
[root@host1(172.23.34.4)] # crontab -e
00 15 * * * /ambari_ldap_sync_all.sh
12-20-2016
04:06 PM
@Jay SenSharma I am using Ambari 2.2.2 and I am fetching users from Active Directory LDAP.
12-20-2016
03:57 PM
Hi Team, We are working with a client that has 11,000 users in AD. From Ambari, I tried an LDAP sync using the command ambari-server sync-ldap --all, but I get the following error:
Syncing all....................................................ERROR: Exiting with exit code 1.
REASON:
Caught exception running LDAP sync. [LDAP: error code 4 - Sizelimit
Exceeded]; nested exception is javax.naming.SizeLimitExceededException:
[LDAP: error code 4 - Sizelimit Exceeded];
Please help: what property do I need to add to the Ambari properties file? Thanks, Rahul
12-09-2016
10:47 AM
@Sindhu We have a client who is working with Hue, and Hue is integrated with AD. The users who log in to Hue are able to run queries and execute jobs; even from my own user I tried to execute queries and was able to view the logs. So why is just this one user unable to view the logs?
12-09-2016
10:31 AM
Hi Team, I am using Hue 3.8.1 with HDP-2.4.2. When I click the EXECUTE button on a Hive query in Hue, the following error is thrown:
Couldn't find log associated with operation handle: OperationHandle
[opType=EXECUTE_STATEMENT,
getHandleIdentifier()=7e5d0a7a-8a48-43a7-92cc-a8e2a2dc2543]
The error occurs only for this one particular user; other users are able to execute the same query and view the logs. I checked the hiveserver2.log file, and it shows the following for that particular user:
HiveServer2-Handler-Pool: Thread-90]: operation.Operation
(Operation.java:createOperationLog(206)) - Unable to create operation
log file:
/tmp/hive/operation_logs/1bae153d-d7a8-4072-996a-5056cce93fda/fcfdb6ea-34f2-4ee3-b
java.io.IOException: No such file or directory
But I am not able to understand how the other users can reach that directory while only this one user cannot. I found that the directory exists and has permissions under /tmp:
ll /tmp/hive |grep operation_logs
drwxr-xr-x 5 hive hadoop 4096 Dec 9 14:25 operation_logs
Please help. Thanks, Rahul
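A quick diagnostic sketch for the path named in the error, using the session directory from the log line above (the second id is truncated in the log, so only the session-level directory is checked here):
# confirm the per-session operation-log directory exists and who owns it
ls -ld /tmp/hive/operation_logs
ls -ld /tmp/hive/operation_logs/1bae153d-d7a8-4072-996a-5056cce93fda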
11-28-2016
06:31 AM
@Robert Levas Hi Robert, ssh and telnet to the second AD happen quickly when the first AD is down.
11-27-2016
08:26 AM
@jss Thanks a lot. That solved my issue and I am not getting DN heapsize alerts anymore.
11-13-2016
12:05 PM
@jss I am using Oracle JDK 1.7, and the Hadoop components are also using JDK 1.7.
[root@mst1 hadoop]# su - hdfs
[hdfs@mst1 ~]$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@edge1 ~]# su - hdfs
[hdfs@edge1 ~]$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
The Java versions are correct, but jps still does not work on the edge node.
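Beyond the Java version, one common thing to check here (general JDK behaviour, not specific to HDP) is the per-user performance-data directories under /tmp that jps reads: stale or inaccessible /tmp/hsperfdata_<user> directories, or processes owned by a different user than the one running jps, typically show up as "process information unavailable". A small diagnostic sketch, using one of the PIDs from the jps output in the original question below:
# list the per-user perf-data dirs that jps consults
ls -ld /tmp/hsperfdata_*
# compare the owner of one of the "unavailable" PIDs with those directory owners
ps -o user=,comm= -p 20712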
11-13-2016
09:09 AM
1 Kudo
Hi Team, Running the jps command on the edge node shows "process information unavailable", but when I run the same command on the master node, it shows all the processes clearly. Any help would be appreciated.
[root@edge1 ~]# jps
20712 -- process information unavailable
23022 -- process information unavailable
21612 -- process information unavailable
20888 -- process information unavailable
22085 -- process information unavailable
22650 -- process information unavailable
19360 -- process information unavailable
22768 -- process information unavailable
22324 -- process information unavailable
29348 -- process information unavailable
20591 -- process information unavailable
19212 -- process information unavailable
24930 Main
20433 -- process information unavailable
22509 -- process information unavailable
22878 -- process information unavailable
20692 -- process information unavailable
[root@edge1 ~]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@edge1 ~]# echo $JAVA_HOME
/usr/java/jdk1.7.0_67
[root@edge1 ~]# which jps
/usr/java/jdk1.7.0_67/bin/jps
[root@mst1 ~]# jps
5799 Elasticsearch
20959 RunJar
3493 SparkSubmit
15483 JobHistoryServer
14100 AmbariServer
21783 jar
31019 Jps
4617 EmbeddedServer
8597 HMaster
2859 Main
4415 JournalNode
12748 Bootstrap
6858 NameNode
30657 UnixAuthenticationService
9303 RunJar
10945 HistoryServer
4698 QuorumPeerMain
11951 ApplicationHistoryServer
19142 RunJar
5819 DFSZKFailoverController
16870 ResourceManager
10152 HRegionServer
[root@mst1 ~]# echo $JAVA_HOME
/usr/java/jdk1.7.0_67
[root@mst1 ~]# which java
/usr/java/jdk1.7.0_67/bin/java
[root@mst1 ~]# which jps
/usr/java/jdk1.7.0_67/bin/jps
[root@mst1 ~]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
- Tags:
- Hadoop Core
- java
11-03-2016
04:47 AM
Hi Team, I am using HDP 2.4.2 and I have 39 DataNodes in the cluster. Initially the DataNode heap size was 1 GB by default. Then every DataNode started sending warning alerts even though there was no ingestion or job running. So I increased the DataNode heap size to 2 GB, but it still sends alerts at 60-70% heap usage, and sometimes 80-90%, even though the cluster is idle. Is there any calculation or formula for how much heap size I should give the DataNodes? Please help. Thanks, Rahul
10-20-2016
07:10 AM
@Robert Levas Yes, the primary AD server was down. Let's take the scenario where the primary AD goes down for network maintenance; the secondary AD should then take over. DNS is replicated from the primary AD server, so name resolution works properly since DNS is configured on both the primary and secondary AD servers. I also added the secondary AD IP to the /etc/resolv.conf file on all the nodes and made the entry in the DNS servers too, but the issue still persists. Any other solution?
10-19-2016
11:43 AM
@Robert Levas Hi Robert, I configured the backup AD server in the advanced krb5 section in Ambari and then restarted the Kerberos service. I am able to kinit from my user, but I get alerts for HDFS and Hive because both are unable to kinit against the backup AD server. Even hadoop fs -ls / shows the listing of files from HDFS, but it takes nearly 30-40 seconds to execute. Thanks, Rahul
10-19-2016
11:26 AM
1 Kudo
Hi Team, The region servers stop automatically and throw the error below. We have increased the maximum client connections to 200 in ZooKeeper, but it is still throwing errors. Please help.
2016-10-19 02:01:04,905 INFO [main-SendThread(host2.example.com:2181)] zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper service, session 0x3579f98459f001c has expired, closing socket connection
2016-10-19 02:01:04,906 FATAL [main-EventThread] regionserver.HRegionServer: ABORTING region server host1.example.com,16020,1476372293108: regionserver:16020-0x3579f98459f001c, quorum=host1.example.com:2181,host2.example.com:2181,host3.example.com:2181, baseZNode=/hbase-secure regionserver:16020-0x3579f98459f001c received expired from ZooKeeper, aborting
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:613)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:524)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:534)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
Thanks, Rahul
10-17-2016
04:14 PM
@Robert Levas
Hi Robert, Thanks a lot for the detailed information. I will implement this and let you know if I face any issue. Thanks, Rahul
10-17-2016
11:52 AM
Hi Team, Can I configure an additional AD in a Kerberized cluster through Ambari? I am using HDP-2.4.2 with Ambari 2.2.2. The reason I am asking is that if the first AD server goes down, the Hadoop services should still be able to kinit against the additional AD server; otherwise we are blocked and have to wait until the AD server is accessible again. Any thoughts? Thanks, Rahul
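For reference, standard Kerberos allows listing more than one KDC per realm in the [realms] section of krb5.conf, which in Ambari is typically edited through the advanced krb5-conf template (as in the 10-19 reply above). A minimal sketch, with hypothetical AD hostnames, for the EXAMPLE.COM realm used elsewhere in these posts:
[realms]
  EXAMPLE.COM = {
    kdc = ad1.example.com
    kdc = ad2.example.com
    admin_server = ad1.example.com
  }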
10-12-2016
05:36 AM
1 Kudo
@Constantin Stanca Hi Constantin, The issue was that a hadoop folder had previously been created under /usr/hdp, whereas there should be only two entries under /usr/hdp: 2.4.2.0-258 and current. There should not be any additional folders besides those two. After removing the hadoop folder from /usr/hdp, the issue was resolved. Thanks, Rahul
10-12-2016
05:26 AM
1 Kudo
@Deepak Sharma @Vipin Rathor Hi All, The users are in ou=staff,ou=lab,ou=users,dc=example,dc=com and the groups are in ou=Groups,dc=example,dc=com, and the users were syncing properly. In the group config, everything was correct. The issue was in the user search base, which I had initially set to ou=lab,ou=users,dc=example,dc=com. Once I changed the user search base to ou=staff,ou=lab,ou=users,dc=example,dc=com, all my groups started to sync, and I can finally see my groups under the Groups tab in Ranger. Thank you all for the help and ideas you provided. Thanks, Rahul
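For anyone verifying a similar setup, a quick ldapsearch sketch against the corrected search base; the bind DN and filter here are hypothetical placeholders, so substitute whatever your Ranger usersync is actually configured with:
# confirm that users are visible under the corrected user search base
ldapsearch -x -D "cn=admin,dc=example,dc=com" -W \
  -b "ou=staff,ou=lab,ou=users,dc=example,dc=com" "(objectClass=person)" cn memberOf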
09-26-2016
11:46 AM
@Deepak Sharma Hi Deepak, Users that belong to groups other than those 4 synced groups are syncing properly. I don't have any issue with the user sync; I have issues only with the group sync. Thanks, Rahul