Member since: 07-18-2016
Posts: 262
Kudos Received: 12
Solutions: 21
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6715 | 09-21-2018 03:16 AM |
| | 3226 | 07-25-2018 05:03 AM |
| | 4179 | 02-13-2018 02:00 AM |
| | 1953 | 01-21-2018 02:47 AM |
| | 38058 | 08-08-2017 10:32 AM |
06-12-2017
01:44 AM
1 Kudo
1) An edge node normally has the Hadoop client installed. Using this HDFS client, data is copied/moved to the DataNodes, and the metadata is stored in the NameNode.
2) The HDFS client acts as a staging/intermediate layer between the DataNodes and the NameNode: the client contacts the NameNode, the NameNode inserts the file name into the file system hierarchy and allocates a data block for it, and the NameNode then responds to the client with the identity of the DataNode and the destination data block.
3) "In turn, the worker node doesn't have any role to play here. Is my understanding right?" No. The actual task is done by the worker nodes, since the job is assigned to them by the ResourceManager. Job workflow: HDFS Client -> NameNode -> ResourceManager -> Worker/DataNode -> once all MR tasks are completed, the DataNodes hold the actual data and the metadata is stored in the NameNode.
4) Normally the edge node, master nodes, data nodes, and ResourceManager node are kept separate. Edge node: has the batch user ID, which is responsible for running the batch. Data node: contains the physical data of the Hadoop cluster. Name node: holds the metadata of the Hadoop cluster. Hope this is helpful!
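To illustrate point 1, here is a minimal sketch of the client-side copy run from the edge node; the local and HDFS paths are illustrative only:

```bash
# Run on the edge node, which has the HDFS client installed.
# The client asks the NameNode for a destination block, then streams
# the data directly to the chosen DataNode.
hdfs dfs -mkdir -p /user/batch/staging            # hypothetical target directory
hdfs dfs -put /data/staging/input.csv /user/batch/staging/
hdfs dfs -ls /user/batch/staging
```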
06-09-2017
03:47 AM
2) The HDFS client acts as a staging/intermediate layer between the DataNodes and the NameNode: the client contacts the NameNode, the NameNode inserts the file name into the file system hierarchy and allocates a data block for it, and the NameNode then responds to the client with the identity of the DataNode and the destination data block.
3) "In turn, the worker node doesn't have any role to play here. Is my understanding right?" No. The actual task is done by the worker nodes, since the job is assigned to them by the ResourceManager.
Job workflow:
HDFS Client -> NameNode -> ResourceManager -> Worker/DataNode -> once all MR tasks are completed, the DataNodes hold the actual data and the metadata is stored in the NameNode.
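As a hedged illustration of point 2, you can ask the NameNode where the blocks of a written file actually landed; the path here is hypothetical:

```bash
# fsck reports, per block, the DataNodes that the NameNode allocated.
hdfs fsck /user/batch/staging/input.csv -files -blocks -locations
```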
06-08-2017
10:10 AM
Will the edge node hold the place for that staging layer? 1) An edge node normally has the Hadoop client installed; using this HDFS client, data is copied/moved to the DataNodes and the metadata is stored in the NameNode. 2) The HDFS client acts as a staging/intermediate layer between the DataNodes and the NameNode. 3) Normally the edge node, master nodes, data nodes, and ResourceManager node are kept separate. Edge node: has the batch user ID, which is responsible for running the batch. Data node: contains the physical data of the Hadoop cluster. Name node: holds the metadata of the Hadoop cluster.
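For example, a quick check that the edge node's client knows which NameNode to contact, assuming a standard client configuration:

```bash
# The HDFS client on the edge node resolves the NameNode from its local config.
hdfs getconf -confKey fs.defaultFS    # prints e.g. hdfs://<namenode-host>:8020
```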
06-07-2017
07:48 AM
The support matrix for HDP 2.6 is JDK 1.8+, as below: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_support-matrices/content/ch_matrices-hdp.html
Table 2.2. HDP 2.6.0 JDK Support
| JDK Version |
|---|
| Open Source JDK 8† |
| Oracle JDK 8 |
| Open Source JDK 7† (deprecated) |
| Oracle JDK 7 (deprecated) |
Please share exactly when you are getting this error; that will give more clarity. Upgrading the Java version is not the solution.
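A quick way to check which JDK is actually in use; the ambari.properties path assumes an Ambari-managed cluster:

```bash
# JDK visible on the node
java -version
# JDK that Ambari registered at setup time (Ambari-managed clusters only)
grep '^java.home' /etc/ambari-server/conf/ambari.properties
```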
06-07-2017
04:22 AM
This issue occurs once every 3-4 days; the logs below were created in the HBase log directory.
Jun 3 00:16 gc.log-201705120309
Jun 3 00:19 gc.log-201706030016
Jun 3 00:45 gc.log-201706030019
Jun 4 13:43 gc.log-201706030045
Jun 4 17:10 gc.log-201706041343
Jun 7 12:18 gc.log-201706041710
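These files come from the GC-logging options set for the HBase JVMs. A sketch of the typical hbase-env.sh setting that produces a new timestamped gc.log on each daemon start (exact flags vary by HDP version):

```bash
# In hbase-env.sh: each daemon start writes a new gc.log-<YYYYMMDDHHMM> file,
# matching the rotation pattern in the listing above.
export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc.log-`date +'%Y%m%d%H%M'`"
```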
06-05-2017
09:47 AM
HBase Master Maximum Memory: 32 GB
HBase RegionServer Maximum Memory: 20 GB
06-05-2017
01:11 AM
1) What is this log for: /var/log/hbase/gc.log-201706041710? 2) Are the entries below, from the HBase GC log, errors?
centos:/var/log/hbase # tailf gc.log-201706041710
2017-06-05T09:02:36.768+0800: 57127.541: [GC (Allocation Failure) 2017-06-05T09:02:36.769+0800: 57127.541: [ParNew: 600799K->49774K(629120K), 0.0073147 secs] 1457924K->907076K(2027264K), 0.0075249 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
2017-06-05T09:02:40.990+0800: 57131.762: [GC (Allocation Failure) 2017-06-05T09:02:40.990+0800: 57131.762: [ParNew: 609006K->44417K(629120K), 0.0074685 secs] 1466308K->907097K(2027264K), 0.0076719 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
2017-06-05T09:03:33.493+0800: 57184.265: [GC (Allocation Failure) 2017-06-05T09:03:33.493+0800: 57184.265: [ParNew: 603649K->40908K(629120K), 0.0089747 secs] 1466329K->903627K(2027264K), 0.0091741 secs] [Times: user=0.09 sys=0.01, real=0.01 secs]
2017-06-05T09:03:37.142+0800: 57187.915: [GC (Allocation Failure) 2017-06-05T09:03:37.142+0800: 57187.915: [ParNew: 600140K->46821K(629120K), 0.0105207 secs] 1462859K->915053K(2027264K), 0.0107302 secs] [Times: user=0.13 sys=0.00, real=0.01 secs]
2017-06-05T09:03:53.208+0800: 57203.980: [GC (Allocation Failure) 2017-06-05T09:03:53.208+0800: 57203.980: [ParNew: 606053K->46967K(629120K), 0.0062205 secs] 1474285K->915216K(2027264K), 0.0064273 secs] [Times: user=0.09 sys=0.01, real=0.01 secs]
2017-06-05T09:04:34.979+0800: 57245.752: [GC (Allocation Failure) 2017-06-05T09:04:34.979+0800: 57245.752: [ParNew: 606199K->43336K(629120K), 0.0098484 secs] 1474448K->919955K(2027264K), 0.0100675 secs] [Times: user=0.13 sys=0.00, real=0.01 secs]
2017-06-05T09:04:38.378+0800: 57249.150: [GC (Allocation Failure) 2017-06-05T09:04:38.378+0800: 57249.150: [ParNew: 602568K->40051K(629120K), 0.0076083 secs] 1479187K->922158K(2027264K), 0.0078408 secs] [Times: user=0.10 sys=0.00, real=0.01 secs]
2017-06-05T09:05:30.275+0800: 57301.047: [GC (Allocation Failure) 2017-06-05T09:05:30.275+0800: 57301.047: [ParNew: 599283K->32614K(629120K), 0.0057252 secs] 1481390K->914735K(2027264K), 0.0512496 secs] [Times: user=0.08 sys=0.00, real=0.05 secs]
2017-06-05T09:05:38.363+0800: 57309.135: [GC (Allocation Failure) 2017-06-05T09:05:38.363+0800: 57309.135: [ParNew: 591846K->35399K(629120K), 0.0083321 secs] 1473967K->922741K(2027264K), 0.0085566 secs] [Times: user=0.10 sys=0.01, real=0.00 secs]
05-23-2017
07:39 PM
Server 1 :- 192.168.154.111 (centos)
Server 2 :- 192.168.154.113 (centos2)
1) First, install the MySQL client on the server from which you want to connect (configure the repository, then install the client; on CentOS the client package is mysql):
[root@centos ~]# yum install mysql
2) The MySQL server software is installed on centos2:
[root@centos2 ~]# yum install mysql-server
3) Connect to MySQL on centos2. The following users exist before creating the Hive user and database:
[root@centos2 ~]# mysql -u root -p
mysql> Select host,user from mysql.user;
+-----------------+-------+
| host | user |
+-----------------+-------+
| localhost | root |
+-----------------+-------+
8 rows in set (0.00 sec)
Creating the hive user and database:
mysql> create user 'hive'@'192.168.154.111' identified by 'hive';
Query OK, 0 rows affected (0.05 sec)
mysql> create user 'hive'@'localhost' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
mysql> create database hive;
Query OK, 1 row affected (0.00 sec)
mysql> grant ALL ON hive.* TO 'hive'@'192.168.154.111';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> flush hosts;
Query OK, 0 rows affected (0.00 sec)
mysql> grant ALL ON hive.* TO 'hive'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql> Select host,user from mysql.user;
+-----------------+-------+
| host | user |
+-----------------+-------+
| % | oozie |
| 127.0.0.1 | oozie |
| 127.0.0.1 | root |
| 192.168.154.111 | hive |
| 192.168.154.111 | oozie |
| 192.168.154.113 | oozie |
| ::1 | root |
| localhost | hive |
| localhost | oozie |
| localhost | root |
+-----------------+-------+
10 rows in set (0.00 sec)
4) Verify that you are able to connect to the MySQL server using the client (the client must be installed on the server from which you are connecting). Verify the user from the Ambari server 192.168.154.111 (client); 192.168.154.113 (centos2) is the MySQL server's IP address:
[root@centos ~]# mysql -u hive -h 192.168.154.113 -p
Enter password:
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select user();
+----------------------+
| user() |
+----------------------+
| hive@centos.test.com |
+----------------------+
1 row in set (0.01 sec)
mysql> exit;
[root@centos ~]#
5) If the client above is not able to connect to the MySQL server, grant permissions as below:
mysql> grant ALL ON hive.* TO 'hive'@'%' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)
mysql> show grants for hive;
+------------------------------------------------------------------------------------------------------+
| Grants for hive@%                                                                                     |
+------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'hive'@'%' IDENTIFIED BY PASSWORD '*4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC'   |
| GRANT ALL PRIVILEGES ON `hive`.* TO 'hive'@'%'                                                        |
+------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
One more verification from the Ambari server, to confirm the Hive username and password are working fine:
[root@centos]#/usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord hive
Metastore connection URL: jdbc:mysql://centos2.test.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
Starting metastore schema initialization to 0.13.0
Initialization script hive-schema-0.13.0.mysql.sql
Initialization script completed
schemaTool completeted
[root@centos ~]#
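As a final hedged check, the same credentials can be verified from the Ambari/Hive host with a one-liner (IPs as in the steps above):

```bash
# Should print hive@<client-host> and list the hive database if the grants are correct.
mysql -u hive -h 192.168.154.113 -p -e "SELECT CURRENT_USER(); SHOW DATABASES LIKE 'hive';"
```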
05-01-2017
08:50 AM
We are still getting this error; has anyone faced the same issue with HiveServer2?
2017-04-01 16:47:22,104 ERROR transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: Error validating the login [Caused by javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: passwd]
at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:109)
at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: passwd
at org.apache.hive.service.auth.PamAuthenticationProviderImpl.Authenticate(PamAuthenticationProviderImpl.java:46)
at org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(PlainSaslHelper.java:106)
at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:102)
... 8 more
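The failure points at the PAM service named in hive.server2.authentication.pam.services ("passwd" here). Two hedged checks, with the HiveServer2 host and OS user as placeholders:

```bash
# 1) The named PAM service must exist on the HiveServer2 host.
ls -l /etc/pam.d/passwd
# 2) Retest the login path end to end.
beeline -u "jdbc:hive2://<hs2-host>:10000/default" -n <os-user> -p
```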