Member since: 12-09-2015

- 115 Posts
- 43 Kudos Received
- 12 Solutions

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9519 | 07-10-2017 09:38 PM |
| | 6762 | 04-10-2017 03:24 PM |
| | 1860 | 03-04-2017 04:08 PM |
| | 6119 | 02-17-2017 10:42 PM |
| | 7442 | 02-17-2017 10:41 PM |
02-03-2017 08:56 PM

My bad, I didn't check the syntax earlier. Passing the principal explicitly works:

```
[hive@master2 ~]$ kinit -k -t /etc/security/keytabs/hive.service.keytab hive/master2.chrsv.com@KERBEROS.COM
[hive@master2 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_501
Default principal: hive/master2.chrsv.com@KERBEROS.COM

Valid starting     Expires            Service principal
02/03/17 14:55:41  02/04/17 14:55:41  krbtgt/KERBEROS.COM@KERBEROS.COM
        renew until 02/03/17 14:55:41
[hive@master2 ~]$
```
02-03-2017 08:15 PM

@Ameet Paranjape

```
[hive@master2 ~]$ kinit -k -t /etc/security/keytabs/hive.service.keytab
kinit: Cannot determine realm for host (principal host/master2.chrsv.com@)
```

Not sure why it is not picking up the realm, since all of this was set up by Ambari. However, when I run kadmin I can see the principals:

```
hive/master1.chrsv.com@KERBEROS.COM
hive/master2.chrsv.com@KERBEROS.COM
hive/worker1.chrsv.com@KERBEROS.COM
hive/worker2.chrsv.com@KERBEROS.COM
```
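The working command earlier in this thread passes the principal explicitly. That is what avoids the error above: when `kinit -k` is given no principal argument, it falls back to `host/<fqdn>` and has to resolve a default realm, which is exactly what fails here. A minimal sketch of constructing the service principal by hand, assuming `hostname -f` returns the node's FQDN; the realm and keytab path are the ones from this thread:

```shell
# Sketch: build the full service principal instead of relying on kinit's
# host/<fqdn>@<default_realm> fallback (which needs a resolvable default realm).
# Realm and keytab path are taken from this thread; adjust for your cluster.
KEYTAB=/etc/security/keytabs/hive.service.keytab
REALM=KERBEROS.COM
PRINCIPAL="hive/$(hostname -f)@$REALM"

# The actual login would then be:
#   kinit -k -t "$KEYTAB" "$PRINCIPAL"
echo "$PRINCIPAL"
```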
02-03-2017 05:15 PM

I have MySQL as the metastore database for Oozie, Hive, and Ambari; it was set up before the cluster was built. I do not see MySQL as a service in the Hive home. I've enabled a local MIT KDC, and I see the output below in metastore.log. I do not see anywhere in it that the metastore is authenticating to the KDC.

```
2017-02-03 11:00:13,343 INFO  [main]: timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(82)) - Initializing Timeline metrics sink.
2017-02-03 11:00:13,345 INFO  [main]: timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(100)) - Identified hostname = master2.chrsv.com, serviceName = hivemetastore
2017-02-03 11:00:14,257 INFO  [main]: timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(118)) - Collector Uri: http://worker1.chrsv.com:6188/ws/v1/timeline/metrics
2017-02-03 11:00:14,592 INFO  [main]: impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(206)) - Sink timeline started
2017-02-03 11:00:15,133 INFO  [main]: impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) - Scheduled snapshot period at 10 second(s).
2017-02-03 11:00:15,133 INFO  [main]: impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - hivemetastore metrics system started
2017-02-03 11:00:15,938 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(667)) - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
2017-02-03 11:00:16,495 INFO  [main]: metastore.ObjectStore (ObjectStore.java:initializeHelper(370)) - ObjectStore, initialize called
2017-02-03 11:00:26,552 INFO  [main]: metastore.ObjectStore (ObjectStore.java:getPMF(474)) - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
2017-02-03 11:00:39,897 INFO  [main]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(138)) - Using direct SQL, underlying DB is MYSQL
2017-02-03 11:00:39,915 INFO  [main]: metastore.ObjectStore (ObjectStore.java:setConf(284)) - Initialized ObjectStore
2017-02-03 11:00:41,013 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles_core(741)) - Added admin role in metastore
2017-02-03 11:00:41,034 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:createDefaultRoles_core(750)) - Added public role in metastore
2017-02-03 11:00:41,131 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:addAdminUsers_core(790)) - No user is added in admin role, since config is empty
2017-02-03 11:00:41,139 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:init(525)) - Begin calculating metadata count metrics.
2017-02-03 11:00:41,233 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:init(527)) - Finished metadata count metrics: 1 databases, 0 tables, 0 partitions.
2017-02-03 11:00:42,847 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(6298)) - Starting DB backed MetaStore Server with SetUGI enabled
2017-02-03 11:00:42,861 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(6352)) - Started the new metaserver on port [9083]...
2017-02-03 11:00:42,862 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(6354)) - Options.minWorkerThreads = 200
2017-02-03 11:00:42,862 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(6356)) - Options.maxWorkerThreads = 100000
2017-02-03 11:00:42,862 INFO  [main]: metastore.HiveMetaStore (HiveMetaStore.java:startMetaStore(6358)) - TCP keepalive = true
```

Labels:
- Apache Hive
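Whether the metastore even attempts Kerberos authentication is driven by hive-site.xml, so a quick sanity check is to look for the standard metastore security properties there. A sketch of that check; it writes a hypothetical sample config to a temp file so the snippet is runnable anywhere, but on a real node you would point `CONF` at `/etc/hive/conf/hive-site.xml`:

```shell
# Sketch: the properties that turn on Kerberos for the Hive metastore are
# hive.metastore.sasl.enabled, hive.metastore.kerberos.principal, and
# hive.metastore.kerberos.keytab.file. If sasl.enabled is false or absent,
# the metastore will not talk to the KDC at all. The config below is a
# hypothetical sample; on a cluster node set CONF=/etc/hive/conf/hive-site.xml.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<property><name>hive.metastore.sasl.enabled</name><value>true</value></property>
<property><name>hive.metastore.kerberos.principal</name><value>hive/_HOST@KERBEROS.COM</value></property>
<property><name>hive.metastore.kerberos.keytab.file</name><value>/etc/security/keytabs/hive.service.keytab</value></property>
EOF

# Count how many of the security properties are present.
MATCHES=$(grep -cE 'hive\.metastore\.(sasl\.enabled|kerberos)' "$CONF")
echo "$MATCHES"
```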
12-02-2016 02:17 PM

1 Kudo

I was able to create a Hive table on top of the JSON files. Below is the syntax I used to create the external table, so I do not have to move the data; all I need to do is add partitions.

```sql
CREATE EXTERNAL TABLE hdfs_audit (
  access string,
  agenthost string,
  cliip string,
  enforcer string,
  event_count bigint,
  event_dur_ms bigint,
  evttime timestamp,
  id string,
  logtype string,
  policy bigint,
  reason string,
  repo string,
  repotype bigint,
  requser string,
  restype string,
  resource string,
  result bigint,
  seq_num bigint)
PARTITIONED BY (evt_time string)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://CLUSTERNAME/ranger/database/hdfs';
```

Add a partition:

```sql
ALTER TABLE ranger_audit.hdfs_audit ADD PARTITION (evt_time='20160601')
LOCATION '/ranger/audit/hdfs/20160601/hdfs/20160601';
```
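Since each day of audit data needs its own partition, the ALTER TABLE above can be scripted. A small sketch that just prints one statement per day so it can be reviewed before being fed to Beeline; the date list is an example, and `IF NOT EXISTS` is added so reruns are harmless:

```shell
# Sketch: emit one ADD PARTITION statement per day, matching the directory
# layout used above (/ranger/audit/hdfs/<day>/hdfs/<day>). The day list is
# an example; IF NOT EXISTS makes the statement safe to rerun.
add_partition_stmt() {
  printf "ALTER TABLE ranger_audit.hdfs_audit ADD IF NOT EXISTS PARTITION (evt_time='%s') LOCATION '/ranger/audit/hdfs/%s/hdfs/%s';\n" "$1" "$1" "$1"
}

for d in 20160601 20160602 20160603; do
  add_partition_stmt "$d"
done
```

The generated statements could then be saved to a file and run with `beeline -f`.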
11-10-2016 12:59 AM

I need some help creating a Hive table for the format below. This is Ranger audit info; I will create partitions accordingly. The info below is just one line:

```
{"repoType":1,"repo":"abc_hadoop","reqUser":"ams","evtTime":"2016-09-19 13:14:40.197","access":"READ","resource":"/ambari-metrics-collector/hbase/data/hbase/meta/1588230740/info/ed3e52d8b86e4800801539fc4a7b1318","resType":"path","result":1,"policy":41,"reason":"/ambari-metrics-collector/hbase/data/hbase/meta/1588230740/info/ed3e52d8b86e4800801539fc4a7b1318","enforcer":"ranger-acl","cliIP":"123.129.390.140","agentHost":"hostname.sample.com","logType":"RangerAudit","id":"94143368-600c-44b9-a0c8-d906b4367537","seq_num":1240883,"event_count":1,"event_dur_ms":0}
```

Labels:
- Apache Hive
- Apache Ranger
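One way to start on the column list for such a table is to pull the top-level keys out of a sample line. A crude sketch that only works for flat JSON like this audit record (no nested objects, no commas inside values); where `jq` is available, `jq -r 'keys_unsorted[]'` does this properly. The sample object here is a shortened version of the record above:

```shell
# Sketch: list the top-level keys of a flat JSON object so they can be turned
# into Hive columns. This is a crude tr/sed approach and assumes no nested
# structures and no commas inside values; prefer jq where it is installed.
SAMPLE='{"repoType":1,"repo":"abc_hadoop","reqUser":"ams","access":"READ"}'
KEYS=$(printf '%s' "$SAMPLE" | tr ',' '\n' | sed -E 's/^\{?"([^"]+)".*/\1/')
printf '%s\n' "$KEYS"
```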
09-21-2016 12:35 PM

@Michael Young Thank you for your response, but I do not want to save my password in plain text, even for a short period of time.

Thanks,
Raj
09-16-2016 05:39 PM

2 Kudos

The goal is to connect through Beeline and pass the password as a variable. It works fine, but the password can be seen in plain text if you run `ps -ef | grep hive`.

```
[root@sandbox ~]# beeline -u 'jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -n rajesh -w pass
WARNING: Use "yarn jar" to launch YARN applications.
Connecting to jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connected to: Apache Hive (version 1.2.1000.2.5.0.0-817)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-817)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.5.0.0-817 by Apache Hive
0: jdbc:hive2://sandbox.hortonworks.com:2181/>
```

Labels:
- Apache Hive
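Since the point of `-w` is to keep the password off the command line, the password file itself should be locked down too. A small sketch of creating it with owner-only permissions; a temp file stands in for the real path (e.g. something like `~/.beeline_pass`), and `stat -c` assumes GNU coreutils:

```shell
# Sketch: create the Beeline password file with mode 600 so other users cannot
# read the password from it (and it never appears on the command line, so it
# is not visible in ps output either). The temp file is a stand-in for a real
# path; 'example-password' is obviously a placeholder.
PWFILE=$(mktemp)
printf '%s' 'example-password' > "$PWFILE"
chmod 600 "$PWFILE"

# The connection would then be:
#   beeline -u '<jdbc-url>' -n rajesh -w "$PWFILE"
MODE=$(stat -c '%a' "$PWFILE")
echo "$MODE"
```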
08-31-2016 01:05 PM

Save the password in a text file and connect as below. :)

```
[root@sandbox ~]# beeline -u 'jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' -n rajesh -w pass
WARNING: Use "yarn jar" to launch YARN applications.
Connecting to jdbc:hive2://sandbox.hortonworks.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connected to: Apache Hive (version 1.2.1000.2.5.0.0-817)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-817)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1000.2.5.0.0-817 by Apache Hive
0: jdbc:hive2://sandbox.hortonworks.com:2181/>
```
08-25-2016 06:34 PM

I'm planning to access HBase using Phoenix. I'm able to access it with no security, meaning it is not authenticating. I want the JDBC connection to go through Knox so it will authenticate against AD.

The current URL is jdbc:phoenix:abcdef.abc.com:2181:/hbase-unsecure

Labels:
- Apache Knox
- Apache Phoenix
07-27-2016 02:10 PM

@Kuldeep Kulkarni I got the same error message. The only differences are that my environment is Kerberized, and my RMs are not both in standby mode:

```
[yarn@m1 root]$ yarn rmadmin -getServiceState rm1
standby
[yarn@m1 root]$ yarn rmadmin -getServiceState rm2
active
```

Ambari doesn't show the state of the RMs, but I'm getting the same exception as above. I tried to switch the roles and that did not help. Any help is appreciated.