Member since: 12-09-2015
Posts: 115
Kudos Received: 43
Solutions: 12

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6339 | 07-10-2017 09:38 PM
 | 3917 | 04-10-2017 03:24 PM
 | 670 | 03-04-2017 04:08 PM
 | 2325 | 02-17-2017 10:42 PM
 | 3430 | 02-17-2017 10:41 PM
05-29-2020
10:11 AM
Hi, has anyone gotten this running, and can you post a working example? Thanks, Marcel
01-02-2018
11:42 AM
1 Kudo
@Raja Sekhar Chintalapati, the command below should do it:

hdfs dfs -ls -R / | awk '{ if ( $3 == "spark" && substr($0,0,1) != "d" ) { print $8 } }' | xargs hdfs dfs -rm

In the command above, "spark" is the user name; replace it with your username. I also used '/' as the path; if you want to delete files only in a certain directory, replace '/' with that directory. This removes only the files owned by the user, not the directories. Thanks, Aditya
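To preview what would be deleted before running the removal, you can drop the xargs stage; this is just the listing half of the same pipeline:

hdfs dfs -ls -R / | awk '{ if ( $3 == "spark" && substr($0,0,1) != "d" ) { print $8 } }'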
08-10-2017
07:42 PM
@Raja Sekhar Chintalapati Here is your question, already answered on our HCC forum.
05-15-2018
08:21 AM
Hi, is there a workaround for the above, or is this the expected behaviour of 2.6.3 and above?
05-16-2017
08:40 PM
@Raja Sekhar Chintalapati
You can; you just need to toggle the button (see the attached screenshot). Here you have two storage destinations, Solr and HDFS. To use Solr, the default option since HDP 2.5, you need to store the configuration in ZooKeeper (at least 3 nodes for production, though one is enough for test purposes), so set that up before activating. Go to Ambari UI --> Ranger --> Configs --> Ranger Audit. Once you log on to Ranger afterwards, you can enable all the plugins from Ambari UI --> Ranger --> Configs --> Ranger Plugin. Cheers
08-14-2018
06:28 AM
What changes did you have to make in the /etc/hosts file on the RegionServer/Master?
03-22-2017
11:46 AM
@pbarna I tested Firefox and I did make the changes you specified. The issue here is not the error message; it is how we can make HTTP authentication against AD credentials work to access any UI. I was able to access the UI when I logged in with local credentials; I want the same when I log in with my AD credentials on a domain-joined PC.
03-23-2017
08:30 PM
1 Kudo
You can log in to the Resource Manager UI at port 8088 and get the application ID specific to the submitted job, then run the following command to see the log info:

yarn logs -applicationId <application_id> | more
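If you prefer to stay on the command line rather than the Resource Manager UI, yarn application -list prints the running applications with their IDs; the grep filter here is just an illustrative convenience:

yarn application -list 2>/dev/null | grep "application_"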
03-05-2017
07:51 PM
I followed the document below and am now able to see the status of the NN & RM. http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Ambari_Security_Guide/content/_set_up_kerberos_for_ambari_server.html
03-06-2017
10:38 AM
You also must set network.auth.use-sspi = false in Firefox's about:config to enable Kerberos. But most likely it still won't work, because Windows doesn't know that your Oozie server etc. belong to another realm. Therefore, install the MIT Kerberos client for Windows (details on how to install it here), then copy krb5.conf from your cluster to "C:\Program Files\MIT\Kerberos\krb5.ini". Then, unlike in that article, change krb5.ini: set your default realm to your AD realm, and in the domain_realm section list all cluster master node FQDNs and map them to your HDP realm. After that, restart your PC and try to access the Oozie Web UI. In the Kerberos Ticket Manager you can see which principals have been contacted and verify that your cluster masters are in the right realm.
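A minimal sketch of the relevant krb5.ini sections described above, where AD.EXAMPLE.COM, HDP.EXAMPLE.COM, and the master host names are all placeholders for your own values:

[libdefaults]
  default_realm = AD.EXAMPLE.COM

[domain_realm]
  master1.hdp.example.com = HDP.EXAMPLE.COM
  master2.hdp.example.com = HDP.EXAMPLE.COM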
03-19-2018
06:45 AM
@Allen Wood Sorry, you are going against the rules by publishing links to exam dumps. Please desist from this practice in the future.
03-05-2017
02:21 PM
This should have been created for you automatically if you entered CHRSV@COM in the "Additional Realms" box on the Configure Identities page of the Enable Kerberos Wizard. Assuming you didn't do this, how was the krb5.conf file set up to acknowledge the trusted realm?
03-03-2017
08:18 PM
Centrify Express is the best option available, and the good part is that it is free.
02-21-2017
09:51 AM
Hi @Raja Sekhar Chintalapati, the documentation you mention is not specific to an MIT KDC; it is also perfectly valid when using Active Directory. You just need to set hadoop.http.authentication.kerberos.principal and hadoop.http.authentication.cookie.domain according to your specific settings. These parameters are used to allow SPNEGO authentication when accessing some UIs of your cluster. Note that, in some cases, configuration on your browser might be required to use the tickets of your workstation. Examples: https://community.hortonworks.com/articles/28537/user-authentication-from-windows-workstation-to-hd.html https://community.hortonworks.com/articles/76873/configure-mac-and-firefox-to-access-hdphdf-spnego.html Hope this helps.
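As a hedged sketch, those two properties typically end up looking something like the following; the realm and the cookie domain below are placeholders, not values from this thread:

hadoop.http.authentication.kerberos.principal=HTTP/_HOST@AD.EXAMPLE.COM
hadoop.http.authentication.cookie.domain=example.com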
02-21-2017
03:03 AM
@Raja Sekhar Chintalapati Of course, if you set up a KDC the keytabs are valid, but you need to grab a valid one to proceed!

List all valid keytabs:

$ ls /etc/security/keytabs

List valid principals for a keytab:

$ klist -kt /etc/security/keytabs/hive.service.keytab
Keytab name: FILE:/etc/security/keytabs/hive.service.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 02/02/17 23:00:12 hive/Ambari-Host_name@YOUR_REALM.COM
   1 02/02/17 23:00:12 hive/Ambari-Host_name@YOUR_REALM.COM

Grab a valid ticket:

$ kinit -kt /etc/security/keytabs/hive.service.keytab hive/Ambari-Host_name@YOUR_REALM.COM

Check its validity:

$ klist
Ticket cache: FILE:/tmp/krb5cc_504
Default principal: hive/Ambari-Host_name@YOUR_REALM.COM
Valid starting    Expires           Service principal
02/10/17 01:32:45 02/11/17 01:32:45 krbtgt/YOUR_REALM.COM@YOUR_REALM.COM
        renew until 02/10/17 01:32:45

This would have been the correct connect string if you had a valid ticket (note the quotes, since the URL contains a semicolon):

beeline -u "jdbc:hive2://hiveServer2_hostname:10000/;principal=hive/hiveServer2_hostname@YOUR_REALM.COM"

With the above you should successfully log on and execute your HQL.
02-18-2017
02:03 AM
You don't need any principals/keytabs for the Oozie database; the only principal you need for Oozie is oozie/oozie_server_fqdn@REALM, and it will be created by the Kerberos wizard you select to use.
02-08-2017
08:51 PM
Soon, with the application lifetime feature (https://issues.apache.org/jira/browse/YARN-3813), you won't have to write such scripts: you will simply set the lifetime of the app to 20 mins at creation, and it will kill itself at the 20-minute mark. Until then, something like the script below might help (assuming you have access to the yarn command line). It can easily be enhanced to run as a cron job if required, say triggering every minute, looking for specific apps that have crossed a certain lifetime, and killing them. Hope this helps.

#!/bin/bash
if [ "$#" -lt 2 ]; then
  echo "Usage: $0 <app_id> <max_life_in_mins>"
  exit 1
fi
# A Finish-Time of 0 means the application is still running.
finish_time=`yarn application -status $1 2>/dev/null | grep "Finish-Time" | awk '{print $NF}'`
if [ "$finish_time" -ne 0 ]; then
  echo "App $1 is not running"
  exit 1
fi
# Start-Time is reported in milliseconds; the sed appends "/1000" so bc
# converts it to seconds before subtracting it from the current time.
time_diff=`date +%s`-`yarn application -status $1 2>/dev/null | grep "Start-Time" | awk '{print $NF}' | sed 's!$!/1000!'`
time_diff_in_mins=`echo "("$time_diff")/60" | bc`
echo "App $1 is running for $time_diff_in_mins min(s)"
if [ $time_diff_in_mins -gt $2 ]; then
  echo "Killing app $1"
  yarn application -kill $1
else
  echo "App $1 should continue to run"
fi
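Example usage, assuming the script above is saved as kill_long_running_app.sh (the file name and the application ID below are hypothetical); this kills the app if it has been running for more than 20 minutes:

chmod +x kill_long_running_app.sh
./kill_long_running_app.sh application_1488888888888_0001 20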
01-29-2018
09:02 PM
Remember: if you have Kafka, you need to change Config -> Kafka Brokers -> listeners back to PLAINTEXT://localhost:6667 (from PLAINTEXTSASL://localhost:6667).
03-10-2017
04:55 PM
Hello, I'm having the same issue 😞 I found this in the Hive metastore log:

2017-03-10 16:50:52,164 INFO [main]: zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=hive

No idea where this is coming from, though. All tips appreciated!
Peter
02-17-2017
10:42 PM
This is because MySQL is external to Ambari: when Kerberos is enabled, Ambari is not smart enough to recognize MySQL, so it did not create keytabs for it. That was the reason Hive was not able to start. I still need to find a way to create keytabs for non-Ambari components; for now I have moved these components to another server where all the services were deployed through Ambari. Thanks to all for your help so far.
02-02-2017
06:10 PM
@SBandaru @Prabhu M, a question, guys: what is the best sequence to Kerberize the env with a local KDC for service principals (SP) and AD for user principals (UP), with no SSL? As of now I have the env up and running with no Kerberization.
01-30-2017
07:30 PM
Ambari should properly Kerberize your cluster. Did you restart all the affected services after enabling Kerberos?
12-02-2016
02:17 PM
1 Kudo
I was able to create a Hive table on top of the JSON files. Below is the syntax I used to create the external table, so I do not have to move the data; all I need to do is add partitions.

CREATE EXTERNAL TABLE hdfs_audit(
  access string, agenthost string, cliip string, enforcer string,
  event_count bigint, event_dur_ms bigint, evttime timestamp, id string,
  logtype string, policy bigint, reason string, repo string, repotype bigint,
  requser string, restype string, resource string, result bigint, seq_num bigint)
PARTITIONED BY (evt_time string)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://CLUSTERNAME/ranger/database/hdfs';

Add a partition:

ALTER TABLE ranger_audit.hdfs_audit ADD PARTITION (evt_time='20160601')
LOCATION '/ranger/audit/hdfs/20160601/hdfs/20160601';
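Once a partition is added, a quick smoke test from the shell might look like the following; the HiveServer2 host in the JDBC URL is a placeholder and the count query is only illustrative:

beeline -u "jdbc:hive2://hiveserver2_host:10000/ranger_audit" -e "SELECT count(*) FROM hdfs_audit WHERE evt_time='20160601';"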
11-05-2018
06:42 PM
@Raja Sekhar Chintalapati: Can you please tell me if you found a solution to this problem?
01-04-2017
08:39 PM
@Sergey Soldatov or @Raja Sekhar Chintalapati Do you know if this is still the case? Are there any plans for this in the future? Thanks! It looks like the JIRA mentioned here is resolved.
07-20-2016
06:09 PM
@Kuldeep Kulkarni great stuff. I find myself getting this confused as well.
09-06-2017
10:58 AM
1 Kudo
Symptom: Not able to use the reflect function via beeline, but the query works OK from the Hive CLI. Error message:

Error while compiling statement: FAILED: SemanticException UDF reflect is not allowed (state=42000,code=40000)

Cause: Running set hive.server2.builtin.udf.blacklist from beeline returns the following as blacklisted:

0: jdbc:hive2://localhost:10000/default> set hive.server2.builtin.udf.blacklist;
+------------------------------------------------------------------+--+
|                                set                               |
+------------------------------------------------------------------+--+
| hive.server2.builtin.udf.blacklist=reflect,reflect2,java_method  |
+------------------------------------------------------------------+--+

The reflect UDF is blacklisted by default when running queries through HiveServer2 (beeline, ODBC, JDBC connections), as it was found to be a security risk. The code was modified so that if the parameter hive.server2.builtin.udf.blacklist has not been configured, or is blank, its default value becomes "reflect,reflect2,java_method".

Resolution:
1. Open the Ambari UI.
2. Add the custom property hive.server2.builtin.udf.blacklist under Hive / Configs / Advanced / Custom hive-site and give it any value, for example "empty_blacklist".
3. Restart services as requested by Ambari.
4. Connect again with beeline and verify that the blacklist now only includes the dummy value:

0: jdbc:hive2://localhost:10000/default> set hive.server2.builtin.udf.blacklist;
+------------------------------------------------------+--+
|                         set                          |
+------------------------------------------------------+--+
| hive.server2.builtin.udf.blacklist=empty_blacklist   |
+------------------------------------------------------+--+

5. Reflect should now work without issues.
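As a quick check of step 5, a one-liner from the shell; the JDBC URL is a placeholder, and reflect here simply calls java.lang.String.valueOf:

beeline -u "jdbc:hive2://localhost:10000/default" -e "SELECT reflect('java.lang.String', 'valueOf', 1);"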
07-27-2016
02:10 PM
@Kuldeep Kulkarni I got the same error message. The only difference is that my env is Kerberized, and my RMs are not both in standby mode:

[yarn@m1 root]$ yarn rmadmin -getServiceState rm1
standby
[yarn@m1 root]$ yarn rmadmin -getServiceState rm2
active

Ambari doesn't show the state of the RMs, but I am getting the same exception as above. I tried to switch the roles and that did not help. Any help is appreciated.
03-18-2016
09:10 PM
3 Kudos
It was an issue with the metadata: we just dropped and recreated the table, and all is well.
02-05-2016
12:35 AM
@Neeraj Sabharwal Yeah, we did follow up with support and they said it is a known issue; I posted the comments below. All we need to do is change the DB engine for all the tables that are MyISAM to InnoDB. Thanks for your response though.
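A minimal sketch of that engine conversion, assuming the MyISAM tables live in a database named hive on the local MySQL instance (the database name and credentials are placeholders). The query only generates the ALTER statements from information_schema, so you can review them before executing:

mysql -u root -p -N -e "SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;') FROM information_schema.tables WHERE engine = 'MyISAM' AND table_schema = 'hive';"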