Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3365 | 05-03-2017 05:13 PM
 | 2796 | 05-02-2017 08:38 AM
 | 3076 | 05-02-2017 08:13 AM
 | 3006 | 04-10-2017 10:51 PM
 | 1517 | 03-28-2017 02:27 AM
02-01-2017
02:13 PM
@Karan Alang you still need to provide an explicit policy even though Ranger for HBase is enabled; once you run an explicit grant/revoke, it will be propagated to Ranger. Please see our doc and make sure your HBase service is configured correctly: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/ch03s02s04s02.html Also double-check the Ranger HBase plugin settings: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/hbase_plugin_kerberos.html
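For example, a minimal grant/revoke round-trip from the HBase shell might look like this (the user 'analyst' and table 'sales' are placeholders, not names from your cluster):
hbase shell
# grant Read/Write/eXecute/Create/Admin on a table
grant 'analyst', 'RWXCA', 'sales'
# confirm the ACL, then watch the matching policy show up in Ranger
user_permission 'sales'
# revoke propagates to Ranger the same way
revoke 'analyst', 'sales'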
02-01-2017
02:08 PM
@Ashnee Sharma here you go: http://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.3.1/bk_user-guide/content/activity_explorer.html Go through each page; it specifies the level of detail per component. In the next release, we're going to add cluster capacity projections and more extensive analysis of components. If this answers your question, please accept the answer as best.
02-01-2017
12:23 PM
Did you configure Ambari for Kerberos and then restart Ambari as well? See http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/ch_enable_spnego_auth_for_hadoop.html Also run a service check on Ambari Metrics. I also recommend upgrading Ambari to 2.4.2, but resolve the immediate issue first.
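For reference, restarting the server after the Kerberos changes is just the following (assuming a standard install with root or sudo access):
sudo ambari-server restart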
02-01-2017
12:14 PM
Do you have all of the clients installed on the host? Check the host page and confirm. What is the output of the following commands?
hdp-select atlas
hdp-select ranger
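If those return nothing useful, two broader checks may help; this is a sketch assuming a standard HDP layout with hdp-select on the PATH:
# list all HDP versions registered on the host
hdp-select versions
# show the current version for every component, filtered to atlas/ranger
hdp-select status | grep -i -e atlas -e ranger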
02-01-2017
12:07 PM
You should review the following article, although it's for RHEL: https://community.hortonworks.com/articles/40126/hdp-upgrade-using-reinstallation.html Also review our documentation for any SLES nuances: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-upgrade/content/preparing_to_upgrade_ambari.html We recommend upgrading to Ambari 2.4.2, no less. Ultimately, your upgrade is a bit complicated, and I would recommend involving HWX professional services rather than going at it yourself.
02-01-2017
03:25 AM
@boyer if that answers your question, please accept it as the best answer.
02-01-2017
03:21 AM
1 Kudo
@Vaibhav Kumar
The recommendations from my colleagues are valid: you have strings in the header row of your CSV documents. You can certainly filter by some known entity, but there's a more advanced version of the CSV Pig loader called CSVExcelStorage. It is part of the Piggybank library that comes bundled with HDP, hence the register command below. You can pass different control parameters to it. The Mortar blog is an excellent source of information on working with Pig: http://help.mortardata.com/technologies/pig/csv
grunt> register /usr/hdp/current/pig-client/piggybank.jar;
grunt> a = load 'BJsales.csv' using org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'NOCHANGE', 'SKIP_INPUT_HEADER') as (Num:int, time:int, BJsales:float);
grunt> describe a;
a: {Num: int,time: int,BJsales: float}
grunt> b = limit a 5;
grunt> dump b;
Output:
(1,1,200.1)
(2,2,199.5)
(3,3,199.4)
(4,4,198.9)
(5,5,199.0)
Notice I am not filtering any relation; I'm telling the loader to skip the header outright. It saves a few keystrokes and doesn't waste cycles processing anything extra.
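For comparison, the filter-based approach mentioned above would look roughly like this (a sketch that assumes the header row's first field is the literal string 'Num'):
grunt> raw = load 'BJsales.csv' using PigStorage(',') as (Num:chararray, time:chararray, BJsales:chararray);
grunt> no_header = filter raw by Num != 'Num';
grunt> b2 = foreach no_header generate (int)Num, (int)time, (float)BJsales;
It works, but it evaluates a comparison on every record just to drop one line, which is exactly what SKIP_INPUT_HEADER avoids.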
02-01-2017
02:49 AM
@vamsi valiveti you need to escape the parentheses with double backslashes: one backslash escapes the regex metacharacter, and the second escapes the backslash itself inside the Pig string literal.
grunt> a = load 'data' using PigStorage(',');
grunt> b = filter a by ($1 matches '{\\(\\)}');
2017-02-01 02:45:07,159 [main] WARN org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_CHARARRAY 1 time(s).
grunt> dump b;
Output:
Output(s):
Successfully stored 1 records (17 bytes) in: "hdfs://sandbox.hortonworks.com:8020/tmp/temp-1129941617/tmp-1428622787"
2017-02-01 02:49:30,801 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2017-02-01 02:49:30,811 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2017-02-01 02:49:30,811 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
(Gietz,{()})
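As an aside, if the double backslashes feel error-prone, character classes avoid the escaping entirely, since parentheses and braces are literal inside brackets in Java regex. An equivalent sketch:
grunt> b2 = filter a by ($1 matches '[{][(][)][}]');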
02-01-2017
01:58 AM
Absolutely, @Dayou Zhou. Please accept it as the best answer if it helped. Thanks!
02-01-2017
01:57 AM
@Karan Alang you need to disable the global allow policy and grant permissions per table. Please review the section on HBase in our tutorial; it explains it well: http://hortonworks.com/hadoop-tutorial/manage-security-policy-hive-hbase-knox-ranger/#hbase-grant-revoke If any of the answers helped, please close the thread by accepting the best answer.