Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2731 | 10-18-2017 10:19 PM |
 | 3086 | 10-18-2017 09:51 PM |
 | 12255 | 09-21-2017 01:35 PM |
 | 1009 | 08-04-2017 02:00 PM |
 | 1311 | 07-31-2017 03:02 PM |
10-05-2016
07:04 PM
2 Kudos
@Peter Coates Views in Hive today are purely logical, which means no physical data is lying around for a view (at least not today). When you create a view, all you are really doing is making it easier to write future queries on top of it, or in your case creating views to help with compliance and policies. Once a view is created, you can create access policies on that view controlling who should have access to it; this is in addition to any policies you may have at the table level. Of course, if someone already has access to T1 and T2, then restricting permissions on the view is fairly meaningless. In short, no data is left lying around for a view after a query completes (almost). I have seen a scenario where temp files created by Hive during a query were not deleted because the query failed. Check this link. The following link should answer your question in more detail. https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization
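As a minimal sketch of what that looks like (the table, view, and column names below are made up for illustration):

```bash
# Create a purely logical view over an existing table and query it.
# No data files are written on disk for the view itself.
hive -e "
CREATE VIEW compliant_customers AS
SELECT id, name FROM customers WHERE consent_flag = 'Y';

SELECT * FROM compliant_customers LIMIT 10;
"
```

A Ranger policy can then grant SELECT on compliant_customers without granting anything on the underlying customers table.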
10-05-2016
06:43 PM
2 Kudos
@Chris Colvin Please see the following link on how to use Ranger to give access to the user admin (or a group that admin belongs to). To make it work, I would start by giving all access and then restrict it step by step, watching how the behavior changes, until the user has the most restrictive access that still lets them do the job. https://cwiki.apache.org/confluence/display/RANGER/Apache+Ranger+0.5+-+User+Guide#ApacheRanger0.5-UserGuide-AddingHIVEpolicies
09-30-2016
04:50 AM
@hitaay Can you please elaborate on your question? You can simply use HDFS snapshots to create point-in-time backups. Here is a link on snapshots. If this is not what you are looking for, please elaborate.
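A quick sketch of the commands involved (the directory and snapshot name are just examples):

```bash
# Enable snapshots on the directory (an HDFS admin operation), then take a point-in-time snapshot.
hdfs dfsadmin -allowSnapshot /data/warehouse
hdfs dfs -createSnapshot /data/warehouse backup-2016-09-30

# The snapshot contents stay readable under the hidden .snapshot directory:
hdfs dfs -ls /data/warehouse/.snapshot/backup-2016-09-30
```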
09-29-2016
03:45 PM
@Eric Periard You cannot compress pre-existing data simply by enabling compression, and it is my understanding that you cannot compress existing data in place. The way to do this is to rewrite the existing data, which will create new compressed files, and then delete the original uncompressed files.
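One common way to do the rewrite is through Hive, assuming the data already sits in a Hive table; the table names and codec below are only an example:

```bash
# Rewrite an uncompressed table into a new table whose output files are compressed.
hive -e "
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;

CREATE TABLE events_compressed LIKE events;
INSERT OVERWRITE TABLE events_compressed SELECT * FROM events;
"
# Once the new files are verified, drop the original table (or delete the original files).
```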
09-26-2016
04:19 PM
@Landon Robinson Is your query wrapped in quotes? If not, can you try that (single quotes are fine)? If you wrap it in double quotes, you will need to use "\$CONDITIONS" instead of just "$CONDITIONS" so that your shell doesn't expand it as a shell variable.
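For example (the connection details and table are placeholders), the free-form query can be single-quoted so the shell leaves $CONDITIONS alone:

```bash
# Single quotes keep $CONDITIONS literal so Sqoop can substitute its split predicates:
sqoop import \
  --connect jdbc:mysql://db-host/sales \
  --username etl -P \
  --query 'SELECT * FROM orders WHERE $CONDITIONS' \
  --split-by order_id \
  --target-dir /data/orders

# If you prefer double quotes, escape the dollar sign so the shell does not expand it:
#   --query "SELECT * FROM orders WHERE \$CONDITIONS"
```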
09-21-2016
02:10 PM
1 Kudo
@Arkaprova Saha The reason I think you are not able to import data in Parquet format is that Parquet is not supported by this driver. Please see the following link for supported formats, which include Avro and RCFile among others but do not include Parquet (check section 1.2.1). https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.6/bk_HortonworksConnectorForTeradata/content/ch_HortonworksConnectorForTeradata.html
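If Avro works for your use case, something along these lines should be accepted (the connection details are hypothetical and I have not verified them against your environment):

```bash
# Import the table as Avro data files instead of Parquet.
sqoop import \
  --connect jdbc:teradata://td-host/Database=sales \
  --username etl -P \
  --table ORDERS \
  --as-avrodatafile \
  --target-dir /data/orders_avro
```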
09-19-2016
01:49 PM
@ssainath This is usually a pom issue where some jar is missing; I am assuming that's not your case. How do you run the app? Do you run it using "java -jar ....." or "hadoop jar...."? You should be running it as "hadoop jar.....".
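For example (the jar name, driver class, and paths are placeholders):

```bash
# Launch through the hadoop wrapper so the cluster configuration and Hadoop jars
# end up on the classpath; running the same jar with "java -jar" typically fails
# with missing-class errors.
hadoop jar my-mr-app.jar com.example.MyDriver /input/path /output/path
```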
09-16-2016
06:09 AM
Yes. That's how I did it. I am pretty confident that it should work.
09-15-2016
11:19 PM
1 Kudo
@David Bozentka Almost three years ago I used to run a cluster of five VMs from a USB drive. Not only did I not run into any issues, I remember sharing the images with my coworkers. Is that how you are trying to run it with VirtualBox? If yes, it should work.
09-14-2016
07:28 PM
@Josh Persinger Maybe someone else can confirm, but I think you are storing the data in text format, so it is stored as a string. When the data is loaded into memory it will be loaded as a decimal; the on-disk representation varies by file format. Try creating the data in Avro or ORC format and see if it retains the DECIMAL data type.
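A small sketch to test that (the table and column names are made up):

```bash
# Declare the same DECIMAL column in a text-backed and an ORC-backed table.
# The ORC writer stores a typed decimal value, while the text table just stores characters.
hive -e "
CREATE TABLE amounts_text (amt DECIMAL(10,2)) ROW FORMAT DELIMITED STORED AS TEXTFILE;
CREATE TABLE amounts_orc  (amt DECIMAL(10,2)) STORED AS ORC;

INSERT INTO TABLE amounts_orc SELECT amt FROM amounts_text;
DESCRIBE amounts_orc;
"
```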