Member since: 03-07-2019
Posts: 333
Kudos Received: 17
Solutions: 9
My Accepted Solutions
Views | Posted
---|---
454 | 07-31-2024 11:57 AM
1757 | 03-27-2019 04:52 AM
6812 | 11-21-2018 10:21 PM
13281 | 09-14-2016 07:35 PM
11245 | 07-01-2016 06:56 PM
07-01-2016
06:56 PM
@Johnny Fugers From what I understand, you have a set of files inside an HDFS directory that is split by date, so the directory structure looks something like this:

/user/test/data/2016-03-12/<FILES>
/user/test/data/2016-03-13/<FILES>
/user/test/data/2016-03-14/<FILES>

If so, you will not be able to create a partitioned table on top of it directly. What you can do instead is create a regular table with its location pointing to /user/test/data and set the properties below; Hive will then read all the files inside the subdirectories and return the full result set.

set hive.input.dir.recursive=true;
set hive.mapred.supports.subdirectories=true;
set hive.supports.subdirectories=true;
set mapreduce.input.fileinputformat.input.dir.recursive=true;
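For illustration, here is a minimal sketch of that approach, run from a host with the Hive client installed; the table name, columns, and field delimiter are hypothetical and must be adjusted to match the actual files under /user/test/data.

```bash
#!/bin/bash
# Sketch only: non-partitioned external table over /user/test/data, with the
# recursive-subdirectory settings from the reply above applied for the session.

hive -e "
SET hive.input.dir.recursive=true;
SET hive.mapred.supports.subdirectories=true;
SET hive.supports.subdirectories=true;
SET mapreduce.input.fileinputformat.input.dir.recursive=true;

-- hypothetical schema; change the columns and delimiter to match your files
CREATE EXTERNAL TABLE IF NOT EXISTS test_data (
  id    INT,
  value STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/test/data';

-- rows from every dated subdirectory under /user/test/data are now returned
SELECT COUNT(*) FROM test_data;
"
```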
06-07-2016
10:06 PM
@Tim Dobeck You can look at http://gethue.com/hue-3-7-with-sentry-app-and-new-search-widgets-are-out/, which has information on how to perform the install, along with the tarball for 3.7.
06-07-2016
04:22 PM
3 Kudos
@Sagar Shimpi By default, shell actions are not allowed to run as another user, since sudo is blocked. If you want a YARN application to run as someone other than the yarn user (i.e. as the submitter), you need to enable the LinuxContainerExecutor so that the containers are started as the submitting user. Also note the settings below, which need to be changed as well to achieve this:

- With yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users=true (the default), containers run as yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user (default 'nobody').
- With yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users=false, containers run as the user who submitted the workflow.

That said, there are known issues where this does not work as expected:
https://issues.apache.org/jira/browse/YARN-2424
https://issues.apache.org/jira/browse/YARN-3462

The suggestion I can make for now is to add a line to the shell action that changes the ownership of the file it creates.
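As an illustration of that workaround, here is a minimal sketch of an Oozie shell action script that hands the created file back to the submitting user; the paths, file name, and user name are hypothetical, and note that hdfs dfs -chown normally requires HDFS superuser rights, so relaxing the permissions may be the more practical fallback.

```bash
#!/bin/bash
# Hypothetical Oozie shell action script: the container runs as 'yarn' (or 'nobody'),
# so anything it writes is owned by that user unless ownership is changed at the end.

OUTPUT_DIR=/tmp/shell-action-demo      # hypothetical HDFS output location
SUBMIT_USER=workflow_user              # hypothetical user who submitted the workflow

# Produce the output file and push it to HDFS
echo "produced on $(hostname) at $(date)" > result.txt
hdfs dfs -mkdir -p "${OUTPUT_DIR}"
hdfs dfs -put -f result.txt "${OUTPUT_DIR}/result.txt"

# Hand the file back to the submitting user; if chown is not permitted
# (it needs HDFS superuser rights), fall back to opening up the permissions.
hdfs dfs -chown "${SUBMIT_USER}" "${OUTPUT_DIR}/result.txt" \
  || hdfs dfs -chmod 666 "${OUTPUT_DIR}/result.txt"
```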
05-31-2016
08:38 PM
@sankar rao You may want to visit these links, which document the different authorization modes: Hive Authorization and Storage Based Authorization in the Metastore Server.
05-12-2016
09:03 PM
@Sushil Saxena It seems likely that there are multiple versions of the Hive jars, i.e. Hive jars from different HDP releases, in the Oozie sharelib location. Was this cluster upgraded from a 2.1.x/2.2.x release to 2.3.4? One thing I can suggest is to perform the following, provided you have not copied third-party jars (such as the Oracle or MySQL JDBC jars) into the Oozie sharelib location; if you have, you will need to copy all of those third-party jars to HDFS again afterwards. From the Oozie server host, as the oozie user:

1. hdfs dfs -rm -r /user/oozie/share/lib  (removes the lib folder completely)
2. cd /usr/hdp/current/oozie-client/bin
3. Run "./oozie-setup.sh sharelib create -fs <get the fs.defaultFS from core-site.xml>"
4. Restart the Oozie service.

This should help address the issue.
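For convenience, the same steps as a single sketch; <fs.defaultFS> is a placeholder that must be replaced with the actual value from core-site.xml.

```bash
#!/bin/bash
# Condensed sketch of the sharelib rebuild above.
# Run on the Oozie server host as the oozie user; <fs.defaultFS> is a placeholder.

# 1. Remove the existing sharelib completely
hdfs dfs -rm -r /user/oozie/share/lib

# 2-3. Recreate the sharelib from the locally installed Oozie client
cd /usr/hdp/current/oozie-client/bin
./oozie-setup.sh sharelib create -fs <fs.defaultFS>

# 4. Restart the Oozie service (e.g. through Ambari), then re-upload any
#    third-party jars (MySQL/Oracle JDBC drivers, etc.) if you had them in the sharelib.
```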
05-12-2016
04:07 PM
@simran kaur, It seems this is a duplicate of https://community.hortonworks.com/questions/30437/error-in-oozie-class-orgapacheoozieactionhadoopsqo.html#answer-33008, where you were able to get past the issue. If so, please update the solution here and close this thread.
05-12-2016
04:05 PM
@simran kaur, There are a couple of ways to address this. One is what @rbiswas mentioned; the other is to create a lib folder under the directory where the workflow resides in HDFS and place the MySQL connector jar there. For example, if your workflow is located in HDFS at /user/simran/sqoopwf/workflow.xml, take the path /user/simran/sqoopwf/ and create a lib folder there (hdfs dfs -mkdir -p /user/simran/sqoopwf/lib), then place the MySQL connector jar in that location (hdfs dfs -put <mysql-connector-java-version.jar> /user/simran/sqoopwf/lib/). Then kick off the Oozie job again, which should work.
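A short sketch of that sequence, using the workflow path from the example above; the connector jar version and the Oozie server host are placeholders.

```bash
#!/bin/bash
# Sketch only: per-workflow lib folder so Oozie puts the JDBC driver on the action classpath.

WF_DIR=/user/simran/sqoopwf                      # HDFS directory that contains workflow.xml

# Create the lib folder next to workflow.xml and upload the MySQL JDBC driver into it
hdfs dfs -mkdir -p "${WF_DIR}/lib"
hdfs dfs -put mysql-connector-java-<version>.jar "${WF_DIR}/lib/"

# Re-run the workflow; jars under <workflow dir>/lib are added to the action classpath automatically
oozie job -oozie http://<oozie-host>:11000/oozie -config job.properties -run
```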
05-11-2016
11:06 PM
@Chris McGuire I'm not sure whether this would work with the saveAsTable command, since I have very limited to no knowledge of Spark. I'm hoping that this property works for the Spark streaming job as well.
05-11-2016
09:29 PM
@Mamta Chawla, Hive does not do data validation against the column definitions. It is the user's responsibility to check the data and see whether it matches the table being created. Maybe this link can help you with what you are looking for: https://community.hortonworks.com/articles/1283/hive-script-to-validate-tables-compare-one-with-an.html
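A small illustration of that schema-on-read behaviour: Hive accepts whatever is in the files, and a field that does not match the declared column type is simply returned as NULL rather than rejected. The table name, location, and sample data here are hypothetical.

```bash
#!/bin/bash
# Sketch only: load a file containing one well-formed and one malformed row,
# then query it to show that Hive does not validate the data on load.

printf '1,alice\ntwo,bob\n' > /tmp/people.csv          # 'two' is not a valid INT
hdfs dfs -mkdir -p /tmp/people_demo
hdfs dfs -put -f /tmp/people.csv /tmp/people_demo/

hive -e "
CREATE EXTERNAL TABLE IF NOT EXISTS people_demo (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/tmp/people_demo';

-- the malformed row is not rejected; its id column comes back as NULL
SELECT * FROM people_demo;
"
```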