Member since
01-25-2019
75
Posts
10
Kudos Received
13
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2434 | 02-25-2021 02:10 AM
 | 1173 | 02-23-2021 11:31 PM
 | 2238 | 02-18-2021 10:18 PM
 | 3180 | 02-11-2021 10:08 PM
 | 15878 | 02-01-2021 01:47 PM
02-01-2021
01:27 PM
Hello @AHassan Did you try increasing the Hive Tez container memory to see whether it fixes your issue? After connecting to beeline, first check the current container size:

SET hive.tez.container.size;

Once found, try doubling it. Let's say the container size is 5 GB; then set it to 10 GB and re-run the query (the value is in MB):

SET hive.tez.container.size=10240;

The reason I am asking you to increase the container size is that I can see the attempts failing due to OOM. If the above still fails, tune the following:

tez.runtime.io.sort.mb should not be more than 2 GB (ideally 40% of the Tez container size)
tez.runtime.unordered.output.buffer.size-mb=1000 (ideally 10% of the Tez container size)
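As a rough sketch of the sizing rules above (assuming the doubled 10 GB container and values expressed in MB, as the properties expect), the two tez.runtime values could be derived like this:

```shell
# Sketch: derive Tez runtime buffer sizes from the container size.
# Assumption: container doubled from 5 GB to 10 GB, as in the example above.
CONTAINER_MB=10240

# tez.runtime.io.sort.mb: ~40% of the container, but never more than 2 GB
SORT_MB=$(( CONTAINER_MB * 40 / 100 ))
if [ "$SORT_MB" -gt 2048 ]; then SORT_MB=2048; fi

# tez.runtime.unordered.output.buffer.size-mb: ~10% of the container
BUFFER_MB=$(( CONTAINER_MB / 10 ))

echo "hive.tez.container.size=$CONTAINER_MB"
echo "tez.runtime.io.sort.mb=$SORT_MB"
echo "tez.runtime.unordered.output.buffer.size-mb=$BUFFER_MB"
```

For a 10240 MB container this yields a sort buffer capped at 2048 MB (40% would be 4096 MB, above the cap) and an unordered output buffer of 1024 MB.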
02-01-2021
01:16 PM
Hello @anujseeker The best option for submitting queries to Hive is to use HiveServer2, not the Hive CLI; the Hive CLI is deprecated. Coming to your main question, that you are unable to create a database: you need to check the Hive CLI logs (under /tmp/<userid>/hive) to gather more info on why the path was not created, and the Hive Metastore logs (under /var/log/hive/). This is because the Hive CLI (client) talks directly to HMS and bypasses HiveServer2. If you could share the details from the Hive CLI log and the HMS logs, it would be easier to guide you through the next steps. Regards, Tushar https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
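As a hypothetical illustration of pulling failures out of those logs (the log file and its contents below are simulated for the example; on a real node you would point grep at the log under /tmp/<userid>/hive instead):

```shell
# Simulate a hive cli log file for illustration only;
# the directory and log lines are made up for this sketch.
mkdir -p /tmp/demo_hive_logs
printf 'INFO  metastore: Trying to connect to metastore\nERROR ql.Driver: FAILED: Execution Error, return code 1\n' \
  > /tmp/demo_hive_logs/hive.log

# Count ERROR lines; on a real node: grep -c 'ERROR' /tmp/<userid>/hive/hive.log
grep -c 'ERROR' /tmp/demo_hive_logs/hive.log
```

Sharing the ERROR lines (and a few lines of context around them) from both logs is usually enough to pinpoint why the database path was not created.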
01-26-2021
07:34 PM
Hey @saurabh707 Could you please try the below. Update the Hive log4j configs through CM:

Hive on Tez -> Configuration -> HiveServer2 Logging Advanced Configuration Snippet (Safety Valve)
Hive -> Configuration -> Hive Metastore Server Logging Advanced Configuration Snippet (Safety Valve)

Add the following to the config:

appender.DRFA.strategy.action.type=DELETE
appender.DRFA.strategy.action.basepath=${log.dir}
appender.DRFA.strategy.action.maxdepth=1
appender.DRFA.strategy.action.PathConditions.glob=${log.file}.*
appender.DRFA.strategy.action.PathConditions.type=IfFileName
appender.DRFA.strategy.action.PathConditions.nestedConditions.type=IfAccumulatedFileCount
appender.DRFA.strategy.action.PathConditions.nestedConditions.exceeds=10

Let me know if the above addresses it.
11-08-2020
08:06 PM
@avlasenko It seems Impala is having trouble communicating with HMS. Could you please check if you are able to perform the same from Hive?
11-08-2020
07:15 PM
@ateka_18 As mentioned before, a view is just a query statement stored in HMS; it is not actually a table. If you want to know the size of a table, the best approach is:

hdfs dfs -du -s -h /path/to/table

Let's say you have a table named test (/user/hive/warehouse/default.db/test) with several partitions under it (part1, part2, part3). To get the size of the table:

hdfs dfs -du -s -h /user/hive/warehouse/default.db/test

Let me know if the above helps.
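To illustrate how the per-partition sizes roll up into the table total, here is a sketch that sums simulated `hdfs dfs -du` output with awk (the byte counts and partition paths are made up; the real command needs a cluster):

```shell
# Simulated `hdfs dfs -du /user/hive/warehouse/default.db/test` output:
# one line per partition, first column is size in bytes.
printf '1048576 /user/hive/warehouse/default.db/test/part1\n2097152 /user/hive/warehouse/default.db/test/part2\n1048576 /user/hive/warehouse/default.db/test/part3\n' |
  awk '{ total += $1 } END { printf "%.0f MB\n", total / (1024 * 1024) }'
```

This is exactly what `-du -s` does for you in one step: the three partitions (1 MB + 2 MB + 1 MB) sum to 4 MB for the table.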
11-07-2020
07:45 AM
@ateka_18 A VIEW is just a query statement saved in HMS, so what you are really measuring is the size of the underlying table, and the best option for that is:

hdfs dfs -ls <path of the table>

With regards to the error you are observing, it clearly says that ANALYZE TABLE is not supported for views; you can run it on tables.
11-04-2020
08:51 AM
1 Kudo
Hey @banshidhar_saho Glad to hear it works. Regarding permissions, it depends on the ACL permissions you grant. In the previous comment I had set rwx for the group; you can likewise set something for "other":

hdfs dfs -setfacl -m default:group:<group-name>:rwx <path>
hdfs dfs -setfacl -m default:other::--- <path>

Can you try the above and let me know if it works?
11-04-2020
06:54 AM
@pphot Yes, HDFS ACLs come into the picture even if you use Impala; after all, Impala is a client of the HDFS service. If the HDFS path lacks permissions for, say, the impala user, then Impala will be unable to read the data from HDFS and your query will eventually fail with a permission-denied error. Let me know if the above clarifies your doubt.
11-04-2020
06:51 AM
1 Kudo
@banshidhar_saho This is the expected behavior: Hadoop has no groups derived from the user name. It does not use the OS-level groups of the client; instead, a new file inherits its group from the parent directory. The reason is that group resolution happens on the NameNode, where groups related to the user may not exist. To fix this in your environment, modify the owner of the parent directory so that new files pick up the correct group. Run:

hdfs dfs -chown -R username:groupname <path to dir>

This recursively changes the owner and group for the given path. (Note: run this as the hdfs user.) You can further use ACLs to ensure the groups you need have access:

hdfs dfs -setfacl -m default:group:<group-name>:rwx <path>

Let me know if this helps.
11-04-2020
01:32 AM
@drgenious Could you please connect to impala-shell and submit the same query, just to confirm that the error is not from Impala?