Member since: 04-11-2016
Posts: 535
Kudos Received: 148
Solutions: 77
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 7438 | 09-17-2018 06:33 AM |
| 1801 | 08-29-2018 07:48 AM |
| 2703 | 08-28-2018 12:38 PM |
| 2100 | 08-03-2018 05:42 AM |
| 1963 | 07-27-2018 04:00 PM |
09-24-2024
08:04 AM
I believe there is a better way to do this from the Hue configuration than changing the Python code:
> Navigate to HUE > Configuration > Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini
> Add [beeswax] max_catalog_sql_entries=15000 if you need to list 15000 entries at once instead of the default 5k
> Save and restart the affected services
Note that you can't expect optimal performance from the Hue UI when loading the table list if this number is too high. This is not a Hue limitation; the cap is deliberate so that the table list loads faster.
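For reference, the safety-valve snippet is just the two lines below (15000 is the example value from this thread; pick a limit that fits your catalog size):

```ini
[beeswax]
max_catalog_sql_entries=15000
```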
04-14-2024
10:58 PM
@Richardxu18, as this is an older article, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this article as a reference in your new post.
08-19-2022
04:26 AM
Hello @ssubhas, the above worked. However, when we try the same with LazySimpleSerDe, it is able to escape the delimiter but loads a few NULL values at the end. Below is the statement I used:

CREATE TABLE test1 (5columns string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'separatorChar' = '|',
  'escapeChar' = '\\'
)
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';

NOTE: I also tried field.delim=| and format.serialization=|. It works when the SerDe properties are not specified and we use the ESCAPED BY clause as you suggested. Is there any way to make it work with LazySimpleSerDe as well? (The data is pipe-delimited and may also contain pipes within the data.) Please suggest.
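One thing worth noting for anyone hitting this: separatorChar and escapeChar are OpenCSVSerde property names; LazySimpleSerDe reads field.delim and escape.delim instead. A minimal sketch using LazySimpleSerDe's own keys (table and column names are hypothetical, and this is untested against the trailing-NULL symptom above):

```sql
-- Sketch, not a verified fix: LazySimpleSerDe ignores OpenCSVSerde-style
-- keys ('separatorChar'/'escapeChar') and uses its own property names.
CREATE TABLE test1_lazy (
  c1 STRING, c2 STRING, c3 STRING, c4 STRING, c5 STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'field.delim'  = '|',   -- field separator
  'escape.delim' = '\\'   -- escape character for pipes inside the data
)
STORED AS TEXTFILE;
```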
02-15-2022
08:00 AM
Hi @CN, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
01-24-2022
02:38 AM
Hi, when I run a Hive query it shows the error below:

Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

The error does not occur every time: the same query succeeds for some users and fails for others. Could you please suggest the cause and how to overcome this? This is urgent; please help us.
01-19-2021
03:38 AM
Thank you so much Subha, it worked like magic.
10-22-2020
04:59 AM
I did as @ssubhas said, setting the attributes to false:

spark.sql("SET hive.enforce.bucketing=false")
spark.sql("SET hive.enforce.sorting=false")
spark.sql("SET spark.hadoop.hive.exec.dynamic.partition = true")
spark.sql("SET spark.hadoop.hive.exec.dynamic.partition.mode = nonstrict")
newPartitionsDF.write.mode(SaveMode.Append).format("hive").insertInto(this.destinationDBdotTableName)

Spark can create the bucketed table in Hive with no issues. Spark inserted the data into the table, but it totally ignored the fact that the table is bucketed, so when I open a partition I see only one file. When inserting, we should set hive.enforce.bucketing = true, not false; with that setting you will face the following error in the Spark logs:

org.apache.spark.sql.AnalysisException: Output Hive table `hive_test_db`.`test_bucketing` is bucketed but Spark currently does NOT populate bucketed output which is compatible with Hive.;

This means that Spark doesn't support insertion into bucketed Hive tables. The first answer to this Stack Overflow question explains that what @ssubhas suggested is a workaround that doesn't guarantee bucketing.
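For anyone skimming, a minimal sketch of the failing path described above (spark, SaveMode, newPartitionsDF, and the table name all come from this post, not from a verified build):

```scala
// With bucketing enforcement ON, the insert fails fast instead of
// silently writing un-bucketed files into the partition.
spark.sql("SET hive.enforce.bucketing=true")
spark.sql("SET hive.enforce.sorting=true")

// Expected failure, per the error quoted above:
// org.apache.spark.sql.AnalysisException: Output Hive table ... is bucketed
// but Spark currently does NOT populate bucketed output which is compatible with Hive.
newPartitionsDF.write
  .mode(SaveMode.Append)
  .format("hive")
  .insertInto("hive_test_db.test_bucketing")
```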
07-30-2020
08:55 AM
1 Kudo
Brutal, I know, but a one-liner:

cd $(grep -i mpack /etc/ambari-server/conf/ambari.properties | awk -F'=' '{print $2}'); ls -l | grep -v cache | grep -v mpacks_replay.log | grep -v total | awk '{print $9}' | xargs

The last bit is handy if you want to create a Ruby fact out of the data.
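A slightly tidier equivalent, for readability (a sketch; like the one-liner above, it assumes the mpacks staging path is the only `mpack` key in ambari.properties):

```bash
#!/usr/bin/env bash
# Resolve the mpacks staging directory from ambari.properties
# (first case-insensitive match on "mpack", value after '=').
mpacks_dir=$(awk -F'=' 'tolower($0) ~ /mpack/ {print $2; exit}' \
  /etc/ambari-server/conf/ambari.properties)

# List installed mpacks, skipping the cache dir and the replay log,
# flattened to one space-separated line (handy for a Ruby fact).
ls "$mpacks_dir" | grep -Ev 'cache|mpacks_replay\.log' | xargs
```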
06-07-2020
11:36 PM
@oudaysaada As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also give you the opportunity to provide details specific to your environment that could aid others in providing a more accurate answer to your question.
05-02-2020
09:35 AM
@ssubhas This did not work either. Can you help me out? I am unable to connect to the Hive service from PuTTY.