Member since: 02-25-2016
Posts: 72
Kudos Received: 34
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3655 | 07-28-2017 10:51 AM
 | 2833 | 05-08-2017 03:11 PM
 | 1191 | 04-03-2017 07:38 PM
 | 2898 | 03-21-2017 06:56 PM
 | 1186 | 02-09-2017 08:28 PM
10-10-2017
01:13 PM
1 Kudo
Hi Team, I was trying to load a dataframe into a Hive table, bucketed by one of the columns, and I am facing an error:

File "<stdin>", line 1, in <module>
AttributeError: 'DataFrameWriter' object has no attribute 'bucketBy'

Here is the statement I am trying to run:

rs.write.bucketBy(4, "Column1").sortBy("column2").saveAsTable("database.table")

Can you please help me out with this?
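For reference, the bucketed Hive table this statement targets corresponds to DDL along these lines; a minimal sketch, reusing the placeholder names from the statement above, with the column types assumed:

CREATE TABLE database.table (
  Column1 STRING,
  column2 STRING
)
CLUSTERED BY (Column1) SORTED BY (column2) INTO 4 BUCKETS;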
Labels:
- Apache Spark
08-30-2017
07:01 PM
1 Kudo
Hello Team, can someone please help me understand the comparison/differences between the Hive CLI and Beeline?
Labels:
- Apache Hive
07-28-2017
11:04 AM
1 Kudo
We generally encounter such errors when the delimiter specified in the command doesn't match the delimiter in the input file. Also make sure you are giving the complete and correct path of the file. Please try the syntax below:

A = LOAD '/path_to_file' USING PigStorage('|') AS (aa,bb,cc,dd,ee);
07-28-2017
10:51 AM
1 Kudo
Query execution time depends on multiple factors:
1. Mainly the Hive query design: the joins and the columns being pulled.
2. The YARN/Tez container size allocated; this depends on where you are running.
3. The queue you are running your job in: check whether the queue is free.

To answer your question on why one of the reducers is taking 1000 tasks, please check the hive.exec.reducers.max value defined. If you want to experiment with the number of reducers, try changing the value of hive.exec.reducers.bytes.per.reducer (preferably assign a smaller value, as this setting is inversely proportional to the number of reducers); see the sketch below.
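A minimal HiveQL sketch of the two settings mentioned above; the bytes-per-reducer value here is only an illustrative assumption, not a recommendation:

-- check/cap the maximum number of reducers (1099 is the usual default)
SET hive.exec.reducers.max=1099;
-- smaller bytes-per-reducer means more reducers (illustrative 128 MB value)
SET hive.exec.reducers.bytes.per.reducer=134217728;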
07-21-2017
09:19 PM
1 Kudo
@Varun Please see the settings below to control the number of reducers.

mapred.reduce.tasks = -1 -- lets Tez determine the number of reducers to initiate; this is the first property considered once Tez starts the query
hive.tez.auto.reducer.parallelism = true; -- when this is set to TRUE, Hive will estimate data sizes and set parallelism estimates; Tez will sample source vertices' output sizes and adjust the estimates at run time
hive.tez.min.partition.factor = 0.25; -- when auto parallelism is enabled, this puts a lower limit on the number of reducers that Tez specifies
hive.tez.max.partition.factor = 2.0; -- this property over-partitions data in shuffle edges
hive.exec.reducers.max -- maximum number of reducers, 1099 by default
hive.exec.reducers.bytes.per.reducer = 256 MB, which is 268435456 bytes

Now, to calculate the number of reducers, we put it all together with this formula. From the Explain plan we also need the size of the reducer stage output; let's assume 200,000 bytes:

Max(1, Min(hive.exec.reducers.max [1099], Reducer stage estimate / hive.exec.reducers.bytes.per.reducer)) x hive.tez.max.partition.factor [2]

= Max(1, Min(1099, 200000 / 268435456)) x 2
= Max(1, 0.0007) x 2
= 1 x 2
= 2

Tez will spawn 2 reducers. In this case we can legitimately make Tez initiate a higher number of reducers by lowering the value of hive.exec.reducers.bytes.per.reducer, for example to about 10 KB (10,432 bytes):

Max(1, Min(1099, 200000 / 10432)) x 2 = Max(1, 19) x 2 = 38

Please note that a higher number of reducers doesn't mean better performance.
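For convenience, a minimal HiveQL sketch consolidating the settings above; the values are the defaults quoted in this post, not tuning recommendations:

SET mapred.reduce.tasks=-1;
SET hive.tez.auto.reducer.parallelism=true;
SET hive.tez.min.partition.factor=0.25;
SET hive.tez.max.partition.factor=2.0;
SET hive.exec.reducers.max=1099;
-- 256 MB; lower this value to get more reducers
SET hive.exec.reducers.bytes.per.reducer=268435456;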
07-21-2017
08:44 PM
1 Kudo
@Varun R Optimization varies in every case; it depends on the incoming data and file sizes. In general, please use these settings for fine tuning.

Enable predicate pushdown (PPD) to filter at the storage layer:

SET hive.optimize.ppd=true;
SET hive.optimize.ppd.storage=true;

Vectorized query execution processes data in batches of 1024 rows instead of one row at a time:

SET hive.vectorized.execution.enabled=true;
SET hive.vectorized.execution.reduce.enabled=true;

Enable the cost-based optimizer (CBO) for efficient query execution based on cost, and fetch table statistics:

SET hive.cbo.enable=true;
SET hive.compute.query.using.stats=true;
SET hive.stats.fetch.column.stats=true;
SET hive.stats.fetch.partition.stats=true;

Partition and column statistics are then fetched from the metastore. Use this with caution: if you have too many partitions and/or columns, it could degrade performance.

Control reducer output:

SET hive.tez.auto.reducer.parallelism=true;

Partition the table on the necessary column, and also bucket the tables (identify the bucketing column wisely). Tuning also depends on the query itself: check the Explain plan and the number of mappers and reducers spawned, for example as below.
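A minimal sketch of that last step; the table and column names here are hypothetical:

EXPLAIN SELECT col1, COUNT(*) FROM my_table GROUP BY col1;
-- the plan lists the Tez vertices (e.g. Map 1, Reducer 2), showing how the work is split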
07-21-2017
06:33 PM
2 Kudos
@Simran Kaur I see in stdout that the Oozie launcher failed. Are you trying to run a Hive action in Oozie? If that's the case, please use the command yarn logs -applicationId application_1499692338187_45811 to get the logs, or follow the KB article below to trace the logs and debug further: https://community.hortonworks.com/articles/9148/troubleshooting-an-oozie-flow.html
07-21-2017
06:14 PM
2 Kudos
@Helmi Khalifa Please use the syntax below to load data from HDFS into Hive tables:

LOAD DATA INPATH '/hdfs/path' OVERWRITE INTO TABLE TABLE_NAME;

In case you are trying to load a specific partition of the table:

LOAD DATA INPATH '/hdfs/path' OVERWRITE INTO TABLE TABLE_NAME PARTITION (ds='2008-08-15');
07-14-2017
06:19 PM
2 Kudos
For every reducer, a certain number of tasks are created. Can someone explain what factor decides the number of tasks to be created for each reducer?
Labels:
- Apache Hadoop
- Apache YARN
06-23-2017
02:05 AM
Get the HDFS path where the Hive table's files are stored, then use hdfs dfs -du -s -h /hdfs_path to get the size in human-readable format.
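To find that path in the first place, a minimal HiveQL sketch (my_table is a hypothetical name):

DESCRIBE FORMATTED my_table;
-- the Location field in the output is the table's HDFS path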