Member since: 09-16-2021
Posts: 404
Kudos Received: 54
Solutions: 35

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 549 | 05-22-2025 03:00 AM |
| | 322 | 05-19-2025 03:02 AM |
| | 306 | 04-23-2025 12:46 AM |
| | 279 | 03-28-2025 03:47 AM |
| | 1093 | 02-10-2025 08:58 AM |
06-02-2025
06:55 AM
It looks like the query failed with a ClassCastException (org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to org.apache.hadoop.io.LongWritable), which indicates a mismatch between the data type Hive expects and the data type it actually encounters while processing the query. From the error trace, Hive is handling the value as a DECIMAL (HiveDecimalWritable), while the table metadata declares it as a long/BIGINT (LongWritable). One possible reason is a schema mismatch: the Hive table schema defines the column as BIGINT, but the underlying data files (e.g., Parquet, ORC, ...) actually contain DECIMAL values for that column.

To validate, run DESCRIBE FORMATTED <your_table_name>; for the table involved in the failing query and pay close attention to the data types of all columns, especially those that might be involved in the conversion. Compare these Hive schema data types with the actual data types in your source data files: if you're using Parquet, inspect the file schema with tools like parquet-tools; if you're using ORC, use hive --orcfiledump. Also make sure that the table's SerDe points to a valid underlying file format.
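For illustration only (the table name, column name, and DECIMAL precision below are hypothetical placeholders), the Hive-side checks could look like this:

-- Inspect the Hive metastore schema of the table from the failing query
DESCRIBE FORMATTED your_table_name;
-- Column-level detail for a suspect column (supported in recent Hive versions)
DESCRIBE FORMATTED your_table_name suspect_col;
-- Compare the reported types with the file-level schema (parquet-tools / hive --orcfiledump, as above).
-- If the files genuinely contain DECIMAL values, one possible fix is to align the table definition,
-- e.g. a metadata-only change such as:
ALTER TABLE your_table_name CHANGE COLUMN suspect_col suspect_col DECIMAL(18,2);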
05-26-2025
06:45 AM
Yes, your understanding is correct.
05-26-2025
05:06 AM
Impala statistics are stored in the backend database as table/partition metadata, and HS2 propagates that metadata into the Tez DAG plan. That is the reason for the failure. The config above tells Hive to skip serializing those Impala stats: hive.plan.mapwork.serialization.skip.properties=impala_intermediate_stats_chunk.*
05-22-2025
03:00 AM
1 Kudo
Validate the explain plan using explain extended <query>. If the explain plan contains "impala_intermediate_stats_chunk", set the following session-level property and run the impacted query:

hive.plan.mapwork.serialization.skip.properties=impala_intermediate_stats_chunk.*

When setting this property, you may encounter the error:

Cannot modify hive.plan.mapwork.serialization.skip.properties at runtime. It is not in the list of parameters that are allowed to be modified at runtime (state=42000, code=1)

To avoid this issue, whitelist the parameter and restart HS2 before retrying the query:

hive.security.authorization.sqlstd.confwhitelist.append=hive\.plan\.mapwork\.serialization\.skip\.properties
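A minimal sketch of that session-level flow (the query shown is just a placeholder for the impacted statement):

-- Check whether the plan carries the Impala stats chunk
EXPLAIN EXTENDED <impacted query>;
-- After whitelisting the property and restarting HS2, set it for the session and re-run
SET hive.plan.mapwork.serialization.skip.properties=impala_intermediate_stats_chunk.*;
<impacted query>;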
05-19-2025
03:10 AM
I would like to mention a few more recommendations. Hive will be fast with columnar storage and predicate pushdown:
- Store the table as ORC (with Snappy/Zlib compression) if possible. Ref - https://docs.cloudera.com/runtime/7.2.0/hive-performance-tuning/topics/hive_prepare_to_tune_performance.html
- Collect statistics and enable predicate pushdown (hive.optimize.ppd=true, the default in recent Hive versions) so that filtering on code skips irrelevant data.
- If the code column has a limited number of distinct values, consider partitioning or bucketing on it: a partitioned ORC table will read only the needed partitions.
- Keep vectorization enabled (hive.vectorized.execution.enabled=true), which processes rows in batches and gives a big speedup for scans.
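A hedged sketch of those recommendations (the table name, columns, and compression choice below are illustrative, not taken from your setup):

-- Columnar storage with compression
CREATE TABLE mytable_orc (code STRING, surname STRING, name STRING)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='SNAPPY');

-- Gather table- and column-level statistics so the optimizer can prune work
ANALYZE TABLE mytable_orc COMPUTE STATISTICS;
ANALYZE TABLE mytable_orc COMPUTE STATISTICS FOR COLUMNS;

-- Defaults in recent Hive versions, shown here for completeness
SET hive.optimize.ppd=true;
SET hive.vectorized.execution.enabled=true;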
05-19-2025
03:02 AM
Hive is not a low-latency OLTP database like Oracle; it is designed for batch processing, not fast single-row lookups. Every SELECT you run triggers a full query execution plan. From the code snippet, the queries execute row by row (i.e., executeQuery() is called multiple times), which is expensive. hive.fetch.task.conversion won't help here, since it only optimizes simple SELECTs into client-side fetches, and Hive still builds a full plan behind the scenes.

A better approach would be to refactor the loop into a single IN clause:

SELECT code, surname, name
FROM mytable
WHERE code IN ('ABC123', 'FLS163', 'XYZ001', ...)

Then store the results in a map:

// r is the ResultSet returned by the single IN-clause query
Map<String, String> codeToName = new HashMap<>();
while (r.next()) {
    codeToName.put(r.getString("code"), r.getString("surname") + " " + r.getString("name"));
}

Even if you must process the data row by row, fetching it all in one batch drastically reduces query overhead. If the list is too large for an IN clause, insert the values into a temporary Hive table and join against it:

-- Insert your id list into a temp table
CREATE TEMPORARY TABLE tmp_ids (code STRING);
-- Then insert all your codes into tmp_ids
SELECT a.code, a.surname, a.name
FROM mytable a
JOIN tmp_ids b ON a.code = b.code;

Hive will optimize the join rather than executing multiple separate queries.
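For completeness, a small sketch of populating tmp_ids with the codes from the earlier example (in practice the list would be generated programmatically or loaded from a file):

INSERT INTO tmp_ids VALUES ('ABC123'), ('FLS163'), ('XYZ001');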
04-23-2025
05:49 AM
Please try the same in Hive; I haven't validated it from the Impala end.
04-23-2025
12:46 AM
When using the -f flag with Beeline to execute an HQL script file, Beeline follows a fail-fast approach: if any single SQL statement fails, Beeline stops execution immediately and exits with an error. This behavior helps ensure data integrity, since it avoids executing dependent or subsequent statements that may rely on a failed operation (e.g., a missing table or a failed insert). Beeline does not support built-in error handling or continuation of script execution after a failed statement within the same script file.

To continue executing the remaining statements even if one fails, use a Bash (or similar) wrapper script to:
- Loop over each statement/block.
- Execute each one using a separate beeline -e "<statement>" call.
- Check the exit code after each call.
- Log errors to a file (e.g., stderr, exit code, or custom messages).
- Proceed to the next statement regardless of the previous outcome.

Sample BASH script

[root@node4 ~]# cat run_hive_statements.sh
#!/bin/bash
# HQL file containing the SQL-like Hive statements to run
HQL_FILE="sample_hql_success_failure.hql"
# Log file for capturing errors and output, named with the current date
LOG_FILE="error_log_$(date +%Y%m%d).log"

# Strip full-line comments and blank lines, split the remaining text into
# statements on ';', collapse each statement onto a single line, and run
# the statements one by one, continuing after failures.
sed -e '/^\s*--/d' -e '/^\s*$/d' "$HQL_FILE" | awk '
BEGIN { RS=";" }
{
    gsub(/\n/, " ", $0);                     # collapse multi-line statements onto one line
    gsub(/^[ \t\r]+|[ \t\r]+$/, "", $0);     # trim leading/trailing whitespace
    if (length($0)) print $0 ";"
}
' | while read -r STATEMENT; do
    # Execute each cleaned and trimmed SQL statement
    echo "Executing: $STATEMENT"
    beeline -n hive -p hive -e "$STATEMENT" >> "$LOG_FILE" 2>&1
    EXIT_CODE=$?
    if [ "$EXIT_CODE" -ne 0 ]; then
        # Log the failing statement and exit code, then move on to the next statement
        echo "Error executing statement: $STATEMENT" >> "$LOG_FILE"
        echo "Beeline exited with code: $EXIT_CODE" >> "$LOG_FILE"
    fi
done
[root@node4 ~]#

Sample HQL file

[root@node4 ~]# cat sample_hql_success_failure.hql
-- This script demonstrates both successful and failing INSERT statements
-- Successful INSERT
CREATE TABLE IF NOT EXISTS successful_table (id INT, value STRING);
INSERT INTO successful_table VALUES (1, 'success1');
INSERT INTO successful_table VALUES (2, 'success2');
SELECT * FROM successful_table;
-- Intentionally failing INSERT (ParseException)
CREATE TABLE IF NOT EXISTS failing_table (id INT, value INT);
INSERT INTO failing_table VALUES (1, 'this_will_fail);
SELECT * FROM failing_table;
-- Another successful INSERT (will only run if the calling script continues)
CREATE TABLE IF NOT EXISTS another_successful_table (id INT, data STRING);
INSERT INTO another_successful_table VALUES (101, 'continued_data');
SELECT * FROM another_successful_table;
-- Intentionally failing INSERT (table does not exist)
INSERT INTO non_existent_table SELECT * FROM successful_table;
[root@node4 ~]#

Sample ERROR log created as part of the script

[root@node4 ~]# grep -iw "error" error_log_20250423.log
Error: Error while compiling statement: FAILED: ParseException line 1:54 character '<EOF>' not supported here (state=42000,code=40000)
Error: Error while compiling statement: FAILED: SemanticException [Error 10001]: Line 1:12 Table not found 'non_existent_table' (state=42S02,code=10001)
[root@node4 ~]#
04-22-2025
10:46 PM
The mentioned query also runs on the latest CDP versions (i.e., CDP 7.1.9 SP1 / CDP 7.3.1). Sample results for your reference:

0: jdbc:hive2://node2.playground-ggangadharan> SELECT * FROM MAIN_CALLS CALLS
. . . . . . . . . . . . . . . . . . . . . . .> where (BSC in (4,5)
. . . . . . . . . . . . . . . . . . . . . . .> or exists (select 1 from TEMP_TABLE_CELL tt where tt.bsc = bsc and tt.cell_primary = cell_primary));
INFO : Compiling command(queryId=hive_20250423054054_dd36fbb3-7367-45de-9f98-56428382697d): SELECT * FROM MAIN_CALLS CALLS
where (BSC in (4,5)
or exists (select 1 from TEMP_TABLE_CELL tt where tt.bsc = bsc and tt.cell_primary = cell_primary))
INFO : Warning: Map Join MAPJOIN[17][bigTable=?] in task 'Map 1' is a cross product
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:[FieldSchema(name:calls.call_unique_id, type:bigint, comment:null), FieldSchema(name:calls.imsi, type:bigint, comment:null), FieldSchema(name:calls.imei, type:bigint, comment:null), FieldSchema(name:calls.bsc, type:bigint, comment:null), FieldSchema(name:calls.cell_primary, type:bigint, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20250423054054_dd36fbb3-7367-45de-9f98-56428382697d); Time taken: 1.262 seconds
INFO : Executing command(queryId=hive_20250423054054_dd36fbb3-7367-45de-9f98-56428382697d): SELECT * FROM MAIN_CALLS CALLS
where (BSC in (4,5)
or exists (select 1 from TEMP_TABLE_CELL tt where tt.bsc = bsc and tt.cell_primary = cell_primary))
INFO : Query ID = hive_20250423054054_dd36fbb3-7367-45de-9f98-56428382697d
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Subscribed to counters: [] for queryId: hive_20250423054054_dd36fbb3-7367-45de-9f98-56428382697d
INFO : Session is already open
INFO : Dag name: SELECT * FROM MAIN_CALLS CA...cell_primary)) (Stage-1)
INFO : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
INFO : HS2 Host: [node4.playground-ggangadharan.coelab.cloudera.com], Query ID: [hive_20250423054054_dd36fbb3-7367-45de-9f98-56428382697d], Dag ID: [null], DAG Session ID: [null]
INFO : Status: Running (Executing on YARN cluster with App id application_1745308680860_0001)
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 2 .......... container SUCCEEDED 1 1 0 0 0 0
Reducer 3 ...... container SUCCEEDED 2 2 0 0 0 0
Map 1 .......... container SUCCEEDED 1 1 0 0 0 0
----------------------------------------------------------------------------------------------
VERTICES: 03/03 [==========================>>] 100% ELAPSED TIME: 17.64 s
----------------------------------------------------------------------------------------------
INFO : Status: DAG finished successfully in 17.40 seconds
INFO : DAG ID: null
INFO :
INFO : Query Execution Summary
INFO : ----------------------------------------------------------------------------------------------
INFO : OPERATION DURATION
INFO : ----------------------------------------------------------------------------------------------
INFO : Compile Query 1.26s
INFO : Prepare Plan 0.23s
INFO : Get Query Coordinator (AM) 0.01s
INFO : Submit Plan 0.18s
INFO : Start DAG 0.21s
INFO : Run DAG 17.40s
INFO : ----------------------------------------------------------------------------------------------
INFO :
INFO : Task Execution Summary
INFO : ----------------------------------------------------------------------------------------------
INFO : VERTICES DURATION(ms) CPU_TIME(ms) GC_TIME(ms) INPUT_RECORDS OUTPUT_RECORDS
INFO : ----------------------------------------------------------------------------------------------
INFO : Map 1 14820.00 8,070 56 6 0
INFO : Map 2 3066.00 4,800 59 4 1
INFO : Reducer 3 1.00 510 0 1 2
INFO : ----------------------------------------------------------------------------------------------
INFO : FileSystem Counters Summary
INFO :
INFO : Scheme: FILE
INFO : ----------------------------------------------------------------------------------------------
INFO : VERTICES BYTES_READ READ_OPS LARGE_READ_OPS BYTES_WRITTEN WRITE_OPS
INFO : ----------------------------------------------------------------------------------------------
INFO : Map 1 0B 0 0 0B 0
INFO : Map 2 112B 0 0 81B 0
INFO : Reducer 3 81B 0 0 108B 0
INFO : ----------------------------------------------------------------------------------------------
INFO :
INFO : Scheme: HDFS
INFO : ----------------------------------------------------------------------------------------------
INFO : VERTICES BYTES_READ READ_OPS LARGE_READ_OPS BYTES_WRITTEN WRITE_OPS
INFO : ----------------------------------------------------------------------------------------------
INFO : Map 1 1.23KB 4 0 372B 2
INFO : Map 2 894B 2 0 0B 0
INFO : Reducer 3 0B 0 0 0B 0
INFO : ----------------------------------------------------------------------------------------------
INFO :
INFO : org.apache.tez.common.counters.DAGCounter:
INFO : NUM_SUCCEEDED_TASKS: 4
INFO : TOTAL_LAUNCHED_TASKS: 4
INFO : DATA_LOCAL_TASKS: 2
INFO : AM_CPU_MILLISECONDS: 3160
INFO : AM_GC_TIME_MILLIS: 0
INFO : INITIAL_HELD_CONTAINERS: 0
INFO : TOTAL_CONTAINERS_USED: 2
INFO : TOTAL_CONTAINER_ALLOCATION_COUNT: 2
INFO : TOTAL_CONTAINER_LAUNCH_COUNT: 2
INFO : TOTAL_CONTAINER_REUSE_COUNT: 2
INFO : File System Counters:
INFO : FILE_BYTES_READ: 193
INFO : FILE_BYTES_WRITTEN: 189
INFO : HDFS_BYTES_READ: 2124
INFO : HDFS_BYTES_WRITTEN: 372
INFO : HDFS_READ_OPS: 6
INFO : HDFS_WRITE_OPS: 2
INFO : HDFS_OP_CREATE: 1
INFO : HDFS_OP_GET_FILE_STATUS: 2
INFO : HDFS_OP_OPEN: 4
INFO : HDFS_OP_RENAME: 1
INFO : org.apache.tez.common.counters.TaskCounter:
INFO : REDUCE_INPUT_GROUPS: 1
INFO : REDUCE_INPUT_RECORDS: 1
INFO : COMBINE_INPUT_RECORDS: 0
INFO : SPILLED_RECORDS: 2
INFO : NUM_SHUFFLED_INPUTS: 2
INFO : NUM_SKIPPED_INPUTS: 1
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : MERGED_MAP_OUTPUTS: 1
INFO : GC_TIME_MILLIS: 115
INFO : TASK_DURATION_MILLIS: 17839
INFO : CPU_MILLISECONDS: 13380
INFO : PHYSICAL_MEMORY_BYTES: 8491368448
INFO : VIRTUAL_MEMORY_BYTES: 21970862080
INFO : COMMITTED_HEAP_BYTES: 8491368448
INFO : INPUT_RECORDS_PROCESSED: 3
INFO : INPUT_SPLIT_LENGTH_BYTES: 806
INFO : OUTPUT_RECORDS: 2
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_BYTES: 5
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 21
INFO : OUTPUT_BYTES_PHYSICAL: 73
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILLS_BYTES_READ: 25
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : SHUFFLE_CHUNK_COUNT: 1
INFO : SHUFFLE_BYTES: 49
INFO : SHUFFLE_BYTES_DECOMPRESSED: 21
INFO : SHUFFLE_BYTES_TO_MEM: 24
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_DISK_DIRECT: 25
INFO : NUM_MEM_TO_DISK_MERGES: 0
INFO : NUM_DISK_TO_DISK_MERGES: 0
INFO : SHUFFLE_PHASE_TIME: 13355
INFO : MERGE_PHASE_TIME: 60
INFO : FIRST_EVENT_RECEIVED: 13057
INFO : LAST_EVENT_RECEIVED: 13159
INFO : DATA_BYTES_VIA_EVENT: 0
INFO : HIVE:
INFO : CREATED_FILES: 1
INFO : DESERIALIZE_ERRORS: 0
INFO : RECORDS_IN_Map_1: 5
INFO : RECORDS_IN_Map_2: 4
INFO : RECORDS_OUT_0: 5
INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 0
INFO : RECORDS_OUT_INTERMEDIATE_Map_2: 1
INFO : RECORDS_OUT_INTERMEDIATE_Reducer_3: 2
INFO : RECORDS_OUT_OPERATOR_FIL_19: 4
INFO : RECORDS_OUT_OPERATOR_FIL_27: 5
INFO : RECORDS_OUT_OPERATOR_FS_29: 5
INFO : RECORDS_OUT_OPERATOR_GBY_21: 1
INFO : RECORDS_OUT_OPERATOR_GBY_23: 1
INFO : RECORDS_OUT_OPERATOR_MAPJOIN_26: 5
INFO : RECORDS_OUT_OPERATOR_MAP_0: 0
INFO : RECORDS_OUT_OPERATOR_RS_22: 1
INFO : RECORDS_OUT_OPERATOR_RS_24: 2
INFO : RECORDS_OUT_OPERATOR_SEL_20: 4
INFO : RECORDS_OUT_OPERATOR_SEL_25: 5
INFO : RECORDS_OUT_OPERATOR_SEL_28: 5
INFO : RECORDS_OUT_OPERATOR_TS_0: 5
INFO : RECORDS_OUT_OPERATOR_TS_2: 4
INFO : Shuffle Errors:
INFO : BAD_ID: 0
INFO : CONNECTION: 0
INFO : IO_ERROR: 0
INFO : WRONG_LENGTH: 0
INFO : WRONG_MAP: 0
INFO : WRONG_REDUCE: 0
INFO : Shuffle Errors_Reducer_3_INPUT_Map_2:
INFO : BAD_ID: 0
INFO : CONNECTION: 0
INFO : IO_ERROR: 0
INFO : WRONG_LENGTH: 0
INFO : WRONG_MAP: 0
INFO : WRONG_REDUCE: 0
INFO : TaskCounter_Map_1_INPUT_Reducer_3:
INFO : FIRST_EVENT_RECEIVED: 13025
INFO : INPUT_RECORDS_PROCESSED: 1
INFO : LAST_EVENT_RECEIVED: 13127
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : NUM_SHUFFLED_INPUTS: 1
INFO : SHUFFLE_BYTES: 24
INFO : SHUFFLE_BYTES_DECOMPRESSED: 10
INFO : SHUFFLE_BYTES_DISK_DIRECT: 0
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_TO_MEM: 24
INFO : SHUFFLE_PHASE_TIME: 13307
INFO : TaskCounter_Map_1_INPUT_calls:
INFO : INPUT_RECORDS_PROCESSED: 1
INFO : INPUT_SPLIT_LENGTH_BYTES: 487
INFO : TaskCounter_Map_1_OUTPUT_out_Map_1:
INFO : OUTPUT_RECORDS: 0
INFO : TaskCounter_Map_2_INPUT_tt:
INFO : INPUT_RECORDS_PROCESSED: 1
INFO : INPUT_SPLIT_LENGTH_BYTES: 319
INFO : TaskCounter_Map_2_OUTPUT_Reducer_3:
INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : OUTPUT_BYTES: 3
INFO : OUTPUT_BYTES_PHYSICAL: 25
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 11
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_RECORDS: 1
INFO : SHUFFLE_CHUNK_COUNT: 1
INFO : SPILLED_RECORDS: 1
INFO : TaskCounter_Reducer_3_INPUT_Map_2:
INFO : ADDITIONAL_SPILLS_BYTES_READ: 25
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : COMBINE_INPUT_RECORDS: 0
INFO : FIRST_EVENT_RECEIVED: 32
INFO : LAST_EVENT_RECEIVED: 32
INFO : MERGED_MAP_OUTPUTS: 1
INFO : MERGE_PHASE_TIME: 60
INFO : NUM_DISK_TO_DISK_MERGES: 0
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : NUM_MEM_TO_DISK_MERGES: 0
INFO : NUM_SHUFFLED_INPUTS: 1
INFO : NUM_SKIPPED_INPUTS: 1
INFO : REDUCE_INPUT_GROUPS: 1
INFO : REDUCE_INPUT_RECORDS: 1
INFO : SHUFFLE_BYTES: 25
INFO : SHUFFLE_BYTES_DECOMPRESSED: 11
INFO : SHUFFLE_BYTES_DISK_DIRECT: 25
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_TO_MEM: 0
INFO : SHUFFLE_PHASE_TIME: 48
INFO : SPILLED_RECORDS: 1
INFO : TaskCounter_Reducer_3_OUTPUT_Map_1:
INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : DATA_BYTES_VIA_EVENT: 0
INFO : OUTPUT_BYTES: 2
INFO : OUTPUT_BYTES_PHYSICAL: 48
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 10
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_RECORDS: 1
INFO : SPILLED_RECORDS: 0
INFO : org.apache.hadoop.hive.ql.exec.tez.HiveInputCounters:
INFO : GROUPED_INPUT_SPLITS_Map_1: 1
INFO : GROUPED_INPUT_SPLITS_Map_2: 1
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 2 .......... container SUCCEEDED 1 1 0 0 0 0
Reducer 3 ...... container SUCCEEDED 2 2 0 0 0 0
Map 1 .......... container SUCCEEDED 1 1 0 0 0 0
----------------------------------------------------------------------------------------------
VERTICES: 03/03 [==========================>>] 100% ELAPSED TIME: 17.73 s
----------------------------------------------------------------------------------------------
04-21-2025
09:35 AM
Did you mean the below query works as expected?

select b.proc, b.yr, b.RCNT
from table2 b
WHERE length(b.PROC)=5 AND (b.ytype='1' or b.PROC IN (SELECT c.tet FROM table1 c));

If so, please share the complete query that is not working, for better understanding.