Member since: 09-16-2021
Posts: 143
Kudos Received: 6
Solutions: 17
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 196 | 11-06-2023 03:10 AM
 | 103 | 10-30-2023 07:17 AM
 | 149 | 10-27-2023 12:07 AM
 | 213 | 10-10-2023 10:57 AM
 | 193 | 10-10-2023 10:50 AM
11-06-2023
05:11 AM
The error message "HiveServer2Error: ImpalaRuntimeException: Error making 'add_partitions' RPC to Hive Metastore" typically indicates a problem when Impala, a distributed SQL query engine, tries to interact with the Hive Metastore service to add partitions. This error can be caused by several factors, and it usually points to an issue with the Hive Metastore service or the interaction between Impala and Hive. Here are some common causes and troubleshooting steps for this error: Hive Metastore Service Issues: Check if the Hive Metastore service is up and running. You should ensure that the Hive Metastore service is started and healthy. Verify that the Hive Metastore service is reachable from the machine where Impala is running. Network issues or firewall rules could prevent proper communication. Metastore Configuration: Verify the Metastore configuration in the Impala configuration files (impala-site.xml). Ensure that the Metastore URIs and authentication settings are correctly configured. Metastore Database Issues: Check the health and availability of the underlying database used by the Hive Metastore. Ensure that it's accessible, and there are no database connection issues. Verify that the Metastore database is not overwhelmed or experiencing performance problems. Authorization and Authentication: Verify that the Impala service has the necessary privileges and permissions to interact with the Hive Metastore. Check if Kerberos authentication is enabled, and ensure that the necessary credentials and keytabs are correctly configured. Log Analysis: Examine the logs of both Impala and Hive Metastore services for more detailed error messages. The logs may provide additional information about the root cause of the issue. Resource Limitations: Check if there are any resource limitations (e.g., memory, CPU) on the machines running Impala and the Hive Metastore. Resource shortages can lead to RPC failures. Software Versions: Ensure that Impala and Hive are compatible in terms of versions and dependencies. An incompatible combination of software versions can lead to errors. Cluster Issues: If you are running Impala in a distributed cluster, verify the overall health of the cluster. Other cluster-level issues can sometimes affect the interaction with the Hive Metastore. Network Issues: Check for network-related problems, such as DNS resolution or proxy settings, which can impede communication between Impala and the Hive Metastore. Database Locks: Database locks in the Metastore can sometimes cause issues. Check if there are any locks in the Hive Metastore database. If you have access to detailed logs or additional error messages, those can be particularly helpful in diagnosing the specific problem that led to this error. Depending on your environment and configurations, the resolution may involve addressing one or more of the above factors.
11-06-2023
03:10 AM
1 Kudo
The behavior you're observing comes from the precision difference between the STRING and FLOAT data types. When you cast a STRING to a FLOAT, Hive represents the value as accurately as it can within the constraints of a FLOAT, but FLOATs have limited precision and the fractional part may not be represented exactly. In your example, "5724.95" cast to FLOAT was stored as "5724.9501953125". This discrepancy is due to the way binary floating-point numbers work: certain decimal values cannot be represented exactly.

If you need exact decimal representation, use the DECIMAL data type instead of FLOAT. DECIMAL provides higher precision and is better suited for scenarios where the exact decimal value must be preserved. Here's how you can cast your STRING column to DECIMAL:

SELECT a, CAST(a AS DECIMAL(20, 10)) AS exact_value
FROM your_table;

Here DECIMAL(20, 10) is a decimal type with a total width of 20 digits and 10 decimal places, which preserves the exact decimal representation you need. Keep in mind that DECIMAL has higher storage requirements than FLOAT because it maintains precision, so choose the data type based on your requirements.

Example:

0: jdbc:hive2://nightly-71x-zg-2.nightly-71x-> SELECT CAST('5724.9501953125' AS DECIMAL(20, 10)) AS decimal_value ;
INFO : Compiling command(queryId=hive_20231106110627_aaa98b66-4db6-4307-9be3-598018c13fbf): SELECT CAST('5724.9501953125' AS DECIMAL(20, 10)) AS decimal_value
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:[FieldSchema(name:decimal_value, type:decimal(20,10), comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20231106110627_aaa98b66-4db6-4307-9be3-598018c13fbf); Time taken: 0.062 seconds
INFO : Executing command(queryId=hive_20231106110627_aaa98b66-4db6-4307-9be3-598018c13fbf): SELECT CAST('5724.9501953125' AS DECIMAL(20, 10)) AS decimal_value
INFO : Completed executing command(queryId=hive_20231106110627_aaa98b66-4db6-4307-9be3-598018c13fbf); Time taken: 0.006 seconds
INFO : OK
+------------------+
| decimal_value |
+------------------+
| 5724.9501953125 |
+------------------+
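To see the precision difference side by side, a quick check along these lines casts the same literal to both types (the JDBC URL is a placeholder for your own HiveServer2 connection string):

# Compare the FLOAT and DECIMAL representations of the same literal
beeline -u "jdbc:hive2://your-hs2-host:10000/default" \
  -e "SELECT CAST('5724.95' AS FLOAT) AS float_value, CAST('5724.95' AS DECIMAL(10,2)) AS decimal_value;"
# As described above, the FLOAT column comes back as 5724.9501953125 while the DECIMAL stays 5724.95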
10-31-2023
12:21 AM
The error message you're encountering, "java.util.LinkedList cannot be cast to org.apache.hive.hcatalog.mapreduce.InputJobInfo", typically occurs when there is a mismatch between the data types or structures during a Sqoop import from MySQL to Hive. Here are some steps to troubleshoot and potentially resolve the issue:

Check Hive and HCatalog compatibility: Ensure that the versions of Hive and HCatalog you are using are compatible with your version of Sqoop. If there is a version mismatch, consider upgrading or downgrading one of them.

Check your SQL query: Review the SQL query you're using with Sqoop to import data. Ensure that it is correctly configured and that the source and target tables are correctly specified.

Check data types: Ensure that the data types of the source MySQL table match the data types of the target Hive table; inconsistencies in data types can lead to this error.

Check field mapping: Verify that the fields in your MySQL table match the columns in your Hive table in number and order, with no extra or missing columns.
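For reference, a minimal Sqoop-to-Hive import looks like the sketch below; the connection string, credentials, and table names are placeholders to adapt to your environment. Comparing your own command against a stripped-down form like this can help isolate whether the failure comes from extra HCatalog options or from mismatched metadata:

sqoop import \
  --connect jdbc:mysql://mysql-host.example.com:3306/sourcedb \
  --username myuser -P \
  --table source_table \
  --hive-import \
  --hive-table target_db.target_table \
  --num-mappers 1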
10-30-2023
07:17 AM
It appears that the JSON data contains multiple application entries within a single line, presented as struct data. This format makes schema creation challenging. To address it, you can use Spark to flatten the schema and store the data in Hive, which lets you query the data conveniently from either Hive or Spark.

Read the JSON data:

df = spark.read.json("/user/hive/app_data_sample_data.json")

First, explode the "app" array to separate rows:

from pyspark.sql.functions import col, explode, lit, struct
exploded_df = df.select(
explode(col("apps.app")).alias("app")
)

Flatten and transform the exploded DataFrame:
flattened_df = exploded_df.select(
col("app.id").alias("id"),
col("app.user").alias("user"),
col("app.name").alias("name"),
col("app.queue").alias("queue"),
col("app.state").alias("state"),
col("app.finalstatus").alias("finalstatus"),
col("app.progress").alias("progress"),
col("app.trackingui").alias("trackingui"),
col("app.trackingurl").alias("trackingurl"),
col("app.diagnostics").alias("diagnostics"),
col("app.clusterid").alias("clusterid"),
col("app.applicationtype").alias("applicationtype"),
col("app.applicationtags").alias("applicationtags"),
col("app.priority").alias("priority"),
col("app.startedtime").alias("startedtime"),
col("app.launchtime").alias("launchtime"),
col("app.finishedtime").alias("finishedtime"),
col("app.elapsedtime").alias("elapsedtime"),
col("app.amcontainerlogs").alias("amcontainerlogs"),
col("app.amhosthttpaddress").alias("amhosthttpaddress"),
col("app.amrpcaddress").alias("amrpcaddress"),
col("app.masternodeid").alias("masternodeid"),
col("app.allocatedmb").alias("allocatedmb"),
col("app.allocatedvcores").alias("allocatedvcores"),
col("app.reservedmb").alias("reservedmb"),
col("app.reservedvcores").alias("reservedvcores"),
col("app.runningcontainers").alias("runningcontainers"),
col("app.memoryseconds").alias("memoryseconds"),
col("app.vcoreseconds").alias("vcoreseconds"),
col("app.queueusagepercentage").alias("queueusagepercentage"),
col("app.clusterusagepercentage").alias("clusterusagepercentage"),
col("app.preemptedresourcemb").alias("preemptedresourcemb"),
col("app.preemptedresourcevcores").alias("preemptedresourcevcores"),
col("app.numnonamcontainerpreempted").alias("numnonamcontainerpreempted"),
col("app.numamcontainerpreempted").alias("numamcontainerpreempted"),
col("app.preemptedmemoryseconds").alias("preemptedmemoryseconds"),
col("app.preemptedvcoreseconds").alias("preemptedvcoreseconds"),
col("app.logaggregationstatus").alias("logaggregationstatus"),
col("app.unmanagedapplication").alias("unmanagedapplication"),
col("app.amnodelabelexpression").alias("amnodelabelexpression"),
struct(
lit("lifetime").alias("type"),
lit("unlimited").alias("expirytime"),
lit(-1).alias("remainingtimeinseconds")
).alias("timeouts")
)

Validate the flattened DataFrame:

flattened_df.show(truncate=False)

If the data looks good, save it as a table:

flattened_df.write.mode('overwrite').saveAsTable("app_data")

Query from Hive (beeline):

+---------------------------------+----------------+-------------------------------------------+-------------------+-----------------+-----------------------+--------------------+----------------------+----------------------------------------------------+----------------------------------------------------+---------------------+---------------------------+----------------------------------------------------+--------------------+-----------------------+----------------------+------------------------+-----------------------+----------------------------------------------------+-----------------------------+------------------------+------------------------+-----------------------+---------------------------+----------------------+--------------------------+-----------------------------+-------------------------+------------------------+--------------------------------+----------------------------------+-------------------------------+-----------------------------------+--------------------------------------+-----------------------------------+----------------------------------+---------------------------------+--------------------------------+--------------------------------+---------------------------------+----------------------------------------------------+
| app_data.id | app_data.user | app_data.name | app_data.queue | app_data.state | app_data.finalstatus | app_data.progress | app_data.trackingui | app_data.trackingurl | app_data.diagnostics | app_data.clusterid | app_data.applicationtype | app_data.applicationtags | app_data.priority | app_data.startedtime | app_data.launchtime | app_data.finishedtime | app_data.elapsedtime | app_data.amcontainerlogs | app_data.amhosthttpaddress | app_data.amrpcaddress | app_data.masternodeid | app_data.allocatedmb | app_data.allocatedvcores | app_data.reservedmb | app_data.reservedvcores | app_data.runningcontainers | app_data.memoryseconds | app_data.vcoreseconds | app_data.queueusagepercentage | app_data.clusterusagepercentage | app_data.preemptedresourcemb | app_data.preemptedresourcevcores | app_data.numnonamcontainerpreempted | app_data.numamcontainerpreempted | app_data.preemptedmemoryseconds | app_data.preemptedvcoreseconds | app_data.logaggregationstatus | app_data.unmanagedapplication | app_data.amnodelabelexpression | app_data.timeouts |
+---------------------------------+----------------+-------------------------------------------+-------------------+-----------------+-----------------------+--------------------+----------------------+----------------------------------------------------+----------------------------------------------------+---------------------+---------------------------+----------------------------------------------------+--------------------+-----------------------+----------------------+------------------------+-----------------------+----------------------------------------------------+-----------------------------+------------------------+------------------------+-----------------------+---------------------------+----------------------+--------------------------+-----------------------------+-------------------------+------------------------+--------------------------------+----------------------------------+-------------------------------+-----------------------------------+--------------------------------------+-----------------------------------+----------------------------------+---------------------------------+--------------------------------+--------------------------------+---------------------------------+----------------------------------------------------+
| application_282828282828_12717 | xyz | xyz-4b6bdae2-1a0c-4772-bd8e-0d7454268b82 | root.users.dummy | finished | succeeded | 100.0 | history | http://dang:8088/proxy/application_282828282828_12717/ | session stats:submitteddags=1, successfuldags=1, faileddags=0, killeddags=0
| 282828282828 | aquaman | ABC,xyz_20221107070124_2beb5d90-24c7-4b1b-b977-3c9af1397195,userid=dummy | 0 | 1667822485626 | 1667822485767 | 1667822553365 | 67739 | http://dingdong:8042/node/containerlogs/container_e65_282828282828_12717_01_000001/xyz | dingdong:8042 | dingdong:46457 | dingdong:8041 | -1 | -1 | -1 | -1 | -1 | 1264304 | 79 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 | succeeded | false | | {"type":"lifetime","expirytime":"unlimited","remainingtimeinseconds":-1} |
| application_282828282828_12724 | xyz | xyz-94962a3e-d230-4fd0-b68b-01b59dd3299d | root.users.dummy | finished | succeeded | 100.0 | history | http://dang:8088/proxy/application_282828282828_12724/ | session stats:submitteddags=1, successfuldags=1, faileddags=0, killeddags=0
| 282828282828 | aquaman | ZZZ_,xyz_20221107070301_e6f788db-e39c-49b6-97d5-6a02ff994c00,userid=dummy | 0 | 1667822585231 | 1667822585437 | 1667822631435 | 46204 | http://ding:8042/node/containerlogs/container_e65_282828282828_12724_01_000002/xyz | ding:8042 | ding:46648 | ding:8041 | -1 | -1 | -1 | -1 | -1 | 5603339 | 430 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 | time_out | false | | {"type":"lifetime","expirytime":"unlimited","remainingtimeinseconds":-1} |
| application_282828282828_12736 | xyz | xyz-1a9c73ef-2992-40a5-aaad-9f0688bb04f4 | root.users.dummy | finished | succeeded | 100.0 | history | http://dang:8088/proxy/application_282828282828_12736/ | session stats:submitteddags=1, successfuldags=1, faileddags=0, killeddags=0
| 282828282828 | aquaman | BLAHBLAH,xyz_20221107070609_8d261352-3efa-46c5-a5a0-8a3cd745d180,userid=dummy | 0 | 1667822771170 | 1667822773663 | 1667822820351 | 49181 | http://dong:8042/node/containerlogs/container_e65_282828282828_12736_01_000001/xyz | dong:8042 | dong:34266 | dong:8041 | -1 | -1 | -1 | -1 | -1 | 1300011 | 89 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 | succeeded | false | | {"type":"lifetime","expirytime":"unlimited","remainingtimeinseconds":-1} |
| application_282828282828_12735 | xyz | xyz-d5f56a0a-9c6b-4651-8f88-6eaff5953777 | root.users.dummy | finished | succeeded | 100.0 | history | http://dang:8088/proxy/application_282828282828_12735/ | session stats:submitteddags=1, successfuldags=1, faileddags=0, killeddags=0
| 282828282828 | aquaman | HAHAHA_,xyz_20221107070605_a082d9d8-912f-4278-a2ef-5dfe66089fd7,userid=dummy | 0 | 1667822766897 | 1667822766999 | 1667822796759 | 29862 | http://dung:8042/node/containerlogs/container_e65_282828282828_12735_01_000001/xyz | dung:8042 | dung:42765 | dung:8041 | -1 | -1 | -1 | -1 | -1 | 669695 | 44 | 0.0 | 0.0 | 0 | 0 | 0 | 0 | 0 | 0 | succeeded | false | | {"type":"lifetime","expirytime":"unlimited","remainingtimeinseconds":-1} |
+---------------------------------+----------------+-------------------------------------------+-------------------+-----------------+-----------------------+--------------------+----------------------+----------------------------------------------------+----------------------------------------------------+---------------------+---------------------------+----------------------------------------------------+--------------------+-----------------------+----------------------+------------------------+-----------------------+----------------------------------------------------+-----------------------------+------------------------+------------------------+-----------------------+---------------------------+----------------------+--------------------------+-----------------------------+-------------------------+------------------------+--------------------------------+----------------------------------+-------------------------------+-----------------------------------+--------------------------------------+-----------------------------------+----------------------------------+---------------------------------+--------------------------------+--------------------------------+---------------------------------+----------------------------------------------------+
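For reference, the output above can be produced from beeline with a plain query once the table has been saved (the JDBC URL is a placeholder):

# Query the flattened table created by saveAsTable("app_data")
beeline -u "jdbc:hive2://your-hs2-host:10000/default" -e "SELECT * FROM app_data;"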
10-27-2023
12:07 AM
The error message "Unknown HS2 problem when communicating with Thrift server" typically indicates that there is an issue when trying to communicate with the Hive Server 2 (HS2) through its Thrift interface. This error can occur for various reasons, and troubleshooting it may require some investigation. Here are some common steps to help resolve this issue: Check Hive Server Status: Ensure that the Hive Server 2 is up and running and that it's reachable from your client. You can check its status and logs to see if there are any errors or issues reported. Network Connectivity: Verify that there are no network-related issues that might be preventing your client from connecting to the Hive Server. Check firewalls, network configurations, and any potential network interruptions. Hive Configuration: Review the Hive server's configuration to ensure it's correctly set up. Pay attention to security configurations, like authentication and authorization settings. Thrift Protocol Version: Ensure that the Thrift protocol version used by your client matches the version supported by the Hive Server. Mismatched protocol versions can lead to communication problems. Client-Side Issues: Check the client application or code you are using to interact with Hive. Ensure that it's properly configured and making the correct requests to the Hive Server. Logs and Error Messages: Examine the logs and error messages in more detail to get specific information about what might be causing the problem. This can help pinpoint the issue. Server Version Compatibility: Ensure that the client and server components (Hive client and Hive Server) are compatible in terms of versions. Incompatible versions can lead to communication issues. Authentication and Authorization: If your Hive server is configured with authentication and authorization, ensure that you have the necessary permissions and credentials to access it. Load and Resource Constraints: Check if the Hive Server is under heavy load or if there are resource constraints that might be affecting its ability to respond to client requests. Driver and Libraries: Ensure that you are using the correct driver or libraries for your client application. If you're using JDBC or ODBC, make sure the corresponding driver is installed and configured correctly. If you continue to face issues after performing these checks, it may be necessary to provide more specific error messages or details about your environment to diagnose the problem further.
10-26-2023
11:49 PM
Please follow the article below and validate your configuration against it: https://my.cloudera.com/knowledge/How-to-configure-HDFS-and-Hive-to-use-different-JCEKS-and?id=326056
10-26-2023
11:36 PM
It seems you are facing a situation where Query 1 returns results, Query 2 (with an additional field) does not, yet SELECT * returns results, and trimming the condition fields also returns results. This behavior can be attributed to the way the queries are constructed:

Query 1: This query specifies certain conditions and fields that match records in your database.

Query 2: Adding the field af.unq_id_src_stm to the SELECT statement changes the query. It is possible that the additional field causes the query to match no records, due to the way the data is structured or the filter conditions.

Using SELECT *: Selecting all fields returns the complete result set, including fields that matter for the join conditions or other aspects of the query.

Trimming condition fields: Removing condition fields changes the filter criteria, so the query may return rows that were previously excluded by those conditions.

To resolve the issue in Query 2, carefully review the additional field you added and ensure it doesn't unintentionally affect the join conditions or filter criteria. Also confirm that the data you are querying contains the values specified in the conditions and in the new field. Finally, consider whether the additional field is really needed for your analysis; if it is not, removing it should give you the results you expect.
10-13-2023
04:35 AM
Verify the submission queue for application_1440861466017_0007 and ensure it has sufficient resources to launch the application.
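One way to check the application and its queue from the command line, assuming you have access to the YARN CLI (the queue name below is a placeholder):

# Show the application's current state, queue, and diagnostics
yarn application -status application_1440861466017_0007

# Inspect the capacity and current usage of the target queue
yarn queue -status <queue_name>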
10-13-2023
04:32 AM
In Hadoop, you can use HDFS shell commands in a shell script to remove files that meet certain criteria, such as being older than a given number of days, or to clean out a folder once it holds more than a given number of files. Here's how you can do it.

To remove the files in a folder once it contains more than 100 files:

# hadoop fs -count prints: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
count=$(hadoop fs -count <folder_path> | awk '{print $2}')
if [ "$count" -gt 100 ]; then
  # Delete the files in the folder (hadoop fs expands the * glob on HDFS)
  hadoop fs -rm "<folder_path>/*"
fi

hadoop fs -count <folder_path>: retrieves the directory count, file count, and content size for the specified folder.
awk '{print $2}': picks out the file count from that output.
hadoop fs -rm <folder_path>/*: deletes the files in the folder when the count exceeds 100.

To remove all files older than 10 days in a folder:

hadoop fs -ls <folder_path> | awk -v cutoff="$(date -d '10 days ago' +%Y-%m-%d)" '$1 ~ /^-/ && $6 < cutoff {print $8}' | xargs -r -I {} hadoop fs -rm {}

hadoop fs -ls <folder_path>: lists the contents of the specified folder; the modification date is in column 6 and the path in column 8.
awk -v cutoff=...: computes the date 10 days ago (GNU date) and selects files (lines starting with "-") whose modification date is older than that cutoff.
xargs -r -I {} hadoop fs -rm {}: reads the selected paths and deletes them with hadoop fs -rm; -r skips the delete when nothing matches.
10-10-2023
10:57 AM
1 Kudo
In Hive, you can achieve a result similar to the UNPIVOT operation in SQL Server by using LATERAL VIEW OUTER explode to split the columns into rows. Exploding a map keeps each original column name alongside its value. Here's how you can convert your SQL Server query to Hive:

SELECT x, y, z
FROM tbl1
LATERAL VIEW OUTER explode(map('y1', y1, 'y2', y2, 'y3', y3, 'y4', y4, 'y5', y5, 'y6', y6, 'y7', y7, 'y8', y8, 'y9', y9, 'y10', y10)) tbl AS y, z;

In this Hive query:

LATERAL VIEW OUTER explode is used to split the values from columns y1 to y10 into separate rows.
The AS clause assigns aliases to the generated columns: y holds the original column name and z holds its value, while x remains unchanged. Note that Hive has no dbo schema prefix, so the table is referenced simply as tbl1.

This produces a result similar to the UNPIVOT operation in SQL Server, where the values from columns y1 to y10 are split into separate rows along with their corresponding x and y values.

In Hive, you can achieve a result similar to the PIVOT operation in SQL Server by using conditional aggregation with CASE statements. Here's how you can convert your SQL Server query to Hive:

SELECT *
FROM (
SELECT a, b, c, cbn_TYPE
FROM tbl2
) SRC
LEFT JOIN (
SELECT
a,
SUM(CASE WHEN cbn_TYPE = 'ONE TQ FOUR' THEN TOTAL_AMOUNT ELSE 0 END) AS ONE_TQ_FOUR,
SUM(CASE WHEN cbn_TYPE = 'going loss' THEN TOTAL_AMOUNT ELSE 0 END) AS going_loss,
SUM(CASE WHEN cbn_TYPE = 'COSTS LEAVING team sales' THEN TOTAL_AMOUNT ELSE 0 END) AS COSTS_LEAVING_team_sales,
SUM(CASE WHEN cbn_TYPE = 'profit' THEN TOTAL_AMOUNT ELSE 0 END) AS profit,
SUM(CASE WHEN cbn_TYPE = 'check money' THEN TOTAL_AMOUNT ELSE 0 END) AS check_money
FROM tbl2
GROUP BY a
) PIV
ON SRC.a = PIV.a;

In this Hive query:

We first create an intermediate result set (PIV) that calculates the sums for each cbn_TYPE using conditional aggregation (SUM with CASE statements).
The LEFT JOIN combines the original source table (SRC) with the aggregated result (PIV) on the common column a.
The result has columns a, b, c, and the pivoted columns ONE_TQ_FOUR, going_loss, COSTS_LEAVING_team_sales, profit, and check_money, similar to the PIVOT operation in SQL Server.

This query essentially performs a manual pivot in Hive: conditional aggregation calculates the sum for each cbn_TYPE, and the results are joined back to the original table.

In Hive, you can use COALESCE with a CASE statement to achieve the same result as the SQL Server expression NULLIF(ISNULL(abc.Tc, 0) + ISNULL(abc.YR, 0), 0). COALESCE plays the role of ISNULL, so a NULL in either column is treated as 0 rather than making the whole sum NULL. Here's the equivalent Hive query:

SELECT
  CASE
    WHEN COALESCE(abc.Tc, 0) + COALESCE(abc.YR, 0) = 0 THEN NULL
    ELSE COALESCE(abc.Tc, 0) + COALESCE(abc.YR, 0)
  END AS result
FROM your_table AS abc;

In this Hive query:

COALESCE(abc.Tc, 0) and COALESCE(abc.YR, 0) substitute 0 for NULL, matching ISNULL in SQL Server.
If the resulting sum is 0, the CASE returns NULL; otherwise it returns the sum.

This replicates the behavior of the NULLIF(ISNULL(abc.Tc, 0) + ISNULL(abc.YR, 0), 0) expression in SQL Server, providing a Hive-compatible way to achieve the same result. If your Hive version supports NULLIF, you can also write NULLIF(COALESCE(abc.Tc, 0) + COALESCE(abc.YR, 0), 0) directly.