Member since: 09-16-2021
Posts: 423
Kudos Received: 55
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 876 | 10-22-2025 05:48 AM |
| | 880 | 09-05-2025 07:19 AM |
| | 1686 | 07-15-2025 02:22 AM |
| | 2269 | 06-02-2025 06:55 AM |
| | 2490 | 05-22-2025 03:00 AM |
08-13-2024
10:04 PM
Thanks @ggangadharan. As far as I can see, HBase is up and running, but I found something in the HBase log:

2024-08-13 21:53:30,583 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for hive/HOST@REALM (auth:KERBEROS)
2024-08-13 21:53:30,584 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from xx.xxx.xx.xxx:55106, version=2.2.3.7.1.7.0-551, sasl=true, ugi=hive/HOST@REALM (auth:KERBEROS), service=ClientService
2024-08-13 21:53:30,584 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hive/HOST@REALM (auth:KERBEROS) for protocol=interface org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$BlockingInterface
2024-08-13 21:53:38,853 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39718
2024-08-13 21:53:38,853 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39718
2024-08-13 21:53:39,056 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39720
2024-08-13 21:53:39,056 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39720
2024-08-13 21:53:39,361 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39722
2024-08-13 21:53:39,361 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39722
2024-08-13 21:53:39,869 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39724
2024-08-13 21:53:39,870 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39724
2024-08-13 21:53:40,877 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39726
2024-08-13 21:53:40,877 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39726
2024-08-13 21:53:42,882 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39728
2024-08-13 21:53:42,882 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39728
2024-08-13 21:53:46,219 INFO org.apache.hadoop.hbase.io.hfile.LruBlockCache: totalSize=9.18 MB, freeSize=12.20 GB, max=12.21 GB, blockCount=5, accesses=7481, hits=7461, hitRatio=99.73%, , cachingAccesses=7469, cachingHits=7461, cachingHitsRatio=99.89%, evictions=2009, evicted=0, evictedPerRun=0.0
2024-08-13 21:53:46,914 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39730
2024-08-13 21:53:46,914 WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x0A\x04hi from xx.xxx.xx.xxx:39730
2024-08-13 21:53:50,477 INFO org.apache.hadoop.hbase.ScheduledChore: CompactionThroughputTuner average execution time: 8653 ns.
2024-08-13 21:53:50,572 INFO org.apache.hadoop.hbase.replication.regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2024-08-13 21:53:55,216 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for hbase/HOST@REALM (auth:KERBEROS)
2024-08-13 21:53:55,216 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from xx.xxx.xx.xxx:55174, version=2.2.3.7.1.7.0-551, sasl=true, ugi=hbase/HOST@REALM (auth:KERBEROS), service=ClientService
2024-08-13 21:53:55,216 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for hbase/HOST@REALM (auth:KERBEROS) for protocol=interface org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$BlockingInterface
2024-08-13 21:53:56,136 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping HBase metrics system...
2024-08-13 21:53:56,136 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: HBase metrics system stopped.
2024-08-13 21:53:56,638 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2024-08-13 21:53:56,641 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2024-08-13 21:53:56,641 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: HBase metrics system started

This warning (WARN org.apache.hadoop.hbase.ipc.RpcServer: Expected HEADER=HBas but received HEADER=\x00\x00\x013 from xx.xxx.xx.xxx:39730) only appears for the statement:

insert overwrite table managed_ml select key, cf1_id, cf1_name from c_0external_ml;

Other statements, like insert into c_0external_ml values (1,2,3);, run perfectly. Does this error sound familiar to you?
07-09-2024
03:18 AM
1 Kudo
@uinn Please let us know the heap size configured for both the Hive and Hive Metastore services. Does the issue happen only while executing a few specific queries, or is the JVM pause happening continuously? Does it appear at the Hive Metastore level or at the HiveServer2 level?
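If it helps, one quick way to read the currently configured heap is to inspect the Java process arguments on the node running each service. This is just a generic sketch, and the process-name patterns (hiveserver2, metastore) are assumptions that may need adjusting for your deployment:

#!/bin/bash
# Print the -Xms/-Xmx settings of the running HiveServer2 and Hive Metastore JVMs.
# The grep patterns below are assumptions; adjust them to match your process names.
for svc in hiveserver2 metastore; do
  echo "== ${svc} =="
  ps -ef | grep -i "${svc}" | grep -v grep | grep -oE -- '-Xm[sx][0-9]+[kKmMgG]?'
done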
06-07-2024
04:17 AM
1 Kudo
2. An alternative is to write a script (e.g., in Bash) that queries Hive and then converts the results into your desired output format; a minimal sketch is shown below.
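A minimal sketch along those lines, assuming a beeline client is available and using a hypothetical JDBC URL, database, and table name:

#!/bin/bash
# Query Hive through beeline and save the result as CSV.
# The JDBC URL, database, and table names are placeholders.
JDBC_URL="jdbc:hive2://hs2-host:10000/default"
OUTPUT_FILE="result.csv"

beeline -u "${JDBC_URL}" \
        --outputformat=csv2 \
        --silent=true \
        -e "SELECT * FROM my_db.my_table LIMIT 100" > "${OUTPUT_FILE}"

echo "Wrote $(wc -l < "${OUTPUT_FILE}") lines to ${OUTPUT_FILE}"

From there you can post-process the CSV with standard tools (awk, sed, a Python script, etc.) into whatever format you need.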
05-31-2024
02:17 PM
1 Kudo
@adsejnf Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
05-28-2024
09:22 PM
When an application or job that typically completes in a short time is taking significantly longer than expected, it's essential to troubleshoot systematically to identify and resolve the bottleneck. Here are some steps and areas to focus on when diagnosing performance issues in such scenarios:

1. Understand the Baseline and Gather Information
- Historical Performance Data: Compare the current run with previous runs. Identify what has changed in terms of input size, configuration, environment, etc.
- Logs and Metrics: Gather logs and metrics from the application, YARN ResourceManager, and NodeManager.

2. Monitor Resource Utilization
- CPU, Memory, and Disk Usage: Check the resource usage on the nodes running the application. High CPU, memory, or disk I/O usage can indicate bottlenecks.
- Network Utilization: Check network usage, especially if the job involves significant data transfer between nodes.

3. Examine YARN and Application Logs
- YARN Logs: Access the logs through the YARN ResourceManager web UI, or collect them with the yarn CLI (see the sketch after this list). Look for errors, warnings, and unusual delays.
- Application Master (AM) Logs: Review the AM logs for any signs of retries, timeouts, or other issues.
- Container Logs: Check the logs of individual containers for errors and performance issues.

4. Check for Resource Contention
- NodeManager Logs: Look for signs of resource contention, such as high wait times for container allocation.
- Cluster Load: Check if other jobs are running concurrently and consuming significant resources.

5. Investigate Job Configuration
- Parallelism: Ensure the job is correctly configured for parallel execution (e.g., number of mappers and reducers in a MapReduce job).
- Resource Allocation: Verify that the job has sufficient resources allocated (e.g., memory, vCores).

6. Data Skew and Distribution
- Data Skew: Analyze the input data for skew. Uneven data distribution can cause some tasks to take much longer than others.
- Task Distribution: Check if certain tasks or stages are taking disproportionately longer.

7. Network and I/O Bottlenecks
- Shuffle and Sort Phase: In Hadoop and Spark, the shuffle phase can be a bottleneck. Monitor the shuffle performance and look for skew or excessive data transfer.
- HDFS or Storage I/O: Ensure that the underlying storage (HDFS, S3, etc.) is performing optimally and there are no bottlenecks.

8. Garbage Collection and JVM Tuning
- GC Logs: If the application is JVM-based, check the garbage collection logs for excessive GC pauses.
- JVM Heap Size: Verify that the JVM heap size is appropriately configured to avoid frequent GC.

9. Configuration Parameters and Tuning
- YARN Configuration: Check for misconfigurations in YARN resource allocation settings.
- Application-specific Tuning: Tune parameters specific to the application framework (e.g., Spark, MapReduce).

10. External Dependencies
- External Services: If the application interacts with external services (e.g., databases, APIs), ensure they are not the bottleneck.
- Dependency Failures: Look for timeouts or failures in external service calls.

Detailed Steps for Specific Frameworks

For Hadoop MapReduce Jobs
- Check Job History Server: Analyze the job in the Job History Server web UI. Identify slow tasks and investigate their logs.
- Analyze Task Attempts: Look for tasks that have failed and retried multiple times. Identify tasks with unusually high execution times.

For Apache Spark Jobs
- Spark UI: Use the Spark web UI to analyze stages, tasks, and jobs. Look for stages that have long task durations or high task counts.
- Executor Logs: Check the logs of individual Spark executors for errors and warnings.
- Driver Logs: Examine the driver logs for signs of job bottlenecks or delays.

Conclusion
Systematically troubleshooting a job that is taking longer than usual involves a combination of monitoring resource utilization, examining logs, analyzing job configurations, and investigating data distribution and skew. By following these steps and using the right tools, you can identify and resolve the performance bottlenecks effectively. If the issue persists, consider breaking down the problem further or seeking help from more detailed profiling tools or experts familiar with your specific application framework and environment.
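As a starting point for step 3, here is a small sketch for pulling application logs with the standard YARN CLI. The application ID below is a placeholder, and the grep patterns are only a rough first pass:

#!/bin/bash
# List recently completed applications, then fetch the aggregated logs for one of them.
# Replace application_1700000000000_0001 with your real application ID.
yarn application -list -appStates FINISHED,FAILED,KILLED | head -n 20

yarn logs -applicationId application_1700000000000_0001 > app_logs.txt

# Quick scan for common symptoms: errors, exceptions, GC pauses, killed containers.
grep -iE 'error|exception|gc pause|killed' app_logs.txt | head -n 50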
05-28-2024
09:10 PM
- Data Loss: When you perform an INSERT OVERWRITE operation in Hive, it completely replaces the data in the target table or partition. If the data is not correctly inserted, it can result in data loss.
- Column Qualifiers: HBase stores data in a key-value format with rows, column families, and column qualifiers. Issues with specific column qualifiers could be due to schema mismatches or data type incompatibilities.
- Upserting Data: Upserting (update or insert) in HBase via Hive can be challenging, since Hive primarily supports batch processing and doesn't have native support for upsert operations, and HBase storage-handler tables are external tables.

Best Practices and Troubleshooting
- Schema Matching: Ensure that the schema of the Hive table and the HBase table match, especially the data types and column qualifiers (see the mapping sketch below).
- Data Types: Be cautious with data types. HBase stores everything as bytes, so type conversions must be handled properly.
- Error Handling: Implement proper error handling and logging to identify issues during data insertion.
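To illustrate the schema-matching point, here is a hedged sketch of a Hive external table mapped onto an existing HBase table, issued through beeline. The JDBC URL, table names, and the column family cf1 are placeholders, not your actual schema:

#!/bin/bash
# Create a Hive external table backed by an existing HBase table,
# keeping the Hive column list aligned with the HBase column mapping.
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "
CREATE EXTERNAL TABLE hive_on_hbase (
  rowkey   STRING,
  cf1_id   STRING,
  cf1_name STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:id,cf1:name')
TBLPROPERTIES ('hbase.table.name' = 'my_hbase_table');
"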
05-15-2024
05:18 AM
1 Kudo
Any tips for a solution?
03-27-2024
11:44 PM
2 Kudos
The error message indicates that Tableau is having trouble connecting to your "ShowData" data source and that there's an issue with the SQL query it is trying to run on your Hive database. Let's break down the error and potential solutions.

Error Breakdown:
- Bad Connection: Tableau can't establish a connection to the Hive database.
- Error Code B19090E0: Generic Tableau error for connection issues.
- Error Code 10002: Hive-specific error related to the SQL query.
- SQL state TStatus(statusCode:ERROR_STATUS): Hive is encountering an error during query processing.
- Invalid column reference 'tableausql.fieldname': The specific error points to an invalid column reference in the query.

Potential Solutions:
1. Verify Database Connection: Ensure the Hive server is running and accessible from Tableau. Double-check the connection details in your Tableau data source configuration, including server address, port, username, and password.
2. Review SQL Query: The error message highlights "tableausql.fieldname" as an invalid column reference. Check whether this field name actually exists in your Hive table; there might be a typo or a case-sensitivity issue. If "tableausql" is a prefix Tableau adds, ensure it's not causing conflicts with your actual column names (see the quick check below).
3. Check for Unsupported Functions: In rare cases, Tableau might try to use functions not supported by Hive.
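For the second point, a quick beeline check can confirm the actual column names and their case directly in Hive. The JDBC URL, database, and table name here are placeholders:

#!/bin/bash
# Print the schema of the Hive table Tableau is querying, so the column
# names and case can be compared against the failing query.
beeline -u "jdbc:hive2://hs2-host:10000/default" \
        -e "DESCRIBE my_db.showdata_table;"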
03-06-2024
02:16 AM
1 Kudo
Hi @Sidhartha Could you please try the following sample?

// Required imports (available out of the box in spark-shell after importing):
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types._

// Map a Hive type name to the corresponding Spark SQL DataType.
def convertDatatype(datatype: String): DataType = {
  val convert = datatype match {
    case "string" => StringType
    case "short" => ShortType
    case "int" => IntegerType
    case "bigint" => LongType
    case "float" => FloatType
    case "double" => DoubleType
    case "decimal" => DecimalType(38, 30)
    case "date" => TimestampType
    case "boolean" => BooleanType
    case "timestamp" => TimestampType
    case other => throw new IllegalArgumentException(s"Unsupported type: $other")
  }
  convert
}

// Sample rows matching the schema defined below.
val input_data = List(Row(1L, "Ranga", 27, BigDecimal(306.000000000000000000)), Row(2L, "Nishanth", 6, BigDecimal(606.000000000000000000)))
val input_rdd = spark.sparkContext.parallelize(input_data)

// Build a StructType from a "name:type" column specification.
val hiveCols = "id:bigint,name:string,age:int,salary:decimal"
val schemaList = hiveCols.split(",")
val schemaStructType = new StructType(schemaList.map(_.split(":")).map(e => StructField(e(0), convertDatatype(e(1)), true)))

val myDF = spark.createDataFrame(input_rdd, schemaStructType)
myDF.printSchema()
myDF.show()

// Cast the decimal salary column to double in a new column.
val myDF2 = myDF.withColumn("new_salary", col("salary").cast("double"))
myDF2.printSchema()
myDF2.show()
03-06-2024
12:38 AM
It seems like there might be an issue with the way you're using single quotes in the loop. The variable eachline should be expanded, but it won't be if it's enclosed in single quotes. Try using double quotes around the variable and see if that resolves the issue. Here's the corrected loop:

# Run each SQL file listed in testarray through beeline.
for eachline in "${testarray[@]}"
do
  beeline -f "${eachline}.sql"
done

This way, the value of eachline will be correctly expanded when constructing the command. Also, ensure that the SQL files are present in the correct path, or provide the full path if needed. If the issue persists, please provide more details on the error message or behavior you're experiencing for further assistance.