Member since
07-01-2015
460
Posts
78
Kudos Received
43
Solutions
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1346 | 11-26-2019 11:47 PM |
| | 1304 | 11-25-2019 11:44 AM |
| | 9471 | 08-07-2019 12:48 AM |
| | 2173 | 04-17-2019 03:09 AM |
| | 3484 | 02-18-2019 12:23 AM |
08-06-2019
08:25 AM
But from the error message it looks like the service is using SSL. Are you sure that you are not using Cloudera Manager's Auto-TLS feature? Verify it in the Impala service configuration.
08-06-2019
08:23 AM
You can use a script like this to create snapshots of old and new files, i.e. find entries older than 3 days and entries newer than 3 days. Just make sure you use the correct path to the Cloudera jars; in the case of CDH 5.15:
#!/bin/bash
# timestamp used to tag every line of this snapshot
now=`date +"%Y-%m-%dT%H:%M:%S"`
# remove the previous snapshot files
hdfs dfs -rm /data/cleanup_report/part=older3days/*
hdfs dfs -rm /data/cleanup_report/part=newer3days/*
# find entries older than 3 days, prefix each line with the timestamp and a label, and stream the result into HDFS
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar org.apache.solr.hadoop.HdfsFindTool -find /data -type d -mtime +3 | sed "s/^/${now}\tolder3days\t/" | hadoop fs -put - /data/cleanup_report/part=older3days/data.csv
# the same for entries newer than 3 days
hadoop jar /opt/cloudera/parcels/CDH/jars/search-mr-1.0.0-cdh5.15.1.jar org.apache.solr.hadoop.HdfsFindTool -find /data -type d -mtime -3 | sed "s/^/${now}\tnewer3days\t/" | hadoop fs -put - /data/cleanup_report/part=newer3days/data.csv
Then create an external table with partitions on top of this HDFS folder.
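The last step above can be sketched as follows. The table and column names are assumptions on my side (the sed commands emit three tab-separated fields: timestamp, label and path), so adjust them to your schema:

```shell
# Hypothetical DDL sketch for an external table over /data/cleanup_report.
# Column names (snapshot_ts, age_class, path) are assumptions matching the
# three tab-separated fields produced by the sed commands above.
ddl=$(cat <<'SQL'
CREATE EXTERNAL TABLE cleanup_report (
  snapshot_ts STRING,
  age_class   STRING,
  path        STRING
)
PARTITIONED BY (part STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/cleanup_report';

ALTER TABLE cleanup_report ADD PARTITION (part='older3days');
ALTER TABLE cleanup_report ADD PARTITION (part='newer3days');
SQL
)
echo "$ddl"
```

You could run this DDL through impala-shell or beeline and then compare the two partitions with a plain SELECT.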
08-06-2019
07:23 AM
Hi, I don't think it is easy to do. I tried it once, downloading and compiling from source. That part was the easier one; I just had to install some development libraries, gcc and other tools. But the issue is that Hue in CDH runs with specific versions of Python packages, especially pyOpenSSL, pysaml2, asn1crypto and others. The problem was that I had to change (upgrade/downgrade) the system packages to make the "external" Hue work, but then the other services and components stopped working. I am sorry for this generic answer, I don't have the exact details any more, I already deleted that environment. Please let me know if you find any solution to this.
08-06-2019
07:14 AM
Hi, it looks like a permission issue which is silently ignored. Can you please post the ACLs of the HDFS paths, both the root folder where the table is stored and the table path itself? Maybe you can also check the HiveServer log for any kind of permission issue. And finally I would check the NameNode logs; if the file is not deleted because of missing permissions, it will show up there as a log message. Tomas
05-18-2019
10:29 PM
No, it was just one insert, and after repeating it it succeeded, so I am not able to reproduce it, and thus there is no pattern. CDH 5.15. Can you give me a detailed hint on how to get the full stack trace (from the Impala daemon?) of the failed fragment? I don't have the query profile (already deleted), but as far as I remember one of the fragments (out of 10) was waiting for almost 2h on the HDFS sink while the others finished within a minute. Maybe it is an HDFS issue?
05-15-2019
11:54 PM
Hi,
one of our INSERT queries failed in Impala; one of the fragments could not write its data into HDFS. The message is quite descriptive, but I am not able to find the root cause of this failure: neither HDFS nor the Impala daemons reported any issue at that time.
Query Status: Failed to write data (length: 38425) to Hdfs file:
hdfs://hanameservice/data/target_table/data/_impala_insert_staging/e64692c97276103f_d0ba1f0500000000/.e64692c97276103f-d0ba1f0500000001_1863061457_dir/e64692c97276103f-d0ba1f0500000001_2018403269_data.0.parq
Error(255): Unknown error 255 Root cause: IllegalMonitorStateException:
The Impala daemon log:
I0516 06:43:06.444519 32188 krpc-data-stream-recvr.cc:557] cancelled stream: fragment_instance_id=44488c54916f20c9:261655f90000000e node_id=5
I0516 06:43:06.444725 32188 query-state.cc:412] Instance completed. instance_id=44488c54916f20c9:261655f90000000e #in-flight=3 status=OK
I0516 06:43:06.444736 32188 query-exec-mgr.cc:155] ReleaseQueryState(): query_id=44488c54916f20c9:261655f900000000 refcnt=2
I0516 06:43:06.793349 32186 query-state.cc:412] Instance completed. instance_id=44488c54916f20c9:261655f900000015 #in-flight=2 status=OK
I0516 06:43:06.793372 32186 query-exec-mgr.cc:155] ReleaseQueryState(): query_id=44488c54916f20c9:261655f900000000 refcnt=1
I0516 06:43:06.899813 10865 status.cc:125] Failed to write data (length: 38425) to Hdfs file: hdfs://hanameservice/target_table/data/_impala_insert_staging/e64692c97276103f_d0ba1f0500000000/.e64692c97276103f-d0ba1f0500000001_1863061457_dir/e64692c97276103f-d0ba1f0500000001_2018403269_data.0.parq
Error(255): Unknown error 255
Root cause: IllegalMonitorStateException:
@ 0x966e3a
@ 0x107e9fb
@ 0xe1aea3
@ 0xe1b127
@ 0xe1c54d
@ 0xdecd8c
@ 0xdedbc3
@ 0xdef090
@ 0xbadc17
@ 0xbb06af
@ 0xb9e74a
@ 0xd607ef
@ 0xd60fea
@ 0x12d8b5a
@ 0x7fa01674fdd5
@ 0x7fa016478ead
I0516 06:43:06.944746 10865 runtime-state.cc:170] Error from query e64692c97276103f:d0ba1f0500000000: Failed to close HDFS file: hdfs://hanameservice/target_table/data/_impala_insert_staging/e64692c97276103f_d0ba1f0500000000/.e64692c97276103f-d0ba1f0500000001_1863061457_dir/e64692c97276103f-d0ba1f0500000001_2018403269_data.0.parq
Error(255): Unknown error 255
Root cause: IllegalMonitorStateException:
I0516 06:43:06.966231 10865 query-state.cc:412] Instance completed. instance_id=e64692c97276103f:d0ba1f0500000001 #in-flight=1 status=GENERAL: Failed to write data (length: 38425) to Hdfs file: hdfs://hanameservice/target_table/data/_impala_insert_staging/e64692c97276103f_d0ba1f0500000000/.e64692c97276103f-d0ba1f0500000001_1863061457_dir/e64692c97276103f-d0ba1f0500000001_2018403269_data.0.parq
Error(255): Unknown error 255
Root cause: IllegalMonitorStateException:
I0516 06:43:06.966250 10865 query-state.cc:425] Cancel: query_id=e64692c97276103f:d0ba1f0500000000
Any hints what can be the root cause of this issue?
Thanks
Labels:
- Apache Impala
- HDFS
05-15-2019
11:31 PM
Hi Cloudera, I see a lot of these warnings in Impala Daemon logs: W0516 07:12:24.227567 1049 ShortCircuitCache.java:826] ShortCircuitCache(0x119fb869): could not load 1399296933_BP-76826636-10.197.31.86-1501521881839 due to InvalidToken exception. Does it indicate some bad configuration? What can I do to eliminate these warnings? Thanks
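For context, these are the client-side hdfs-site.xml properties that I believe control the short-circuit read cache this warning comes from; the values below are illustrative assumptions, not a confirmed fix:

```xml
<!-- Sketch of client-side hdfs-site.xml settings for short-circuit reads.
     Property names are from the HDFS documentation; the values shown are
     illustrative assumptions, not recommendations. -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <!-- How long cached short-circuit streams are kept. If cached entries
       outlive the block access token lifetime, reloads can fail with
       InvalidToken, which the client then recovers from. -->
  <name>dfs.client.read.shortcircuit.streams.cache.expiry.ms</name>
  <value>300000</value>
</property>
```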
Labels:
- Apache Impala
04-17-2019
03:35 AM
1 Kudo
Make sure that the command above returns not just the short name of the server but the fully qualified domain name, in your case "mugzy.c.essential-rider-208218.internal". You can do this by editing the /etc/hosts file, or check the GCP documentation on FQDNs for VMs for your specific Linux OS.
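As a quick illustration of the expected /etc/hosts layout (the IP below is a made-up assumption; the FQDN is the one from your output), the first name after the IP address should be the fully qualified one:

```shell
# Hypothetical /etc/hosts line: IP, then the FQDN first, then the short name.
line="10.128.0.2 mugzy.c.essential-rider-208218.internal mugzy"
# The FQDN is the second field of that line:
fqdn=$(echo "$line" | awk '{print $2}')
echo "$fqdn"
```

With the file laid out like this, `hostname -f` returns the FQDN while `hostname` returns just the short name.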
04-10-2019
08:25 AM
Any hints, Cloudera guys? Thanks