Member since: 07-01-2015
Posts: 460
Kudos Received: 78
Solutions: 43
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1346 | 11-26-2019 11:47 PM
 | 1304 | 11-25-2019 11:44 AM
 | 9471 | 08-07-2019 12:48 AM
 | 2175 | 04-17-2019 03:09 AM
 | 3485 | 02-18-2019 12:23 AM
07-21-2017
02:45 AM
The latest version of Director (2.4, I assume). I really just took the simple wizard, went through it with 1 master, 3 workers, 1 gateway, installed the cluster, and checked the Java version on every host (1.6). AMI CentOS 7; I tried the regions eu-west-1 and us-east-1.
07-20-2017
04:50 AM
I wanted to start a new thread, but I have a very similar question. Why is Cloudera Manager / Director deploying Java 1.6 when Cloudera clearly states that the lowest supported version is 1.7? Thanks for the clarification. https://www.cloudera.com/documentation/enterprise/5-4-x/topics/cdh_ig_req_supported_versions.html#concept_pdd_kzf_vp
06-27-2017
11:18 AM
The catalog did not fail, but you are right: it is just a matter of time until the kernel on my master is updated, so it would probably crash as well.
06-27-2017
08:35 AM
3 Kudos
Found out that it is related to this issue: https://issues.apache.org/jira/browse/DAEMON-363
Editing the Impala Daemon properties in CM (Impala Daemon Environment Advanced Configuration Snippet (Safety Valve)) to set:
JAVA_TOOL_OPTIONS=-Xss2m
fixed the problem.
06-27-2017
08:21 AM
Hi, after updating my data nodes and kernel and restarting the cluster, Impala failed to start the daemons. I tried restarting the Impala daemon, but it did not help. Also tested on CDH 5.10 and CDH 5.11.1. I tried installing a different version of Java as well (a downgrade), which didn't help either. Running CentOS 7 and CDH 5.11.1. Any suggestions on how to avoid this error? An OS reinstall is my last option, but I do not want to wipe the whole cluster.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007fa6b9f80c18, pid=3819, tid=0x00007fa6cfdb4900
#
# JRE version: (8.0_131-b11) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# j java.lang.Object.<clinit>()V+0
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x0000000004c94000): JavaThread "Unknown thread" [_thread_in_Java, id=3819, stack(0x00007ffc40b88000,0x00007ffc40c88000)]
siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 0x00007ffc40c77420
Registers:
RAX=0x00007fa6b50f5a68, RBX=0x00007fa6b5047ca8, RCX=0x0000000000000008, RDX=0x00007fa6cf0fff30
RSP=0x00007ffc40c7f420, RBP=0x00007ffc40c7f460, RSI=0x0000000000000004, RDI=0x0000000004c94000
R8 =0x0000000000000000, R9 =0x0000000000000003, R10=0x0000000000000000, R11=0x0000000000000002
R12=0x0000000000000000, R13=0x00007fa6b5047c98, R14=0x00007ffc40c7f468, R15=0x0000000004c94000
RIP=0x00007fa6b9f80c18, EFLAGS=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000006
TRAPNO=0x000000000000000e
Top of Stack: (sp=0x00007ffc40c7f420)
0x00007ffc40c7f420: 00007ffc40c7f420 00007fa6b5047c98
0x00007ffc40c7f430: 00007ffc40c7f468 00007fa6b50f1040
0x00007ffc40c7f440: 0000000000000000 00007fa6b5047ca8
0x00007ffc40c7f450: 0000000000000000 00007ffc40c7f470
0x00007ffc40c7f460: 00007ffc40c7f4d0 00007fa6b9f6e4e7
0x00007ffc40c7f470: 00007ffc00001fa0 0000000000000000
0x00007ffc40c7f480: 0000000004c94000 00007ffc40c7f550
0x00007ffc40c7f490: 00007fa6b5047ca8 00007ffc40c7f510
0x00007ffc40c7f4a0: 00007ffc40c7f510 00007ffc40c7f6e8
0x00007ffc40c7f4b0: 00007fa60000000a 00007fa6b5047ca8
0x00007ffc40c7f4c0: 00007fa6b9f809c0 00007ffc40c7f658
0x00007ffc40c7f4d0: 00007ffc40c7f640 00007fa6ce7cfd16
0x00007ffc40c7f4e0: 0000000000000000 0000000004c94000
0x00007ffc40c7f4f0: 00007ffc40c7f650 00007ffc40c7f6e0
0x00007ffc40c7f500: 00007fa6b9f809c0 00007fa60000000a
0x00007ffc40c7f510: 0000000004c94000 0000000004b78140
0x00007ffc40c7f520: 00007fa6b5047ca8 0000000000000000
0x00007ffc40c7f530: 0000000000000000 0000000000000000
0x00007ffc40c7f540: 0000000000000000 00007ffc40c7f6e0
0x00007ffc40c7f550: 0000000004c94000 0000000004b65b40
0x00007ffc40c7f560: 0000000004b5c5a0 0000000004b5c5c0
0x00007ffc40c7f570: 0000000004b5c688 00000000000000d8
0x00007ffc40c7f580: 00007ffc40c7f830 0000000004c94000
0x00007ffc40c7f590: 00007fa6b5047ca8 0000000004c94000
0x00007ffc40c7f5a0: 0000000004b618d0 00007fa6b5049648
0x00007ffc40c7f5b0: 00007fa6b5047ca8 0000000004c94000
0x00007ffc40c7f5c0: 00007ffc40c7f720 00007fa6ce910043
0x00007ffc40c7f5d0: 0000000004c94000 00007fa6ce9f1e67
0x00007ffc40c7f5e0: 00007fa6b5047ca8 0000000004c94000
0x00007ffc40c7f5f0: 00007ffc40c7f6d0 0000000000000000
0x00007ffc40c7f600: 00007fa6b5047ca8 0000000004c94000
0x00007ffc40c7f610: 0000000004b5c5a0 00007ffc40c7f650
Instructions: (pc=0x00007fa6b9f80c18)
0x00007fa6b9f80bf8: 00 d0 ff ff 89 84 24 00 c0 ff ff 89 84 24 00 b0
0x00007fa6b9f80c08: ff ff 89 84 24 00 a0 ff ff 89 84 24 00 90 ff ff
0x00007fa6b9f80c18: 89 84 24 00 80 ff ff 89 84 24 00 70 ff ff 89 84
0x00007fa6b9f80c28: 24 00 60 ff ff 89 84 24 00 50 ff ff 89 84 24 00
Register to memory mapping:
RAX=0x00007fa6b50f5a68 is pointing into metadata
RBX={method} {0x00007fa6b5047ca8} '<clinit>' '()V' in 'java/lang/Object'
RCX=0x0000000000000008 is an unknown value
RDX=0x00007fa6cf0fff30: <offset 0xfc1f30> in /usr/java/jdk1.8.0_131/jre/lib/amd64/server/libjvm.so at 0x00007fa6ce13e000
RSP=0x00007ffc40c7f420 is pointing into the stack for thread: 0x0000000004c94000
RBP=0x00007ffc40c7f460 is pointing into the stack for thread: 0x0000000004c94000
RSI=0x0000000000000004 is an unknown value
RDI=0x0000000004c94000 is a thread
R8 =0x0000000000000000 is an unknown value
R9 =0x0000000000000003 is an unknown value
VM Arguments:
jvm_args: -Djava.library.path=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/bin/../lib/impala/lib
java_command: <unknown>
java_class_path (initial): /usr/share/java/mysql-connector-java.jar:/usr/share/cmf/lib/postgresql-9.0-801.jdbc4.jar:/usr/share/java/oracle-connector-java.jar:/var/lib/impala/*.jar:/usr/share/java/mysql-connector-java.jar:/run/cloudera-scm-agent/process/319-impala-IMPALAD/impala-conf:/run/cloudera-scm-agent/process/319-impala-IMPALAD/hadoop-conf:/run/cloudera-scm-agent/process/319-impala-IMPALAD/hive-conf:/run/cloudera-scm-agent/process/319-impala-IMPALAD/hbase-conf:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/libthrift-0.9.0.jar::/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/ST4-4.0.4.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/ant-1.5.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/ant-1.9.1.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/ant-contrib-1.0b3.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/ant-launcher-1.9.1.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/antlr-2.7.7.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/antlr-runtime-3.3.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/apache-log4j-extras-1.2.17.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/asm-3.1.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib/asm-commons-3.1.jar:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/
Launcher Type: generic
Environment Variables:
JAVA_HOME=/usr/java/jdk1.8.0_131
JAVA_TOOL_OPTIONS=
PATH=/sbin:/usr/sbin:/bin:/usr/bin
LD_LIBRARY_PATH=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/lib:/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/impala/sbin-retail:/usr/java/jdk1.8.0_131/jre/lib/amd64:/usr/java/jdk1.8.0_131/jre/lib/amd64:/usr/java/jdk1.8.0_131/jre/lib/amd64/server:
SHELL=/bin/bash
Signal Handlers:
SIGSEGV: [libjvm.so+0xac8af0], sa_mask[0]=11111111111111111111111111111110, sa_flags=SA_ONSTACK|SA_SIGINFO
SIGBUS: [libjvm.so+0xac8af0], sa_mask[0]=11111111111111111111111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGFPE: [impalad+0x178a0e0], sa_mask[0]=00010111001000000000000000000000, sa_flags=SA_ONSTACK|SA_SIGINFO
SIGPIPE: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGXFSZ: SIG_IGN, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGILL: [impalad+0x178a0e0], sa_mask[0]=00010111001000000000000000000000, sa_flags=SA_ONSTACK|SA_SIGINFO
SIGUSR1: [impalad+0x79a640], sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGUSR2: [libjvm.so+0x923610], sa_mask[0]=00000000000000000000000000000000, sa_flags=SA_RESTART|SA_SIGINFO
SIGHUP: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGINT: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGTERM: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
SIGQUIT: SIG_DFL, sa_mask[0]=00000000000000000000000000000000, sa_flags=none
--------------- S Y S T E M ---------------
OS:CentOS Linux release 7.3.1611 (Core)
uname:Linux 3.10.0-514.21.2.el7.x86_64 #1 SMP Tue Jun 20 12:24:47 UTC 2017 x86_64
libc:glibc 2.17 NPTL 2.17
rlimit: STACK 8192k, CORE 0k, NPROC 65536, NOFILE 32768, AS infinity
load average:0.56 0.21 0.08
/proc/meminfo:
MemTotal: 7231176 kB
MemFree: 323736 kB
MemAvailable: 3480696 kB
Buffers: 4060 kB
Cached: 3365420 kB
SwapCached: 0 kB
Active: 3480808 kB
Inactive: 3267080 kB
Active(anon): 3379768 kB
Inactive(anon): 16648 kB
Active(file): 101040 kB
Inactive(file): 3250432 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 224 kB
Writeback: 0 kB
AnonPages: 3378180 kB
Mapped: 48992 kB
Labels:
- Apache Impala
06-23-2017
02:59 AM
1 Kudo
I am sorry, this was probably my mistake. I don't know why, but now sqlContext.sql("show tables").collect returns the tables, so I am able to access the metastore. The warning message still appears during spark-shell startup, but it works.
06-22-2017
11:01 AM
The interesting thing is that if I download Spark 2.1 and configure it (point it to the Hadoop conf dir in etc), it works fine on YARN and has no problem with show databases or show tables.
06-22-2017
10:35 AM
Hi,
I just did a fresh, clean install of CDH 5.11 with Hive and Spark, everything in dev mode: embedded database, no HA, simple setup.
When I try to run spark-shell I get these warnings:
Spark context available as sc (master = yarn-client, app id = application_1498152485620_0001).
17/06/22 19:30:59 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.1.0
17/06/22 19:30:59 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
SQL context available as sqlContext.
Is this some bug related to Spark 1.6?
Thanks
Tomas
Labels:
- Apache Hive
- Apache Spark
06-19-2017
03:21 AM
I think it is just a consequence of how Kudu is implemented. The visit count is not the primary key of the table, and neither is the browser column. For atomic incrementing, the data has to live in one location on one server.
05-15-2017
11:53 PM
I have figured out the problem and the solution. The problem is that Hive reads the parquet files in partitions using the actual schema definition of the table, while Impala (I assume) reads them by position. The table had some old partitions created under a different schema: a column name was different, but the total number of columns and their positions remained the same.

The old table:

CREATE TABLE test (
  event varchar(10),
  event_id int,
  event_time timestamp
);

Some partitions were inserted, then the column was renamed to event_name. The actual table definition:

CREATE TABLE test (
  event_name varchar(10),
  event_id int,
  event_time timestamp
);

Now, if I select all partitions in Impala, the query returns all the data correctly. So I assume that Impala ignores the column names in the parquet file and accesses the first column as event_name with type varchar(10). But SparkSQL and Beeline return NULL for the partitions created with the old definition. I downloaded the parquet files, inspected the schema with parquet-tools, and the columns have the old names.

So, to test whether a column was renamed in a table, I created a simple script. It is important to run Spark with the mergeSchema parameter set to true, to read ALL the schema definitions from the table:

spark-shell --conf spark.sql.parquet.mergeSchema=true
import scala.util.matching.Regex

// Reads the table's DDL, extracts the parquet LOCATION from it, then reads
// the parquet files directly and prints the merged column list.
def test(tbl: String) = {
  val tb_md = sqlContext.sql("show create table " + tbl).collect()
  val ddl = tb_md.map(x => x.get(0).toString).mkString(" ")
  val pattern = new Regex("""LOCATION\s*\'(.+)\'\s+TBLPROPERTIES""")
  var loc: String = ""
  try {
    loc = (pattern findAllIn ddl).matchData.next.group(1)
  } catch {
    case e: Exception => // no LOCATION clause found in the DDL
  }
  val d = sqlContext.read.parquet(loc)
  val columns_parq = d.columns
  println(columns_parq.toList)
  println("Table " + tbl + " has " + columns_parq.length + " columns.")
}
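As a side note, the LOCATION-extraction regex in the script can be sanity-checked without a cluster by running it against a hand-written DDL string. The object name, table name, and HDFS path below are made up for illustration; only the regex itself comes from the script above:

```scala
import scala.util.matching.Regex

// Standalone check of the regex used above to pull the parquet LOCATION
// out of the text returned by "show create table".
object LocationRegexCheck {
  val pattern = new Regex("""LOCATION\s*\'(.+)\'\s+TBLPROPERTIES""")

  // Returns the path between LOCATION '...' and TBLPROPERTIES, if present.
  def extractLocation(ddl: String): Option[String] =
    pattern.findFirstMatchIn(ddl).map(_.group(1))

  def main(args: Array[String]): Unit = {
    // Hypothetical DDL, shaped like the "show create table" output.
    val ddl = "CREATE TABLE test (event_name varchar(10)) " +
      "LOCATION 'hdfs://nameservice1/user/hive/warehouse/test' " +
      "TBLPROPERTIES ('transient_lastDdlTime'='1494892400')"
    println(extractLocation(ddl).getOrElse("no LOCATION found"))
    // prints: hdfs://nameservice1/user/hive/warehouse/test
  }
}
```

Note that the greedy (.+) still stops at the right quote, because only the location's closing quote is followed by whitespace and the TBLPROPERTIES keyword.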