Member since: 09-09-2016
Posts: 9
Kudos Received: 5
Solutions: 2

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 3015 | 05-21-2018 08:56 AM
 | 5061 | 03-08-2016 03:16 PM
05-21-2018
08:56 AM
Hi, I've just found a bug in Impala that is resolved in CDH 5.13.2, and I think it's the cause of the fatal error: IMPALA-6291. Thank you all anyway!
05-17-2018
04:19 AM
Hi,
We've added a new node to a CDH 5.13.1 cluster, and now the Impala daemon on the new node is failing with IMPALAD_UNEXPECTED_EXITS. The following messages are shown in the log file:
F0517 12:14:30.510576 29320 llvm-codegen.cc:112] LLVM hit fatal error: Cannot select: 0x3aedbd10: ch = store<ST1[%sunkaddr29]> 0x47d56be0:1, 0x47d55e40, 0x1aace980, undef:i64
0x47d55e40: i1 = or 0x47d56be0, 0x47d56390
0x47d56be0: i1,ch = load<LD1[%sunkaddr29]> 0xbfd9830, 0x1aace980, undef:i64
0x1aace980: i64 = add 0x47d565f0, Constant:i64<32>
0x47d565f0: i64,ch = CopyFromReg 0xbfd9830, Register:i64 %vreg0
0x47d56260: i64 = Register %vreg0
0x47d574c0: i64 = Constant<32>
0x47d57130: i64 = undef
0x47d56390: i1 = truncate 0x47d555f0
0x47d555f0: i32 = srl 0x3aedb4c0, Constant:i8<8>
0x3aedb4c0: i32 = any_extend 0x47d55980
0x47d55980: i16,ch = CopyFromReg 0xbfd9830, Register:i16 %vreg8
0x480ae000: i16 = Register %vreg8
0x3aedbbe0: i8 = Constant<8>
0x1aace980: i64 = add 0x47d565f0, Constant:i64<32>
0x47d565f0: i64,ch = CopyFromReg 0xbfd9830, Register:i64 %vreg0
0x47d56260: i64 = Register %vreg0
0x47d574c0: i64 = Constant<32>
0x47d57130: i64 = undef
In function: _ZN6impala26PartitionedAggregationNode22ProcessBatchNoGroupingEPNS_8RowBatchE.12
*** Check failure stack trace: ***
@ 0x1b9c2bd google::LogMessage::Fail()
@ 0x1b9db62 google::LogMessage::SendToLog()
@ 0x1b9bc97 google::LogMessage::Flush()
@ 0x1b9f25e google::LogMessageFatal::~LogMessageFatal()
@ 0xc06f5a (unknown)
@ 0x1ae5553 llvm::report_fatal_error()
@ 0x1ae56fe llvm::report_fatal_error()
@ 0x11e03bb llvm::SelectionDAGISel::CannotYetSelect()
@ 0x11e15db llvm::SelectionDAGISel::SelectCodeCommon()
@ 0xffe787 (unknown)
@ 0x11dec04 llvm::SelectionDAGISel::DoInstructionSelection()
@ 0x11e49c2 llvm::SelectionDAGISel::CodeGenAndEmitDAG()
@ 0x11e871a llvm::SelectionDAGISel::SelectAllBasicBlocks()
@ 0x11e9a5f llvm::SelectionDAGISel::runOnMachineFunction()
@ 0x10027b4 (unknown)
@ 0x1a7ea8a llvm::FPPassManager::runOnFunction()
@ 0x1a7f053 llvm::legacy::PassManagerImpl::run()
@ 0x1733409 llvm::MCJIT::emitObject()
@ 0x17338d1 llvm::MCJIT::generateCodeForModule()
@ 0x1730440 llvm::MCJIT::finalizeObject()
@ 0xc0fc3d impala::LlvmCodeGen::FinalizeModule()
@ 0xa52bb6 impala::FragmentInstanceState::Open()
@ 0xa541cb impala::FragmentInstanceState::Exec()
@ 0xa30ad8 impala::QueryState::ExecFInstance()
@ 0xbd4812 impala::Thread::SuperviseThread()
@ 0xbd4f74 boost::detail::thread_data<>::run()
@ 0xe6122a (unknown)
@ 0x7f18339bde25 start_thread
@ 0x7f18336eb34d __clone
Picked up JAVA_TOOL_OPTIONS:
Wrote minidump to /var/log/impala-minidumps/impalad/ba97cae4-eb33-44ee-f569ebad-67095a51.dmp
We have stopped the new Impala daemon in order to resolve the incident. The new node has different hardware than the current cluster nodes, but all of the configuration is deployed by Cloudera Manager 5.13.3.
Can you help us determine the cause of the problem?
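One avenue worth checking, given that the new node has different hardware (this is an assumption on my part, not a confirmed diagnosis), is whether its CPU exposes a different feature-flag set than the existing nodes, since Impala's LLVM codegen compiles for the host CPU. A quick comparison sketch, with placeholder file paths:

```shell
# Dump this node's CPU feature flags, one per line, for comparison.
# Run the same command on an existing node, writing to a second file,
# then diff the two (file paths are placeholders):
#   diff /tmp/old-node-flags.txt /tmp/new-node-flags.txt
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/new-node-flags.txt
```

As a temporary mitigation, codegen can also be disabled per session in impala-shell with `SET DISABLE_CODEGEN=1;`, at the cost of query performance.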
Labels:
- Apache Impala
- Cloudera Manager
09-11-2016
11:56 PM
Hi Cloudera, thank you for your reply. As you said, it should be working, but surprisingly it's not. No, I'm doing nothing between declaring the environment variable and launching the hadoop command, and I did it in the same shell session. Maybe you can check it in your lab.

The only thing I've noticed is that the java commands launched underneath are slightly different. When I launch the "hadoop fs -ls" command with the environment variable HADOOP_CLIENT_OPTS set, I see this:

/usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx1000m -Dhadoop.log.dir=/xxx/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/xxx/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/xxx/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dfs.permissions.umask-mode=007 -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.fs.FsShell -ls

When I launch the "hadoop fs -Dfs.permissions.umask-mode=007 -ls" command, I see this:

/usr/java/jdk1.7.0_67-cloudera/bin/java -Xmx1000m -Dhadoop.log.dir=/xxx/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/xxx/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/xxx/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.fs.FsShell -Dfs.permissions.umask-mode=007 -ls

As you can see, the declaration of the umask property goes before the FsShell class in the first case and after the main class in the second case. I guess that could ultimately be the cause of the different behavior.
Maybe the code should be changed from "$HADOOP_OPTS $CLASS" to "$CLASS $HADOOP_OPTS"?
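For what it's worth, the two positions are handled by different mechanisms: options placed before the class become JVM system properties, which (as far as I can tell) Hadoop's Configuration does not read automatically, while a -D placed after the class is a program argument parsed by GenericOptionsParser inside FsShell, which does reach the Configuration. A toy sketch of how the wrapper assembles the command line (names simplified, not the real script text):

```shell
# Toy model of the hadoop wrapper's command assembly (simplified,
# not the actual wrapper script).
CLASS=org.apache.hadoop.fs.FsShell
HADOOP_CLIENT_OPTS="-Dfs.permissions.umask-mode=007"

# Case 1: env-var opts land before the class -> a JVM system property,
# invisible to Hadoop's Configuration
echo java $HADOOP_CLIENT_OPTS $CLASS -ls

# Case 2: inline -D lands after the class -> an argument parsed by
# GenericOptionsParser, which sets the property in the Configuration
echo java $CLASS -Dfs.permissions.umask-mode=007 -ls
```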
09-09-2016
01:58 AM
Hi, I wanted to change the umask mode for a specific user away from the default. Thus I've set the hadoop client options environment variable this way:

export HADOOP_CLIENT_OPTS="-D fs.permissions.umask-mode=007"

I've noticed that it doesn't enforce the umask mode, and the default umask mode (022) is still in use, creating files with permission mask rw-r--r--. However, if I execute the following hadoop fs -put command, it uses the new umask mode, and the new file is created as expected with permissions rw-rw---- instead of rw-r--r--:

hadoop fs -Dfs.permissions.umask-mode=007 -put file /directory

I wouldn't like to change all the commands the user executes; setting the environment variable is better because it's transparent to the user. Why aren't the options in HADOOP_CLIENT_OPTS taken into account?

The installation where this occurs is Cloudera Enterprise 5.7.1 with CDH 5.7.1. Thank you.
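The octal arithmetic works the same as the POSIX umask: 007 clears the "other" bits, so a plain file created with default mode 666 ends up 660 (rw-rw----). A local-filesystem sketch of that arithmetic (the file name is a throwaway):

```shell
# Local analogy of the HDFS setting: umask 007 clears the "other" bits,
# so a new file gets 666 & ~007 = 660 (rw-rw----).
umask 007
f=$(mktemp -u)      # generate a temp file name without creating the file
touch "$f"          # created with 666 & ~007 = 660
stat -c '%a' "$f"   # prints: 660
rm -f "$f"
```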
03-08-2016
03:16 PM
2 Kudos
Hello Laurent, it's true that you can access the JMX values via the /jmx path, but you have to enable JMX remote access and specify the TCP port. If you deploy via Ambari, you have to set, for example, the following line in the yarn-env template:

export YARN_RESOURCEMANAGER_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=8001"

Then you point JConsole at that port. I hope this helps. Regards.
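A small sanity check one can run before connecting (the extraction below is just an illustration, and the hostname is a placeholder): pull the configured port out of the opts string, then point JConsole at it.

```shell
# Extract the configured JMX port from the opts string.
YARN_RESOURCEMANAGER_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=8001"
port=$(echo "$YARN_RESOURCEMANAGER_OPTS" | sed -n 's/.*jmxremote\.port=\([0-9][0-9]*\).*/\1/p')
echo "JMX port: $port"   # prints: JMX port: 8001
# Then, from a workstation (hostname is a placeholder):
#   jconsole resourcemanager-host:8001
```

Note that disabling authentication and SSL leaves the port open to anyone who can reach it; that's fine for a lab but risky in production.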