Member since: 11-21-2019
Posts: 28
Kudos Received: 1
Solutions: 0
09-21-2021
07:59 AM
Thanks @Shelton. Reading further, I found these limitations:
- Replicating to and from HDP and Cloudera Manager 7.x is not supported by Replication Manager.
- The only option I saw: use DistCp to replicate the data of Hive external tables. For more information on replicating data, contact Cloudera Support.
Reference: https://docs.cloudera.com/cdp/latest/data-migration/topics/cdpdc-compatibility-matrix-bdr.html
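Since DistCp plus Hive external tables is the supported path here, a minimal sketch of what that can look like for one external table (all host names, paths, and the principal below are hypothetical placeholders, not values from this cluster):

  # Copy the external table's data directory from HDP to CDP, then
  # refresh partition metadata on the destination.
  kinit etl_user@EXAMPLE.REALM
  hadoop distcp -update \
    hdfs://hdp-nn.example.com:8020/apps/hive/warehouse/db1.db/events \
    hdfs://cdp-nn.example.com:8020/warehouse/tablespace/external/hive/db1.db/events
  beeline -u "jdbc:hive2://cdp-hs2.example.com:10000/;ssl=true;principal=hive/_HOST@EXAMPLE.REALM" \
    -e "MSCK REPAIR TABLE db1.events;"

The table definition itself still has to exist on the CDP side (for example via SHOW CREATE TABLE on the source and replaying the DDL) before MSCK REPAIR can pick up the copied partitions.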
09-21-2021
05:45 AM
@Shelton, thanks for your links, they are helpful. Let me explain the situation better:
1) The CDP Private Cloud Base 7.1.5 platform is already installed.
2) This new platform runs Hive 3.1; this is where I need to place the databases that are currently on Hive 1.2.
3) There is another, older HDP platform that holds the Hive 1.2 databases.
4) The new CDP PvC Base platform has Kerberos and TLS/SSL enabled.
The goal is to move the databases from Hive 1.2 (the previous platform) to the new platform running Hive 3.1. Thanks for your support!
09-20-2021
11:45 AM
Greetings to the community. I have this problem:
SOURCE PLATFORM: HDP 2.6.5, Hive 1.2.1000
DESTINATION PLATFORM: CDP Private Cloud Base 7.1.5, Hive 3.1.3000.7.1.5.0-257
I need to migrate databases from Hive 1.2 to Hive 3.1. How can I do this? What do I need to do? Thank you for the support you can give me.
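One commonly used approach for this, sketched below, is Hive's EXPORT/IMPORT plus DistCp for the data movement; the host names and the table are hypothetical placeholders, and each database would need this per table (or a script looping over SHOW TABLES):

  # On the HDP side: export the table definition plus data to an HDFS staging dir.
  beeline -u "jdbc:hive2://hdp-hs2.example.com:10000" \
    -e "EXPORT TABLE db1.t1 TO '/tmp/hive_export/db1/t1';"
  # Copy the staging dir across clusters.
  hadoop distcp hdfs://hdp-nn.example.com:8020/tmp/hive_export/db1/t1 \
                hdfs://cdp-nn.example.com:8020/tmp/hive_import/db1/t1
  # On the CDP side: import into Hive 3.1.
  beeline -u "jdbc:hive2://cdp-hs2.example.com:10000/;ssl=true;principal=hive/_HOST@EXAMPLE.REALM" \
    -e "IMPORT TABLE db1.t1 FROM '/tmp/hive_import/db1/t1';"

As I understand it, Cloudera also provides dedicated tooling for bulk metastore moves (for example the hms-mirror utility), so treat this as the manual baseline rather than the only option.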
08-27-2021
12:11 PM
Thank you very much @Shelton. You are the best!
... View more
08-27-2021
11:34 AM
Hi Community, I need your support. I have 3 NodeManagers, and after a reboot one NodeManager does not start (RedHat 7.6 / Hadoop 2.6.5 / Ambari 2.5.1.0). I have added the full log: https://drive.google.com/file/d/1Osz9nV5PSBAeu8eiiYsJOsib0h3E3Gjk/view?usp=sharing
----------------------------------------------------------
FATAL nodemanager.NodeManager (NodeManager.java:initAndStartNodeManager(549)) - Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to start.
    at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:116)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:302)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:547)
    at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:594)
Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
    at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:402)
    at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:100)
    ... 6 more
Caused by: java.net.BindException: Port in use: 0.0.0.0:8042
    at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:983)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1006)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1063)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:920)
    at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:398)
    ... 7 more
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:971)
    at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1002)
    ... 10 more
2021-08-27 14:03:48,701 INFO nodemanager.NodeManager (LogAdapter.java:info(45)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NodeManager at tcolp062.localdomain/10.161.174.132
************************************************************/
Community, thanks!!!!!
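The decisive line in the trace is the bind failure on 0.0.0.0:8042, the NodeManager web UI port, so something on that host is still holding the port (often a stale NodeManager process that survived the reboot). A quick check, as a sketch using standard RHEL 7 tools:

  # Identify the process listening on 8042 before restarting from Ambari.
  ss -lntp | grep 8042          # or: netstat -tlnp | grep 8042
  lsof -i :8042                 # shows the PID and command holding the port
  ps -ef | grep -i nodemanager  # look for a leftover NodeManager JVM
  # If a stale process is found, stop it (kill <PID>), then start the
  # NodeManager again from Ambari.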
08-18-2021
09:57 AM
Thanks @willx for your support!
08-05-2021
10:05 AM
Hi Community Team. I am configuring DistCp between secure clusters in different Kerberos realms.
Source = HDP 2.6.5
Target = CDP Private Cloud Base 7.1.5
I am setting this up, but I have doubts about how these two steps are done; the links below show little detail:
1) First doubt (what is the step-by-step for this?): https://docs.cloudera.com/cdp-private-cloud-base/7.1.5/scaling-namespaces/topics/hdfs-distcp-truststore-properties.html
2) Second doubt (what is the step-by-step for this?): https://docs.cloudera.com/cdp-private-cloud-base/7.1.5/scaling-namespaces/topics/hdfs-distcp-set-hadoop-conf.html
Thanks for the help.
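For what it's worth, here is the general shape those two pages describe, as a hedged sketch only (I have not validated these exact values against the pages; the truststore path, password, and hosts are placeholders, and 50470/9871 are just the usual NameNode HTTPS defaults on HDP 2.x and CDP 7.x):

  # Cross-realm prerequisite: both realms resolvable in /etc/krb5.conf on the
  # submitting cluster, with a krbtgt cross-realm trust between them.
  # Truststore properties can then be passed per job when copying over TLS:
  hadoop distcp \
    -Dssl.client.truststore.location=/path/to/truststore.jks \
    -Dssl.client.truststore.password=changeit \
    -Dssl.client.truststore.type=jks \
    swebhdfs://hdp-nn.example.com:50470/tmp/src \
    swebhdfs://cdp-nn.example.com:9871/tmp/dst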
06-03-2021
11:45 AM
@Tylenol, thank you very much for your help.
06-03-2021
11:44 AM
@vidanimegh, thank you very much for your help.
05-11-2021
10:46 AM
Hi @Tylenol
Console command: hadoop distcp hdfs://server4.localdomain:8020/tmp/distcp_test.txt hdfs://server8.local:8020/tmp
NOTE: server4 (source, HDP, Kerberos) and server8 (target, CDP, non-Kerberos) are the NameNodes.
************* ERROR ****************
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
>>>KinitOptions cache name is /tmp/krb5cc_11259
21/05/11 13:34:10 ERROR tools.DistCp: Invalid arguments: java.io.IOException: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "server4.localdomain/10.x.x.x"; destination host is: "server8.local":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
    at org.apache.hadoop.ipc.Client.call(Client.java:1498)
    at org.apache.hadoop.ipc.Client.call(Client.java:1398)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:818)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2165)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1442)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
    at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:227)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
Caused by: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:787)
    at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 22 more
Invalid arguments: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "server4.localdomain/10.x.x.x.x"; destination host is: "server8.local":8020;
***********************************************
Thanks!
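That exception is the expected behavior when a Kerberos-enabled client talks to an unsecured service: by default the secure client refuses to downgrade. The usual remedy (a sketch reusing the hosts from the command above) is to run DistCp from the secure HDP side and explicitly allow the fallback:

  hadoop distcp \
    -D ipc.client.fallback-to-simple-auth-allowed=true \
    hdfs://server4.localdomain:8020/tmp/distcp_test.txt \
    hdfs://server8.local:8020/tmp

Note that ipc.client.fallback-to-simple-auth-allowed=true only relaxes the client; it does not make the unsecured cluster authenticate anything.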
05-11-2021
08:38 AM
Hi @Tylenol, it is not a migration. It is a new installation, and I need to copy information from the HDP cluster to the CDP cluster. Thanks
05-11-2021
06:26 AM
Source server = Kerberos. From the console:
hadoop distcp hdfs://svr2.localdomain:1019/tmp/distcp_test.txt hdfs://svr1.local:9866/tmp
*************** ERROR ******************
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
>>>KinitOptions cache name is /tmp/krb5cc_11259
>>>DEBUG <CCacheInputStream> client principal is useradm/admin@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> server principal is krbtgt/LOCALDOMAIN@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Tue May 11 08:24:10 VET 2021
>>>DEBUG <CCacheInputStream> start time: Tue May 11 08:24:10 VET 2021
>>>DEBUG <CCacheInputStream> end time: Wed May 12 08:24:10 VET 2021
>>>DEBUG <CCacheInputStream> renew_till time: Tue May 18 08:24:10 VET 2021
>>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL;
>>>DEBUG <CCacheInputStream> client principal is useradm/admin@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/LOCALDOMAIN@LOCALDOMAIN@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> key type: 0
>>>DEBUG <CCacheInputStream> auth time: Wed Dec 31 20:00:00 VET 1969
>>>DEBUG <CCacheInputStream> start time: null
>>>DEBUG <CCacheInputStream> end time: Wed Dec 31 20:00:00 VET 1969
>>>DEBUG <CCacheInputStream> renew_till time: null
>>> CCacheInputStream: readFlags()
21/05/11 09:15:33 WARN ipc.Client: Exception encountered while connecting to the server : java.io.EOFException
21/05/11 09:15:33 ERROR tools.DistCp: Invalid arguments: java.io.IOException: Failed on local exception: java.io.IOException: java.io.EOFException; Host Details : local host is: "svr1.localdomain/10.x.x.x"; destination host is: "svr2.locall":9866;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
    at org.apache.hadoop.ipc.Client.call(Client.java:1498)
    at org.apache.hadoop.ipc.Client.call(Client.java:1398)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:818)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2165)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1442)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
    at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:227)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
Caused by: java.io.IOException: java.io.EOFException
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:720)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:770)
    at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 22 more
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757)
    ... 25 more
Invalid arguments: Failed on local exception: java.io.IOException: java.io.EOFException; Host Details : local host is: "svr1.localdomain/10.x.x.x"; destination host is: "svr2.locall":9866;
usage: distcp OPTIONS [source_path...] <target_path>
*******************************************
More information:
1) Port 9866 (dfs.datanode.address) - open
2) Port 1019 (dfs.datanode.address) - open
3) svr1.localdomain = Kerberos-enabled (source - source files)
4) svr2.locall = non-Kerberos (target - destination files)
Thanks
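One detail stands out in the list above: 1019 and 9866 are both dfs.datanode.address values, i.e., DataNode data-transfer ports. The hdfs:// URIs in a DistCp command must point at the NameNode RPC port instead (the fs.defaultFS port, 8020 by default), which is the likely reason the RPC client hits an EOFException: the DataNode speaks a different wire protocol than the IPC client expects. A corrected sketch keeping the hosts from the command above (and keeping the fallback flag from the related thread, since one of the two clusters is not kerberized):

  hadoop distcp \
    -D ipc.client.fallback-to-simple-auth-allowed=true \
    hdfs://svr2.localdomain:8020/tmp/distcp_test.txt \
    hdfs://svr1.local:8020/tmp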
05-09-2021
04:42 PM
Hi Community, I am trying to copy HDFS data from an HDP 2.6.x cluster (kerberized) to a CDP Private Cloud Base 7.1.5 cluster (not kerberized), using non-default ports as well, and it gives me an error. The command I run in the console is:
hadoop distcp hdfs://svr2.localdomain:1019/tmp/distcp_test.txt hdfs://svr1.local:9866/tmp/
What could be the origin of the fault? Thank you.
*********** ERROR *************************
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
>>>KinitOptions cache name is /tmp/krb5cc_11259
>>>DEBUG <CCacheInputStream> client principal is useradm/admin@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> server principal is krbtgt/LOCALDOMAIN@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> key type: 18
>>>DEBUG <CCacheInputStream> auth time: Sun May 09 18:39:04 VET 2021
>>>DEBUG <CCacheInputStream> start time: Sun May 09 18:39:04 VET 2021
>>>DEBUG <CCacheInputStream> end time: Mon May 10 18:39:04 VET 2021
>>>DEBUG <CCacheInputStream> renew_till time: Sun May 16 18:39:04 VET 2021
>>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL;
>>>DEBUG <CCacheInputStream> client principal is useradm/admin@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/LOCALDOMAIN@LOCALDOMAIN@LOCALDOMAIN
>>>DEBUG <CCacheInputStream> key type: 0
>>>DEBUG <CCacheInputStream> auth time: Wed Dec 31 20:00:00 VET 1969
>>>DEBUG <CCacheInputStream> start time: null
>>>DEBUG <CCacheInputStream> end time: Wed Dec 31 20:00:00 VET 1969
>>>DEBUG <CCacheInputStream> renew_till time: null
>>> CCacheInputStream: readFlags()
21/05/09 19:23:36 WARN ipc.Client: Exception encountered while connecting to the server : java.io.EOFException
21/05/09 19:23:36 ERROR tools.DistCp: Invalid arguments: java.io.IOException: Failed on local exception: java.io.IOException: java.io.EOFException; Host Details : local host is: "svr2.localdomain/10.x.x.x"; destination host is: "svr1.local":9866;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
    at org.apache.hadoop.ipc.Client.call(Client.java:1498)
    at org.apache.hadoop.ipc.Client.call(Client.java:1398)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:818)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
    at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2165)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1442)
    at org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
    at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:227)
    at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
Caused by: java.io.IOException: java.io.EOFException
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:720)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:770)
    at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 22 more
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757)
    ... 25 more
Invalid arguments: Failed on local exception: java.io.IOException: java.io.EOFException; Host Details : local host is: "server2.localdomain/10.x.x.x"; destination host is: "svr1.local":9866;
Thanks!
05-02-2021
02:04 PM
1 Kudo
1) Check the scheduler type in your YARN config. By default, CDP PvC Base 7.x uses the Capacity Scheduler.
2) If it is the Fair Scheduler, change it to the Capacity Scheduler, restart the services, and observe whether the Impala error is gone. (This did not apply in my case.)
3) If it is the Fair Scheduler and changing the scheduler is not something you want to do, configure queue placement policies as shown here, restart the services, and observe whether the Impala errors are gone.
3.1) On all DataNodes, apply this configuration (queue placement policies) under:
Impala Daemon Fair Scheduler Advanced Configuration Snippet (Safety Valve)
Add this XML:
<?xml version="1.0"?>
<allocations>
  <queue name="root">
    <minResources>10000 mb,0vcores</minResources>
    <maxResources>90000 mb,0vcores</maxResources>
    <maxRunningApps>50</maxRunningApps>
    <weight>2.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <queue name="default">
      <aclSubmitApps>root</aclSubmitApps>
      <minResources>5000 mb,0vcores</minResources>
    </queue>
  </queue>
  <user name="root">
    <maxRunningApps>30</maxRunningApps>
  </user>
  <userMaxAppsDefault>5</userMaxAppsDefault>
  <queuePlacementPolicy>
    <rule name="specified" />
    <rule name="primaryGroup" create="false" />
    <rule name="default" />
  </queuePlacementPolicy>
</allocations>
Thank you very much for your support @vidanimegh!
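For context on why the placement policy matters: the Fair Scheduler requires the last queue placement rule to be terminal, one that can always assign or always reject, and "Could get past last queue placement rule without assigning" is precisely the error raised when it is not. The closing rule in the snippet above is what satisfies that requirement:

  <queuePlacementPolicy>
    <rule name="specified" />
    <rule name="primaryGroup" create="false" />
    <rule name="default" />  <!-- terminal: always assigns, so placement cannot fall through -->
  </queuePlacementPolicy>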
04-29-2021
09:21 AM
What tests could I do? YARN does not report an error. How do I isolate the error?
04-29-2021
06:54 AM
Hi Megh, thanks for your attention. 🙂
Path: /var/log/impalad
******** LOG: impalad.FATAL ********
Log file created at: 2021/04/28 16:12:40
Running on machine: server121.local
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
F0428 16:12:40.727955 8559 request-pool-service.cc:148] RuntimeException: org.apache.impala.yarn.server.resourcemanager.scheduler.fair.AllocationConfig$
CAUSED BY: AllocationConfigurationException: Could get past last queue placement rule without assigning . Impalad exiting.
*************** End of impalad.FATAL ************
****** LOG: impalad.ERROR ******
Log file created at: 2021/04/28 16:12:37
Running on machine: server121.local
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0428 16:12:37.883814 8559 logging.cc:148] stderr will be logged to this file.
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
    at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
    at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
    at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
    at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
    at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
    at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
    at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
    at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
    at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
    at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
    at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
    at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
    at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
    at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
    at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
    at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:273)
    at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:170)
log4j:ERROR Could not instantiate appender named "redactorForRootLogger".
E0428 16:12:40.726575 8559 RequestPoolService.java:257] Unable to stop AllocationFileLoaderService after failed start. Java exception follows:
java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:492)
    at org.apache.impala.util.FileWatchService.stop(FileWatchService.java:136)
    at org.apache.impala.util.RequestPoolService.stopInternal(RequestPoolService.java:281)
    at org.apache.impala.util.RequestPoolService.start(RequestPoolService.java:255)
F0428 16:12:40.727955 8559 request-pool-service.cc:148] RuntimeException: org.apache.impala.yarn.server.resourcemanager.scheduler.fair.AllocationConfig$
CAUSED BY: AllocationConfigurationException: Could get past last queue placement rule without assigning . Impalad exiting.
*** Check failure stack trace: ***
@ 0x2c3d9dc
@ 0x2c3f281
@ 0x2c3d3b6
@ 0x2c4097d
@ 0x11ec827
@ 0x1123c8c
@ 0x1124a15
@ 0x125be0b
@ 0xb78603
@ 0x7f5c32dde554
@ 0xbf8956
Picked up JAVA_TOOL_OPTIONS: -Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/impala_impala-IMPALAD-1c158e9384a7bfd9b15bf$
Wrote minidump to /var/log/impala-minidumps/impalad/507ed37b-899b-4f37-e0190da3-41495039.dmp
////////// End of impalad.ERROR //////////
Thanks!
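To confirm which allocation file the failing impalad actually read, one option is to inspect the generated configuration in the most recent IMPALAD process directory. The paths below assume a typical Cloudera Manager layout, and the fair-scheduler.xml file name is an assumption, so adjust as needed:

  # Go to the newest IMPALAD process dir and look for the placement rules.
  cd "$(ls -dt /var/run/cloudera-scm-agent/process/*IMPALAD* | head -1)"
  grep -rA4 queuePlacementPolicy impala-conf/   # check that the last rule is terminal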
04-29-2021
04:58 AM
In my case, I see this error after installing Spark. I can't get the Impala services to start on my DataNode servers. I am considering a full removal and reinstall of Impala, but I have not done it because I would have to reconfigure its dependencies (Hue, YARN, Zeppelin, HBase). I did reinstall all the Impala daemons, but they still did not start; the cause is:
*******
CAUSED BY: AllocationConfigurationException: Could get past last queue placement rule without assigning . Impalad exiting.
*******
Part of impalad.ERROR:
log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender].
04-28-2021
06:52 AM
*****More details***** ************************************* [28/Apr/2021 08:18:07 +0000] 9917 MainThread redactor INFO Started launcher: /opt/cloudera/cm-agent/service/impala/impala.sh impalad impalad_flags false [28/Apr/2021 08:18:07 +0000] 9917 MainThread redactor INFO Re-exec watcher: /opt/cloudera/cm-agent/bin/cm proc_watcher 9925 [28/Apr/2021 08:18:07 +0000] 9926 MainThread redactor INFO Re-exec redactor: /opt/cloudera/cm-agent/bin/cm redactor --fds 3 5 [28/Apr/2021 08:18:07 +0000] 9926 MainThread redactor INFO Started redactor + date + date Wed Apr 28 08:18:07 -04 2021 ++ hostname -f ++ hostname -i + echo 'Running on: server1 (10.x.x.x)' ++ dirname /opt/cloudera/cm-agent/service/impala/impala.sh + cloudera_config=/opt/cloudera/cm-agent/service/impala ++ cd /opt/cloudera/cm-agent/service/impala/../common ++ pwd + cloudera_config=/opt/cloudera/cm-agent/service/common + . /opt/cloudera/cm-agent/service/common/cloudera-config.sh ++ : /opt/cloudera/cm ++ export CLOUDERA_DIR ++ set -x + source_parcel_environment + '[' '!' -z /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/meta/cdh_env.sh ']' + OLD_IFS=' ' + IFS=: + SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS) + DIRNAME_ARRAY=($PARCEL_DIRNAMES) + IFS=' ' + COUNT=1 ++ seq 1 1 + for i in '`seq 1 $COUNT`' + SCRIPT=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/meta/cdh_env.sh + PARCEL_DIRNAME=CDH-7.1.5-1.cdh7.1.5.p0.7431829 + . /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/meta/cdh_env.sh ++ CDH_DIRNAME=CDH-7.1.5-1.cdh7.1.5.p0.7431829 ++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop ++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop ++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-0.20-mapreduce ++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-0.20-mapreduce ++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-hdfs ++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-hdfs ++ export CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-ozone ++ CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-ozone ++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-httpfs ++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-httpfs ++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-mapreduce ++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-mapreduce ++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-yarn ++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-yarn ++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase ++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase ++ export CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_filesystem ++ CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_filesystem ++ export CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_connectors ++ CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_connectors ++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zookeeper ++ 
CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zookeeper ++ export CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zeppelin ++ CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zeppelin ++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive ++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive ++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue ++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue ++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/oozie ++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/oozie ++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop ++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop ++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive-hcatalog ++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive-hcatalog ++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/sentry ++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/sentry ++ export JSVC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils ++ JSVC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils ++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop/bin/hadoop ++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop/bin/hadoop ++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala ++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala ++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/solr ++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/solr ++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase-solr ++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase-solr ++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/search ++ SEARCH_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/search ++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/spark ++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/spark ++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/etc/hive-webhcat/conf.dist/webhcat-default.xml ++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/etc/hive-webhcat/conf.dist/webhcat-default.xml ++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-kms ++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-kms ++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/parquet ++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/parquet ++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/avro ++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/avro ++ export CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kafka ++ CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kafka ++ export 
CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/schemaregistry ++ CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/schemaregistry ++ export CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager ++ CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager ++ export CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager_ui ++ CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager_ui ++ export CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_replication_manager ++ CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_replication_manager ++ export CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/cruise_control ++ CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/cruise_control ++ export CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/knox ++ CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/knox ++ export CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kudu ++ CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kudu ++ export CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-admin ++ CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-admin ++ export CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-tagsync ++ CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-tagsync ++ export CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-usersync ++ CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-usersync ++ export CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kms ++ CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kms ++ export CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-raz ++ CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-raz ++ export CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-rms ++ CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-rms ++ export CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/atlas ++ CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/atlas ++ export CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/tez ++ CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/tez ++ export CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/phoenix ++ CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/phoenix ++ export DAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/data_analytics_studio ++ DAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/data_analytics_studio ++ export QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/queuemanager ++ 
QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/queuemanager ++ export CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hbase-plugin ++ CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hbase-plugin ++ export CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hive-plugin ++ CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hive-plugin ++ export CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-atlas-plugin ++ CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-atlas-plugin ++ export CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-solr-plugin ++ CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-solr-plugin ++ export CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hdfs-plugin ++ CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hdfs-plugin ++ export CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-knox-plugin ++ CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-knox-plugin ++ export CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-yarn-plugin ++ CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-yarn-plugin ++ export CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-ozone-plugin ++ CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-ozone-plugin ++ export CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kafka-plugin ++ CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kafka-plugin + locate_cdh_java_home + '[' -z '' ']' + '[' -z /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils ']' + local BIGTOP_DETECT_JAVAHOME= + for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"' + '[' -e /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome ']' + BIGTOP_DETECT_JAVAHOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome + break + '[' -z /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome ']' + . 
/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome ++ BIGTOP_DEFAULTS_DIR=/etc/default ++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']' ++ JAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/jdk-11' '/usr/lib/jvm/java-11-oracle') ++ OPENJAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/java-11' '/usr/lib/jvm/jdk-11' '/usr/lib64/jvm/jdk-11') ++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle') ++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk' '/usr/lib64/jvm/java-1.8.0-openjdk' '/usr/lib64/jvm/java-8-openjdk') ++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk') ++ case ${BIGTOP_JAVA_MAJOR} in ++ JAVA_HOME_CANDIDATES=(${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]} ${JAVA11_HOME_CANDIDATES[@]} ${OPENJAVA11_HOME_CANDIDATES[@]}) ++ '[' -z '' ']' ++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}' +++ ls -rvd /usr/java/jdk1.8.0_232-cloudera ++ for candidate in '`ls -rvd ${candidate_regex}* 2>/dev/null`' ++ '[' -e /usr/java/jdk1.8.0_232-cloudera/bin/java ']' ++ export JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera ++ JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera ++ break 2 + get_java_major_version JAVA_MAJOR + '[' -z /usr/java/jdk1.8.0_232-cloudera/bin/java ']' ++ /usr/java/jdk1.8.0_232-cloudera/bin/java -version + local 'VERSION_STRING=Picked up JAVA_TOOL_OPTIONS: -Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/impala_impala-IMPALAD-1c158e9384a7bfd9b15bf8f3030426fd_pid{{PID}}.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh openjdk version "1.8.0_232" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode)' + local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+' + [[ Picked up JAVA_TOOL_OPTIONS: -Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/impala_impala-IMPALAD-1c158e9384a7bfd9b15bf8f3030426fd_pid{{PID}}.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh openjdk version "1.8.0_232" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]] + eval JAVA_MAJOR=8 ++ JAVA_MAJOR=8 + '[' 8 -lt 8 ']' + verify_java_home + '[' -z /usr/java/jdk1.8.0_232-cloudera ']' + echo JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera + export IMPALA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala + IMPALA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala + export IMPALA_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf + IMPALA_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf + export HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/hadoop-conf + HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/hadoop-conf + export HIVE_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/hive-conf + HIVE_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/hive-conf + export 
HBASE_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/hbase-conf + HBASE_CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/hbase-conf ++ replace_pid -Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError '-XX:HeapDumpPath=/tmp/impala_impala-IMPALAD-1c158e9384a7bfd9b15bf8f3030426fd_pid{{PID}}.hprof' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh ++ echo -Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError '-XX:HeapDumpPath=/tmp/impala_impala-IMPALAD-1c158e9384a7bfd9b15bf8f3030426fd_pid{{PID}}.hprof' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh ++ sed 's#{{PID}}#9925#g' + export 'JAVA_TOOL_OPTIONS=-Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/impala_impala-IMPALAD-1c158e9384a7bfd9b15bf8f3030426fd_pid9925.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh' + JAVA_TOOL_OPTIONS='-Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/impala_impala-IMPALAD-1c158e9384a7bfd9b15bf8f3030426fd_pid9925.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh' + [[ -d /var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/hbase-conf ]] + JDBC_JARS=/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.2.14.jre7.jar:/usr/share/java/oracle-connector-java.jar + [[ -z '' ]] + export AUX_CLASSPATH=/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.2.14.jre7.jar:/usr/share/java/oracle-connector-java.jar + AUX_CLASSPATH=/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.2.14.jre7.jar:/usr/share/java/oracle-connector-java.jar + [[ -z '' ]] + export CLASSPATH=/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.2.14.jre7.jar:/usr/share/java/oracle-connector-java.jar + CLASSPATH=/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.2.14.jre7.jar:/usr/share/java/oracle-connector-java.jar + [[ -n '' ]] + FLAG_FILE=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf/impalad_flags + USE_DEBUG_BUILD=false + replace_conf_dir + echo CONF_DIR=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD + echo CMF_CONF_DIR= + EXCLUDE_CMF_FILES=('cloudera-config.sh' 'hue.sh' 'impala.sh' 'sqoop.sh' 'supervisor.conf' 'config.zip' 'proc.json' '*.log' '*.keytab' '*jceks' '*bcfks' 'supervisor_status') ++ printf '! -name %s ' cloudera-config.sh hue.sh impala.sh sqoop.sh supervisor.conf config.zip proc.json '*.log' impala.keytab '*jceks' '*bcfks' supervisor_status + find /var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD -type f '!' -path '/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/logs/*' '!' -name cloudera-config.sh '!' -name hue.sh '!' -name impala.sh '!' -name sqoop.sh '!' -name supervisor.conf '!' -name config.zip '!' -name proc.json '!' -name '*.log' '!' -name impala.keytab '!' -name '*jceks' '!' -name '*bcfks' '!' 
-name supervisor_status -exec perl -pi -e 's#\{\{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD#g' '{}' ';' + make_scripts_executable + find /var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';' + perl -pi -e 's#\{\{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD#g' /var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf/impalad_flags + perl -pi -e 's#\{\{CGROUP_ROOT_CPU}}#/sys/fs/cgroup/cpu,cpuacct#g' /var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf/impalad_flags + '[' -f /var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf/.htpasswd ']' + chmod 600 /var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf/.htpasswd + false + export IMPALA_BIN=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala/sbin-retail + IMPALA_BIN=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala/sbin-retail + [[ true = '' ]] + '[' impalad = impalad ']' + exec /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala/../../bin/impalad --flagfile=/var/run/cloudera-scm-agent/process/1546340529-impala-IMPALAD/impala-conf/impalad_flags WARNING: Logging before InitGoogleLogging() is written to STDERR W0428 08:18:08.336212 9925 global-flags.cc:383] Ignoring removed flag llama_callback_port W0428 08:18:08.336370 9925 global-flags.cc:379] Ignoring removed flag enable_rm W0428 08:18:08.336386 9925 global-flags.cc:373] Ignoring removed flag disable_admission_control Redirecting stderr to /var/log/impalad/impalad.ERROR ******************************************************************************************* *************/var/log/impalad/impalad.ERROR **************************************** Log file created at: 2021/04/28 08:18:08 Running on machine: server1.local Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg E0428 08:18:08.337855 9925 logging.cc:148] stderr will be logged to this file. log4j:ERROR Could not instantiate class [org.cloudera.log4j.redactor.RedactorAppender]. 
java.lang.ClassNotFoundException: org.cloudera.log4j.redactor.RedactorAppender at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198) at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327) at org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580) at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526) at org.apache.log4j.LogManager.<clinit>(LogManager.java:127) at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66) at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72) at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45) at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150) at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124) at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357) at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155) at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:273) at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:170) log4j:ERROR Could not instantiate appender named "redactorForRootLogger". E0428 08:18:11.163729 9925 RequestPoolService.java:257] Unable to stop AllocationFileLoaderService after failed start. Java exception follows: java.lang.IllegalStateException at com.google.common.base.Preconditions.checkState(Preconditions.java:492) at org.apache.impala.util.FileWatchService.stop(FileWatchService.java:136) at org.apache.impala.util.RequestPoolService.stopInternal(RequestPoolService.java:281) at org.apache.impala.util.RequestPoolService.start(RequestPoolService.java:255) F0428 08:18:11.165186 9925 request-pool-service.cc:148] RuntimeException: org.apache.impala.yarn.server.resourcemanager.$ CAUSED BY: AllocationConfigurationException: Could get past last queue placement rule without assigning . Impalad exiting. *** Check failure stack trace: *** @ 0x2c3d9dc @ 0x2c3f281 @ 0x2c3d3b6 @ 0x2c4097d @ 0x11ec827 @ 0x1123c8c @ 0x1124a15 @ 0x125be0b @ 0xb78603 @ 0x7fe694786554 @ 0xbf8956 Picked up JAVA_TOOL_OPTIONS: -Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/impala_impal$ Wrote minidump to /var/log/impala-minidumps/impalad/4f5f61c9-1297-4000-da909ea9-e4a71b29.dmp
04-27-2021
04:36 PM
I have 5 DataNodes where the Impala daemons were installed, and all of them show this error:
**************************************************************************************
log4j:ERROR Could not instantiate appender named "redactorForRootLogger".
E0426 13:25:49.818779 12933 RequestPoolService.java:257] Unable to stop AllocationFileLoaderService after failed start. Java exception follows:
java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:492)
    at org.apache.impala.util.FileWatchService.stop(FileWatchService.java:136)
    at org.apache.impala.util.RequestPoolService.stopInternal(RequestPoolService.java:281)
    at org.apache.impala.util.RequestPoolService.start(RequestPoolService.java:255)
F0426 13:25:49.820109 12933 request-pool-service.cc:148] RuntimeException: org.apache.impala.yarn.server.resourcemanager.schedule$
CAUSED BY: AllocationConfigurationException: Could get past last queue placement rule without assigning . Impalad exiting.
**************************************************************************************
Platform: CDP Private Cloud Base 7.1.5
What can I do to solve this error? Thanks to this community for the support you can give me.
Labels: Apache Impala
04-20-2021
06:37 AM
Hi Damin Sue, thanks for your support.
Path of the process log: /run/cloudera-scm-agent/process/1546338320-HUE-test-db-connection/logs
File: 1546338320-HUE-test-db-connection
Contents of "stderr.log":
**********************
[18/Apr/2021 20:08:03 +0000] 53855 MainThread redactor INFO Started launcher: /opt/cloudera/cm-agent/service/hue/hue.sh is_db_alive
[18/Apr/2021 20:08:03 +0000] 53855 MainThread redactor ERROR Redaction rules file doesn't exist, not redacting logs. file: redaction-rules.json, directory: /run/cloudera-scm-agent/process/1546338320-HUE-test-db-connection
[18/Apr/2021 20:08:03 +0000] 53855 MainThread redactor INFO Re-exec watcher: /opt/cloudera/cm-agent/bin/cm proc_watcher 53864
+ date
+ date
Sun Apr 18 20:08:03 -04 2021
++ dirname /opt/cloudera/cm-agent/service/hue/hue.sh
+ cloudera_config=/opt/cloudera/cm-agent/service/hue
++ cd /opt/cloudera/cm-agent/service/hue/../common
++ pwd
+ cloudera_config=/opt/cloudera/cm-agent/service/common
+ . /opt/cloudera/cm-agent/service/common/cloudera-config.sh
++ : /opt/cloudera/cm
++ export CLOUDERA_DIR
++ set -x
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/meta/cdh_env.sh ']'
+ OLD_IFS=' '
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)
+ IFS=' '
+ COUNT=1
++ seq 1 1
+ for i in '`seq 1 $COUNT`'
+ SCRIPT=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/meta/cdh_env.sh
+ PARCEL_DIRNAME=CDH-7.1.5-1.cdh7.1.5.p0.7431829
+ . /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/meta/cdh_env.sh
++ CDH_DIRNAME=CDH-7.1.5-1.cdh7.1.5.p0.7431829
++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop
++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop
++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-0.20-mapreduce
++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-0.20-mapreduce
++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-hdfs
++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-hdfs
++ export CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-ozone
++ CDH_OZONE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-ozone
++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-httpfs
++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-httpfs
++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-mapreduce
++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-mapreduce
++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-yarn
++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-yarn
++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase
++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase
++ export CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_filesystem
++ CDH_HBASE_FILESYSTEM_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_filesystem
++ export CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_connectors
++ CDH_HBASE_CONNECTORS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase_connectors
++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zookeeper
++ CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zookeeper
++ export CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zeppelin
++ CDH_ZEPPELIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/zeppelin
++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive
++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive
++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue
++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue
++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/oozie
++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/oozie
++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop
++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop
++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive-hcatalog
++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hive-hcatalog
++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/sentry
++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/sentry
++ export JSVC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils
++ JSVC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils
++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop/bin/hadoop
++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop/bin/hadoop
++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala
++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/impala
++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/solr
++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/solr
++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase-solr
++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hbase-solr
++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/search
++ SEARCH_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/search
++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/spark
++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/spark
++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-kms
++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hadoop-kms
++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/parquet
++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/parquet
++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/avro
++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/avro
++ export CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kafka
++ CDH_KAFKA_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kafka
++ export CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/schemaregistry
++ CDH_SCHEMA_REGISTRY_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/schemaregistry
++ export CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager
++ CDH_STREAMS_MESSAGING_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager
++ export CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager_ui
++ CDH_STREAMS_MESSAGING_MANAGER_UI_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_messaging_manager_ui
++ export CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_replication_manager
++ CDH_STREAMS_REPLICATION_MANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/streams_replication_manager
++ export CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/cruise_control
++ CDH_CRUISE_CONTROL_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/cruise_control
++ export CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/knox
++ CDH_KNOX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/knox
++ export CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kudu
++ CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/kudu
++ export CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-admin
++ CDH_RANGER_ADMIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-admin
++ export CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-tagsync
++ CDH_RANGER_TAGSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-tagsync
++ export CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-usersync
++ CDH_RANGER_USERSYNC_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-usersync
++ export CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kms
++ CDH_RANGER_KMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kms
++ export CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-raz
++ CDH_RANGER_RAZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-raz
++ export CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-rms
++ CDH_RANGER_RMS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-rms
++ export CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/atlas
++ CDH_ATLAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/atlas
++ export CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/tez
++ CDH_TEZ_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/tez
++ export CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/phoenix
++ CDH_PHOENIX_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/phoenix
++ export DAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/data_analytics_studio
++ DAS_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/data_analytics_studio
++ export QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/queuemanager
++ QUEUEMANAGER_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/queuemanager
++ export CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hbase-plugin
++ CDH_RANGER_HBASE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hbase-plugin
++ export CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hive-plugin
++ CDH_RANGER_HIVE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hive-plugin
++ export CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-atlas-plugin
++ CDH_RANGER_ATLAS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-atlas-plugin
++ export CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-solr-plugin
++ CDH_RANGER_SOLR_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-solr-plugin
++ export CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hdfs-plugin
++ CDH_RANGER_HDFS_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-hdfs-plugin
++ export CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-knox-plugin
++ CDH_RANGER_KNOX_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-knox-plugin
++ export CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-yarn-plugin
++ CDH_RANGER_YARN_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-yarn-plugin
++ export CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-ozone-plugin
++ CDH_RANGER_OZONE_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-ozone-plugin
++ export CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kafka-plugin
++ CDH_RANGER_KAFKA_PLUGIN_HOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/ranger-kafka-plugin
+ locate_cdh_java_home
+ '[' -z '' ']'
+ '[' -z /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils ']'
+ local BIGTOP_DETECT_JAVAHOME=
+ for candidate in '"${JSVC_HOME}"' '"${JSVC_HOME}/.."' '"/usr/lib/bigtop-utils"' '"/usr/libexec"'
+ '[' -e /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome ']'
+ BIGTOP_DETECT_JAVAHOME=/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome
+ break
+ '[' -z /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome ']'
+ . /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/bigtop-utils/bigtop-detect-javahome
++ BIGTOP_DEFAULTS_DIR=/etc/default
++ '[' -n /etc/default -a -r /etc/default/bigtop-utils ']'
++ JAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/jdk-11' '/usr/lib/jvm/java-11-oracle')
++ OPENJAVA11_HOME_CANDIDATES=('/usr/java/jdk-11' '/usr/lib/jvm/java-11' '/usr/lib/jvm/jdk-11' '/usr/lib64/jvm/jdk-11')
++ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
++ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk' '/usr/lib64/jvm/java-1.8.0-openjdk' '/usr/lib64/jvm/java-8-openjdk')
++ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
++ case ${BIGTOP_JAVA_MAJOR} in
++ JAVA_HOME_CANDIDATES=(${JAVA8_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]} ${JAVA11_HOME_CANDIDATES[@]} ${OPENJAVA11_HOME_CANDIDATES[@]})
++ '[' -z '' ']'
++ for candidate_regex in '${JAVA_HOME_CANDIDATES[@]}'
+++ ls -rvd /usr/java/jdk1.8.0_232-cloudera
++ for candidate in '`ls -rvd ${candidate_regex}* 2>/dev/null`'
++ '[' -e /usr/java/jdk1.8.0_232-cloudera/bin/java ']'
++ export JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera
++ JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera
++ break 2
+ get_java_major_version JAVA_MAJOR
+ '[' -z /usr/java/jdk1.8.0_232-cloudera/bin/java ']'
++ /usr/java/jdk1.8.0_232-cloudera/bin/java -version
+ local 'VERSION_STRING=openjdk version "1.8.0_232" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode)'
+ local 'RE_JAVA=[java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+'
+ [[ openjdk version "1.8.0_232" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode) =~ [java|openjdk][[:space:]]version[[:space:]]\"1\.([0-9][0-9]*)\.?+ ]]
+ eval JAVA_MAJOR=8
++ JAVA_MAJOR=8
+ '[' 8 -lt 8 ']'
+ verify_java_home
+ '[' -z /usr/java/jdk1.8.0_232-cloudera ']'
+ echo JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera
+ export HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/1546338320-HUE-test-db-connection
+ HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/1546338320-HUE-test-db-connection
+ export HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/1546338320-HUE-test-db-connection/hadoop-conf
+ HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/1546338320-HUE-test-db-connection/hadoop-conf
+ '[' python27_version_check = is_db_alive ']'
+ '[' -n 7 ']'
+ '[' 7 -ge 6 ']'
+ python27_version_check
+ python27_exists=0
+ python_vers=("/usr/bin" "/usr/local/python27/bin" "/opt/rh/python27/root/usr/bin")
+ for binpath in '${python_vers[@]}'
+ pybin=/usr/bin/python2.7
+ '[' '!' -e /usr/bin/python2.7 ']'
+ [[ /usr/bin == \/\o\p\t\/\r\h\/\p\y\t\h\o\n\2\7* ]]
++ run_python /usr/bin/python2.7
++ /usr/bin/python2.7 --version
++ '[' '!' 0 -eq 0 ']'
++ echo 0
+ out=0
+ '[' 0 -eq 0 ']'
+ export PATH=/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/kerberos/bin
+ PATH=/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/kerberos/bin
+ python27_exists=1
+ break
+ '[' 1 -eq 0 ']'
+ '[' -e /usr/share/oracle/instantclient/lib ']'
+ add_postgres_to_pythonpath
+ grep -q '^\s*engine\s*=\s*postgres\+' hue.ini
+ '[' 7 -ge 6 ']'
++ dirname /opt/cloudera/cm-agent/service/hue/hue.sh
+ psycopg2=/opt/cloudera/cm-agent/service/hue/psycopg2
+ '[' -d /opt/cloudera/cm-agent/service/hue/psycopg2 ']'
+ '[' -z ']'
++ dirname /opt/cloudera/cm-agent/service/hue/hue.sh
+ export PYTHONPATH=/opt/cloudera/cm-agent/service/hue
+ PYTHONPATH=/opt/cloudera/cm-agent/service/hue
+++ which python2.7
++ /usr/bin/python2.7 -c 'import psycopg2 ; print psycopg2.__version__.rsplit()[0]'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/cloudera/cm-agent/service/hue/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import ( # noqa
ImportError: libpq.so.5: cannot open shared object file: No such file or directory
+ PSYCOPG2_VERSION=
+ version_gt 2.5.4
++ echo 2.5.4
++ tr ' ' '\n'
++ sort -rV
++ head -n 1
+ test 2.5.4 == 2.5.4 -a 2.5.4 '!=' ''
+ echo 'ERROR: Unable to find psycopg2 2.5.4 or higher version. You may need to manually install it. See http://tiny.cloudera.com/cdh6-hue-psycopg2'
+ exit 1
*********END******************************
I see a problem importing psycopg2, yet it appears to be installed:
python-psycopg2.x86_64 2.5.1-4.el7 @rhel-7-server-rpms
python2-psycogreen.noarch 1.0-1.el7 epel
python3-psycopg2.x86_64 2.7.7-2.el7 epel
python3-psycopg2-tests.x86_64 2.7.7-2.el7 epel
python34-psycopg2.x86_64 2.7.7-2.el7 epel
python34-psycopg2-tests.x86_64 2.7.7-2.el7 epel
Right now I'm still looking for the solution. Thanks for the support; grateful to the community.
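Two clues stand out in the log: the bundled psycopg2 fails with a missing libpq.so.5, and the installed python-psycopg2 RPM (2.5.1) is below the 2.5.4 minimum the script checks for. Here is a minimal sketch of what I plan to try, assuming a RHEL/CentOS 7 host with root access; the exact psycopg2 pin is an assumption on my part, chosen only to satisfy the >= 2.5.4 check while staying on a Python 2.7-compatible release:
# Check whether the PostgreSQL client library that psycopg2 links against is present
ldconfig -p | grep libpq
# If libpq.so.5 is absent, install the PostgreSQL client libraries
yum install -y postgresql-libs
# Install a new enough psycopg2 for the Python 2.7 interpreter the agent uses
yum install -y python-pip
pip install 'psycopg2==2.7.5' --ignore-installed
# Verify the import now works under the same interpreter and meets the version check
/usr/bin/python2.7 -c 'import psycopg2; print(psycopg2.__version__)'
If that prints a version of 2.5.4 or higher without the ImportError, the is_db_alive test should get further.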
04-19-2021
12:01 PM
When I install Hue with Cloudera Manager, the installation stops; searching the log shows a permissions error.
======log cloudera manager======
2021-04-19 10:30:55,858 INFO CommandPusher-1:com.cloudera.server.cmf.CommandPusherThread: Acquired lease lock on DbCommand:1546338527
2021-04-19 10:30:55,860 INFO CommandPusher-1:com.cloudera.cmf.service.AbstractOneOffHostCommand: Unsuccessful 'HueTestDatabaseConnection'
2021-04-19 10:30:55,861 INFO CommandPusher-1:com.cloudera.cmf.service.AbstractDbConnectionTestCommand: Command exited with code: 1
2021-04-19 10:30:55,861 INFO CommandPusher-1:com.cloudera.cmf.service.AbstractDbConnectionTestCommand:
+ '[' syncdb = is_db_alive ']'
+ '[' ldaptest = is_db_alive ']'
+ exec /opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/bin/hue is_db_alive
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/bin/hue", line 9, in <module>
    from pkg_resources import load_entry_point
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3251, in <module>
    @_call_aside
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3235, in _call_aside
    f(*args, **kwargs)
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3264, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 574, in _build_master
    ws = cls()
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 567, in __init__
    self.add_entry(entry)
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 623, in add_entry
    for dist in find_distributions(entry, True):
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2065, in find_on_path
    for dist in factory(fullpath):
  File "/opt/cloudera/parcels/CDH-7.1.5-1.cdh7.1.5.p0.7431829/lib/hue/build/env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2127, in distributions_from_metadata
    if len(os.listdir(path)) == 0:
OSError: [Errno 13] Permission denied: '/usr/lib64/python2.7/site-packages/psycopg2-2.7.5.dist-info'
2021-04-19 10:30:55,861 ERROR CommandPusher-1:com.cloudera.cmf.model.DbCommand: Command 1546338527(HueTestDatabaseConnection) has completed. finalstate:FINISHED, success:false, msg:Unexpected error. Unable to verify database connection.
2021-04-19 10:30:55,861 INFO CommandPusher-1:com.cloudera.cmf.command.components.CommandStorage: Invoked delete temp files for command:DbCommand{id=1546338527, name=HueTestDatabaseConnection, host=server.local} at dir:/var/lib/cloudera-scm-server/temp/commands/1546338527
2021-04-19 10:30:56,210 INFO scm-web-34387:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/dbTestConn/checkConnectionResult
2021-04-19 10:30:56,213 INFO scm-web-34387:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/dbTestConn/checkConnectionResult, Status:200
2021-04-19 10:31:06,697 INFO agentServer-27149:com.cloudera.server.common.MonitoringThreadPool: agentServer: execution stats: average=15ms, min=1ms, max=40ms.
2021-04-19 10:31:06,697 INFO agentServer-27149:com.cloudera.server.common.MonitoringThreadPool: agentServer: waiting in queue stats: average=0ms, min=0ms, max=1ms.
**************END - Error Installing HUE ********************************************************************
What should I do to fix this and install Hue?
Thanks for the help, community!
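For anyone who hits the same thing: the traceback points at a site-packages metadata directory that the Hue process user cannot list. A minimal sketch of the check and fix I intend to try, assuming root access and that the path in the traceback is the only one affected:
# Show the current ownership/permissions on the directory the launcher cannot read
ls -ld /usr/lib64/python2.7/site-packages/psycopg2-2.7.5.dist-info
# Allow non-owner users (such as the hue service user) to read and traverse it
chmod -R o+rX /usr/lib64/python2.7/site-packages/psycopg2-2.7.5.dist-info
After that, re-running the "Test Database Connection" step in Cloudera Manager should get past the OSError.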
Labels:
11-22-2019
07:46 AM
I have a problem: none of the Base URLs work. For example: http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.0 This happens with every Base URL. Question: have the repositories moved somewhere else? Before, I could browse other versions through the directory listing. Could the Cloudera-Hortonworks merger be the reason these repositories were changed? I appreciate your help, jsensharma!
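For reference, this is how I am checking whether a Base URL is still alive from the cluster (assuming curl is installed; the hdp.repo filename is the file the usual HDP repo layout places at that level):
# A 200 response means the repo still exists; 403/404 suggests it has moved or now requires authentication
curl -I http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.0/
curl -I http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.0/hdp.repo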
11-21-2019
06:04 AM
Hello community:
I need your help, please. Do you know where exactly all the repositories for Hadoop (2.6.x, 2.7.x, and 3.x) and Ambari (2.6.x and 2.7.x) are? All the URLs are broken. Thanks in advance.
Tags:
- Ambari
- repository
Labels:
- Apache Ambari
- Apache Hadoop