Member since: 12-04-2019
Posts: 10
Kudos Received: 0
Solutions: 0
07-25-2022
11:01 AM
This image could be easier to read.
07-25-2022
10:57 AM
Hi @rki_ , yes, I have the gateway on a worker node. I tried to run the command on the gateway instance, but it fails with the following log:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/07/25 19:53:30 INFO SparkContext: Running Spark version 3.2.2
22/07/25 19:53:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/07/25 19:53:30 INFO ResourceUtils: ==============================================================
22/07/25 19:53:30 INFO ResourceUtils: No custom resources configured for spark.driver.
22/07/25 19:53:30 INFO ResourceUtils: ==============================================================
22/07/25 19:53:30 INFO SparkContext: Submitted application: main.py
22/07/25 19:53:30 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
22/07/25 19:53:30 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
22/07/25 19:53:30 INFO ResourceProfileManager: Added ResourceProfile id: 0
22/07/25 19:53:30 INFO SecurityManager: Changing view acls to: centos
22/07/25 19:53:30 INFO SecurityManager: Changing modify acls to: centos
22/07/25 19:53:30 INFO SecurityManager: Changing view acls groups to:
22/07/25 19:53:30 INFO SecurityManager: Changing modify acls groups to:
22/07/25 19:53:30 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(centos); groups with view permissions: Set(); users with modify permissions: Set(centos); groups with modify permissions: Set()
22/07/25 19:53:30 INFO Utils: Successfully started service 'sparkDriver' on port 45247.
22/07/25 19:53:30 INFO SparkEnv: Registering MapOutputTracker
22/07/25 19:53:30 INFO SparkEnv: Registering BlockManagerMaster
22/07/25 19:53:30 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/07/25 19:53:30 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/07/25 19:53:30 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
22/07/25 19:53:30 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-9b0423bb-8967-4de8-9a20-2fa0c35025ed
22/07/25 19:53:30 INFO MemoryStore: MemoryStore started with capacity 408.9 MiB
22/07/25 19:53:30 INFO SparkEnv: Registering OutputCommitCoordinator
22/07/25 19:53:30 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/07/25 19:53:30 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://ip-10-0-1-113.eu-central-1.compute.internal:4040
22/07/25 19:53:31 INFO DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
22/07/25 19:53:31 INFO Client: Requesting a new application from cluster with 3 NodeManagers
22/07/25 19:53:31 INFO Configuration: resource-types.xml not found
22/07/25 19:53:31 INFO ResourceUtils: Unable to find 'resource-types.xml'.
22/07/25 19:53:31 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (60399 MB per container)
22/07/25 19:53:31 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
22/07/25 19:53:31 INFO Client: Setting up container launch context for our AM
22/07/25 19:53:31 INFO Client: Setting up the launch environment for our AM container
22/07/25 19:53:31 INFO Client: Preparing resources for our AM container
22/07/25 19:53:31 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
22/07/25 19:53:32 INFO Client: Uploading resource file:/tmp/spark-c0176ed8-8bb8-450e-8010-40b386a4b0c6/__spark_libs__5907380933161926429.zip -> file:/home/centos/.sparkStaging/application_1658770532855_0002/__spark_libs__5907380933161926429.zip
22/07/25 19:53:32 INFO Client: Uploading resource file:/usr/local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip -> file:/home/centos/.sparkStaging/application_1658770532855_0002/pyspark.zip
22/07/25 19:53:32 INFO Client: Uploading resource file:/usr/local/lib/python3.6/site-packages/pyspark/python/lib/py4j-0.10.9.5-src.zip -> file:/home/centos/.sparkStaging/application_1658770532855_0002/py4j-0.10.9.5-src.zip
22/07/25 19:53:32 INFO Client: Uploading resource file:/tmp/spark-c0176ed8-8bb8-450e-8010-40b386a4b0c6/__spark_conf__6393843623349520327.zip -> file:/home/centos/.sparkStaging/application_1658770532855_0002/__spark_conf__.zip
22/07/25 19:53:32 INFO SecurityManager: Changing view acls to: centos
22/07/25 19:53:32 INFO SecurityManager: Changing modify acls to: centos
22/07/25 19:53:32 INFO SecurityManager: Changing view acls groups to:
22/07/25 19:53:32 INFO SecurityManager: Changing modify acls groups to:
22/07/25 19:53:32 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(centos); groups with view permissions: Set(); users with modify permissions: Set(centos); groups with modify permissions: Set()
22/07/25 19:53:32 INFO Client: Submitting application application_1658770532855_0002 to ResourceManager
22/07/25 19:53:32 INFO YarnClientImpl: Submitted application application_1658770532855_0002
22/07/25 19:53:33 INFO Client: Application report for application_1658770532855_0002 (state: FAILED)
22/07/25 19:53:33 INFO Client:
client token: N/A
diagnostics: Application application_1658770532855_0002 failed 2 times due to AM Container for appattempt_1658770532855_0002_000002 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2022-07-25 19:53:32.848]File file:/home/centos/.sparkStaging/application_1658770532855_0002/pyspark.zip does not exist
java.io.FileNotFoundException: File file:/home/centos/.sparkStaging/application_1658770532855_0002/pyspark.zip does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1022)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:723)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:456)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:248)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:241)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:229)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://ip-10-0-1-113.eu-central-1.compute.internal:8088/cluster/app/application_1658770532855_0002 Then click on links to logs of each attempt.
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1658771612544
final status: FAILED
tracking URL: http://ip-10-0-1-113.eu-central-1.compute.internal:8088/cluster/app/application_1658770532855_0002
user: centos
22/07/25 19:53:33 INFO Client: Deleted staging directory file:/home/centos/.sparkStaging/application_1658770532855_0002
22/07/25 19:53:33 ERROR YarnClientSchedulerBackend: The YARN application has already ended! It might have been killed or the Application Master may have failed to start. Check the YARN application logs for more details.
22/07/25 19:53:33 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Application application_1658770532855_0002 failed 2 times due to AM Container for appattempt_1658770532855_0002_000002 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2022-07-25 19:53:32.848]File file:/home/centos/.sparkStaging/application_1658770532855_0002/pyspark.zip does not exist
java.io.FileNotFoundException: File file:/home/centos/.sparkStaging/application_1658770532855_0002/pyspark.zip does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1022)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:723)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:456)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:248)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:241)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:229)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://ip-10-0-1-113.eu-central-1.compute.internal:8088/cluster/app/application_1658770532855_0002 Then click on links to logs of each attempt.
. Failing the application.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:97)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:748)
22/07/25 19:53:33 INFO SparkUI: Stopped Spark web UI at http://ip-10-0-1-113.eu-central-1.compute.internal:4040
22/07/25 19:53:33 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
22/07/25 19:53:33 INFO YarnClientSchedulerBackend: Shutting down all executors
22/07/25 19:53:33 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
22/07/25 19:53:33 INFO YarnClientSchedulerBackend: YARN client scheduler backend Stopped
22/07/25 19:53:33 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/07/25 19:53:33 INFO MemoryStore: MemoryStore cleared
22/07/25 19:53:33 INFO BlockManager: BlockManager stopped
22/07/25 19:53:33 INFO BlockManagerMaster: BlockManagerMaster stopped
22/07/25 19:53:33 WARN MetricsSystem: Stopping a MetricsSystem that is not running
22/07/25 19:53:33 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/07/25 19:53:33 INFO SparkContext: Successfully stopped SparkContext
Traceback (most recent call last):
File "/home/centos/spark/main.py", line 12, in <module>
spark = SparkSession.builder.getOrCreate()
File "/usr/local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/session.py", line 228, in getOrCreate
File "/usr/local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/context.py", line 392, in getOrCreate
File "/usr/local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/context.py", line 147, in __init__
File "/usr/local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/context.py", line 209, in _do_init
File "/usr/local/lib/python3.6/site-packages/pyspark/python/lib/pyspark.zip/pyspark/context.py", line 329, in _initialize_context
File "/usr/local/lib/python3.6/site-packages/pyspark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1586, in __call__
File "/usr/local/lib/python3.6/site-packages/pyspark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Application application_1658770532855_0002 failed 2 times due to AM Container for appattempt_1658770532855_0002_000002 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2022-07-25 19:53:32.848]File file:/home/centos/.sparkStaging/application_1658770532855_0002/pyspark.zip does not exist
java.io.FileNotFoundException: File file:/home/centos/.sparkStaging/application_1658770532855_0002/pyspark.zip does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1022)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:723)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:456)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:248)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:241)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:229)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://ip-10-0-1-113.eu-central-1.compute.internal:8088/cluster/app/application_1658770532855_0002 Then click on links to logs of each attempt.
. Failing the application.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:97)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:748)
22/07/25 19:53:33 INFO ShutdownHookManager: Shutdown hook called
22/07/25 19:53:33 INFO ShutdownHookManager: Deleting directory /tmp/spark-c0176ed8-8bb8-450e-8010-40b386a4b0c6
22/07/25 19:53:33 INFO ShutdownHookManager: Deleting directory /tmp/spark-17e0ad88-9fa7-421d-913a-ffb898fe2367
I really don't understand what the problem could be.
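For context on where this might be going wrong (an observation, not part of the original post): the upload lines above stage pyspark.zip under file:/home/centos/.sparkStaging/..., i.e. on the gateway's local file system, so the NodeManager that localizes the AM container cannot find it. That usually means the client is not reading the cluster's core-site.xml (fs.defaultFS). A minimal sketch of the check, assuming the client configs are deployed under /etc/hadoop/conf (a hypothetical path):

# Sketch only; /etc/hadoop/conf is an assumed location for the deployed client configs.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
# Should print hdfs://<namenode>:8020 rather than file:/// once the configs are picked up.
hdfs getconf -confKey fs.defaultFS
# Re-run the submission after the default filesystem resolves to HDFS.
spark-submit --master yarn main.py

With fs.defaultFS pointing at HDFS, the .sparkStaging directory is created on HDFS and the uploaded archives become visible to the YARN NodeManagers.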
07-25-2022
07:44 AM
I'm trying to submit a job to the YARN cluster with this command:
spark-submit --deploy-mode cluster --master yarn main.py
and it returns this message:
INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
The server address is not correct, but yarn-site.xml contains the correct FQDN. How can I solve this?
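A hedged sketch of a first check (paths below are assumptions, not from the post): a client that tries 0.0.0.0:8032 is normally reading a default yarn-site.xml rather than the edited one, so it is worth confirming which configuration directory the submitting shell actually exposes to spark-submit:

# Assumed client-config location; adjust to wherever the edited yarn-site.xml lives.
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR  YARN_CONF_DIR=$YARN_CONF_DIR"
grep -A1 'yarn.resourcemanager' /etc/hadoop/conf/yarn-site.xml
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
spark-submit --deploy-mode cluster --master yarn main.py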
Labels: Cloudera Data Platform (CDP)
07-22-2022
05:37 AM
Hi everyone, I tried to run a PySpark job using Oozie via Hue, but an error occurred:
[22/Jul/2022 03:03:48 -0700] access INFO 87.10.222.78 admin - "GET /500 HTTP/1.1" returned in 8ms 200 705
[22/Jul/2022 03:03:48 -0700] base WARNING Not Found: /jobbrowser/apps
[22/Jul/2022 03:03:48 -0700] decorators INFO args: (True,)
[22/Jul/2022 03:03:48 -0700] decorators INFO AXES: Calling decorated function: dt_login
[22/Jul/2022 03:03:48 -0700] views WARNING User admin is bypassing the load balancer
[22/Jul/2022 03:03:48 -0700] views WARNING User admin is using Hue 3 UI
[22/Jul/2022 03:03:48 -0700] access WARNING 87.10.222.78 admin - "GET /jobbrowser/apps HTTP/1.1" --- 404 not found
[22/Jul/2022 03:03:48 -0700] access INFO 87.10.222.78 admin - "POST /oozie/editor/workflow/submit/7 HTTP/1.1" returned in 1245ms 200 83
[22/Jul/2022 03:03:48 -0700] submission2 INFO Started: Submission for job '0000000-220722115331462-oozie-oozi-W'. -- 0000000-220722115331462-oozie-oozi-W
[22/Jul/2022 03:03:48 -0700] resource DEBUG PUT None http://FQDN:11000/oozie/v1/job/0000000-220722115331462-oozie-oozi-W?action=start&timezone=America%2FLos_Angeles&user.name=hue&doAs=admin <?xml version="1.0" encoding="UTF-8"?>
<configuration>
<property>
<name>user.name</name>
<value><![CDATA[admin]]></value>
</property>
</configuration>
returned in 370ms 200 0
[22/Jul/2022 03:03:47 -0700] submission2 INFO Submitted: Submission for job '0000000-220722115331462-oozie-oozi-W'. -- 0000000-220722115331462-oozie-oozi-W
[22/Jul/2022 03:03:47 -0700] resource DEBUG POST None http://FQDN:11000/oozie/v1/jobs?timezone=America%2FLos_Angeles&user.name=hue&doAs=admin <?xml version="1.0" encoding="UTF-8"?>
<configuration>
<property>
<name>dryrun</name>
<value><![CDATA[False]]></value>
</property>
<property>
<name>hue-id-w</name>
<value><![CDATA[7]]></value>
</property>
<property>
<name>jobTracker</name>
<value><![CDATA[FQDN:8032]]></value>
</property>
<property>
<name>nameNode</name>
<value><![CDATA[hdfs://FQDN:8020]]></value>
</property>
<property>
<name>oozie.use.system.libpath</name>
<value><![CDATA[True]]></value>
</property>
<property>
<name>oozie.wf.application.path</name>
<value><![CDATA[hdfs://FQDN:8020/user/hue/oozie/workspaces/hue-oozie-1658408731.88]]></value>
</property>
<property>
<name>security_enabled</name>
<value><![CDATA[False]]></value>
</property>
<property>
<name>send_email</name>
<value><![CDATA[False]]></value>
</property>
<property>
<name>user.name</name>
<value><![CDATA[admin]]></value>
</property>
</configuration>
returned in 414ms 201 45 {"id":"0000000-220722115331462-oozie-oozi-W"}
[22/Jul/2022 03:03:47 -0700] submission2 INFO Using FS <desktop.lib.fs.proxyfs.ProxyFS object at 0x7fe6fc322790> and JT None
[22/Jul/2022 03:03:47 -0700] submission2 DEBUG Created/Updated /user/hue/oozie/workspaces/hue-oozie-1658408731.88/job.properties
[22/Jul/2022 03:03:47 -0700] resource DEBUG PUT None http://FQDN:9864/webhdfs/v1/user/hue/oozie/workspaces/hue-oozie-1658408731.88/job.properties?op=CREATE&doas=admin&user.name=hue&namenoderpcaddress=FQDN:8020&createflag=&createparent=true&overwrite=true&permission=644 oozie.use.system.libpath=True
send_email=False
dryrun=False
nameNode=hdfs://FQDN:8020
jobTracker=FQDN:8032
security_enabled=False returned in 25ms 201 0
[22/Jul/2022 03:03:47 -0700] resource DEBUG PUT http://FQDN:9870/webhdfs/v1 returned in 0ms
[22/Jul/2022 03:03:47 -0700] resource ERROR Error logging return call PUT http://FQDN:9870/webhdfs/v1
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", line 122, in _invoke
resp_content = smart_unicode(resp.content, errors='replace')
AttributeError: 'NoneType' object has no attribute 'content'
[22/Jul/2022 03:03:47 -0700] submission2 DEBUG Created/Updated /user/hue/oozie/workspaces/hue-oozie-1658408731.88/workflow.xml
[22/Jul/2022 03:03:47 -0700] resource DEBUG PUT None http://FQDN:9864/webhdfs/v1/user/hue/oozie/workspaces/hue-oozie-1658408731.88/workflow.xml?op=CREATE&doas=admin&user.name=hue&namenoderpcaddress=FQDN:8020&createflag=&createparent=true&overwrite=true&permission=644 <workflow-app name="Oozie flow con PySpark" xmlns="uri:oozie:workflow:0.5">
<start to="spark-27ea"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="spark-27ea">
<spark xmlns="uri:oozie:spark-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<master>yarn</master>
<mode>client</mode>
<name></name>
<jar>main_1.py</jar>
<file>/user/admin/spark/main_1.py#main_1.py</file>
</spark>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app> returned in 287ms 201 0
[22/Jul/2022 03:03:46 -0700] http_client DEBUG Setting session adapter for http://FQDN:9864
[22/Jul/2022 03:03:46 -0700] http_client DEBUG Setting request Session
[22/Jul/2022 03:03:46 -0700] resource DEBUG PUT http://FQDN:9870/webhdfs/v1 returned in 0ms
[22/Jul/2022 03:03:46 -0700] resource ERROR Error logging return call PUT http://FQDN:9870/webhdfs/v1
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", line 122, in _invoke
resp_content = smart_unicode(resp.content, errors='replace')
AttributeError: 'NoneType' object has no attribute 'content'
[22/Jul/2022 03:03:46 -0700] resource DEBUG GET http://FQDN:9870/webhdfs/v1 returned in 0ms
[22/Jul/2022 03:03:46 -0700] resource ERROR Error logging return call GET http://FQDN:9870/webhdfs/v1
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", line 122, in _invoke
resp_content = smart_unicode(resp.content, errors='replace')
AttributeError: 'NoneType' object has no attribute 'content'
[22/Jul/2022 03:03:46 -0700] resource DEBUG GET None http://FQDN:9870/webhdfs/v1//user/hue/oozie/workspaces/hue-oozie-1658408731.88?op=GETFILESTATUS&user.name=hue&doas=admin returned in 3ms 200 238 {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":3,"fileId":19823,"group":"hue","length":0,"modificationTime":1658415960391,"owner":"admin","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
[22/Jul/2022 03:03:46 -0700] resource DEBUG GET None http://FQDN:9870/webhdfs/v1//user/hue/oozie/workspaces/hue-oozie-1658408731.88?op=GETFILESTATUS&user.name=hue&doas=admin returned in 3ms 200 238 {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":3,"fileId":19823,"group":"hue","length":0,"modificationTime":1658415960391,"owner":"admin","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
[22/Jul/2022 03:03:46 -0700] resource DEBUG GET None http://FQDN:9870/webhdfs/v1//user/hue/oozie/workspaces/hue-oozie-1658408731.88?op=GETFILESTATUS&user.name=hue&doas=admin returned in 3ms 200 238 {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":3,"fileId":19823,"group":"hue","length":0,"modificationTime":1658415960391,"owner":"admin","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
[22/Jul/2022 03:03:46 -0700] resource DEBUG GET None http://FQDN:9870/webhdfs/v1//user/hue/oozie/workspaces/hue-oozie-1658408731.88?op=GETFILESTATUS&user.name=hue&doas=admin returned in 3ms 200 238 {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":3,"fileId":19823,"group":"hue","length":0,"modificationTime":1658415960391,"owner":"admin","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
[22/Jul/2022 03:03:46 -0700] resource DEBUG GET None http://FQDN:9870/webhdfs/v1//user/hue/oozie/deployments/_$USER_-oozie-$JOBID-$TIME?op=GETFILESTATUS&user.name=hue&doas=hue returned in 10ms 200 237 {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":19754,"group":"hue","length":0,"modificationTime":1658408078822,"owner":"hue","pathSuffix":"","permission":"1777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
[22/Jul/2022 03:03:46 -0700] submission2 INFO Using FS <desktop.lib.fs.proxyfs.ProxyFS object at 0x7fe6fc322790> and JT None
[22/Jul/2022 03:03:46 -0700] http_client DEBUG Setting session adapter for http://FQDN:8088
[22/Jul/2022 03:03:46 -0700] http_client DEBUG Setting request Session
[22/Jul/2022 03:03:45 -0700] access INFO 87.10.222.78 admin - "GET /oozie/editor/workflow/submit/7 HTTP/1.1" returned in 81ms 200 5411
[22/Jul/2022 03:03:45 -0700] resource DEBUG GET None http://FQDN:11000/oozie/v1/admin/configuration?timezone=America%2FLos_Angeles&user.name=hue&doAs=admin returned in 7ms 200 48024 {"oozie.email.smtp.auth":"false","oozie.service.ELService.functions.coord-job-submit-data":"\n coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo,\n coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,\n coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,\n coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo,\n coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,\n coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo,\n coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,\n coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo,\n coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo,\n coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo,\n coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coor...
[22/Jul/2022 03:03:44 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/docs/ HTTP/1.1" returned in 22ms 200 681
[22/Jul/2022 03:03:43 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/doc/ HTTP/1.1" returned in 31ms 200 1253
[22/Jul/2022 03:03:43 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/user_preferences/default_app HTTP/1.1" returned in 3ms 200 27
[22/Jul/2022 03:03:43 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/context/computes/oozie HTTP/1.1" returned in 36ms 200 228
[22/Jul/2022 03:03:43 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/context/namespaces/oozie HTTP/1.1" returned in 0ms 200 288
[22/Jul/2022 03:03:43 -0700] access INFO 87.10.222.78 admin - "GET /oozie/editor/workflow/edit HTTP/1.1" returned in 262ms 200 194745
[22/Jul/2022 03:03:43 -0700] resource DEBUG GET None http://FQDN:11000/oozie/v1/admin/configuration?timezone=America%2FLos_Angeles&user.name=hue&doAs=admin returned in 15ms 200 48024 {"oozie.email.smtp.auth":"false","oozie.service.ELService.functions.coord-job-submit-data":"\n coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo,\n coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,\n coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,\n coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo,\n coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,\n coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo,\n coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,\n coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo,\n coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo,\n coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo,\n coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coor...
[22/Jul/2022 03:03:43 -0700] http_client DEBUG Setting session adapter for http://FQDN:11000
[22/Jul/2022 03:03:43 -0700] http_client DEBUG Setting request Session
[22/Jul/2022 03:03:39 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/docs/ HTTP/1.1" returned in 9ms 200 116
[22/Jul/2022 03:03:39 -0700] access INFO 87.10.222.78 admin - "POST /notebook/api/create_session HTTP/1.1" returned in 180ms 200 72
[22/Jul/2022 03:03:39 -0700] base DEBUG Selected interpreter java interface=oozie compute=None
[22/Jul/2022 03:03:39 -0700] access INFO 87.10.222.78 admin - "GET /desktop/workers/aceSqlSyntaxWorker.js HTTP/1.1" returned in 27ms 304 0
[22/Jul/2022 03:03:39 -0700] access INFO 87.10.222.78 admin - "GET /desktop/workers/aceSqlLocationWorker.js HTTP/1.1" returned in 28ms 304 0
[22/Jul/2022 03:03:39 -0700] access INFO 87.10.222.78 admin - "GET /notebook/api/get_history HTTP/1.1" returned in 7ms 200 70
[22/Jul/2022 03:03:39 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/user_preferences/default_app HTTP/1.1" returned in 3ms 200 27
[22/Jul/2022 03:03:39 -0700] access INFO 87.10.222.78 admin - "POST /notebook/api/create_notebook HTTP/1.1" returned in 0ms 200 243
[22/Jul/2022 03:03:38 -0700] access INFO 87.10.222.78 admin - "GET /editor HTTP/1.1" returned in 370ms 304 0
[22/Jul/2022 03:03:38 -0700] access INFO 87.10.222.78 admin - "POST /metadata/api/catalog/list_tags HTTP/1.1" returned in 12ms 500 308
[22/Jul/2022 03:03:38 -0700] navigator_client ERROR Failed to search for entities with search query: {"query": "((originalName:**^3)OR(originalDescription:**^1)OR(name:**^10)OR(description:**^3)OR(tags:**^5))AND((originalName:[* TO *])OR(originalDescription:[* TO *])OR(name:[* TO *])OR(description:[* TO *])OR(tags:[* TO *]))", "filterQueries": ["deleted:false"], "facetFields": ["tags"]}
[22/Jul/2022 03:03:38 -0700] resource DEBUG POST http://localhost:7187/api/v9 returned in 0ms
[22/Jul/2022 03:03:38 -0700] resource ERROR Error logging return call POST http://localhost:7187/api/v9
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hue/desktop/core/src/desktop/lib/rest/resource.py", line 122, in _invoke
resp_content = smart_unicode(resp.content, errors='replace')
AttributeError: 'NoneType' object has no attribute 'content'
[22/Jul/2022 03:03:38 -0700] navigator_client INFO {"query": "((originalName:**^3)OR(originalDescription:**^1)OR(name:**^10)OR(description:**^3)OR(tags:**^5))AND((originalName:[* TO *])OR(originalDescription:[* TO *])OR(name:[* TO *])OR(description:[* TO *])OR(tags:[* TO *]))", "filterQueries": ["deleted:false"], "facetFields": ["tags"]}
[22/Jul/2022 03:03:38 -0700] access INFO 87.10.222.78 admin - "GET /desktop/api2/doc/ HTTP/1.1" returned in 66ms 200 2463
[22/Jul/2022 03:03:38 -0700] access INFO 87.10.222.78 admin - "GET /notebook/api/get_history HTTP/1.1" returned in 8ms 304 0
[22/Jul/2022 03:03:38 -0700] access INFO 87.10.222.78 admin - "POST /notebook/api/create_notebook HTTP/1.1" returned in 0ms 200 252
[22/Jul/2022 03:03:38 -0700] access INFO 87.10.222.78 admin - "POST /desktop/api2/get_config/ HTTP/1.1" returned in 36ms 200 6173
[22/Jul/2022 03:03:37 -0700] access INFO 87.10.222.78 admin - "GET /desktop/globalJsConstants.js HTTP/1.1" returned in 48ms 304 0
[22/Jul/2022 03:03:37 -0700] resource DEBUG GET None http://FQDN:9870/webhdfs/v1//user/admin?op=GETFILESTATUS&user.name=hue&doas=admin returned in 5ms 200 240 {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16567,"group":"admin","length":0,"modificationTime":1658408896066,"owner":"admin","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
[22/Jul/2022 03:03:37 -0700] access INFO 87.10.222.78 admin - "GET /desktop/globalJsConstants.js HTTP/1.1" returned in 2737ms 304 0
[22/Jul/2022 03:03:37 -0700] resource DEBUG GET None http://FQDN:9870/webhdfs/v1//user/admin?op=GETFILESTATUS&user.name=hue&doas=admin returned in 812ms 200 240 {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":16567,"group":"admin","length":0,"modificationTime":1658408896066,"owner":"admin","pathSuffix":"","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
[22/Jul/2022 03:03:34 -0700] access INFO 87.10.222.78 admin - "GET /hue HTTP/1.1" returned in 551ms 200 234941
[22/Jul/2022 03:03:34 -0700] decorators INFO args: (True,)
[22/Jul/2022 03:03:34 -0700] decorators INFO AXES: Calling decorated function: dt_login
[22/Jul/2022 03:03:34 -0700] access INFO 87.10.222.78 admin - "GET / HTTP/1.1" returned in 0ms 302 0
[22/Jul/2022 03:03:34 -0700] views WARNING User admin is bypassing the load balancer
[22/Jul/2022 03:03:34 -0700] access INFO 87.10.222.78 admin - "GET / HTTP/1.1" returned in 33ms 302 0
[22/Jul/2022 03:03:34 -0700] webhdfs DEBUG Initializing Hadoop WebHdfs: http://FQDN:9870/webhdfs/v1 (security: False, superuser: None)
[22/Jul/2022 03:03:34 -0700] http_client DEBUG Setting session adapter for http://FQDN:9870
[22/Jul/2022 03:03:34 -0700] http_client DEBUG Setting request Session
[22/Jul/2022 03:03:34 -0700] backend INFO Augmenting users with class: <class 'desktop.auth.backend.DefaultUserAugmentor'>
[22/Jul/2022 03:03:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 6ms 200 0
[22/Jul/2022 03:03:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 03:02:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 7ms 200 0
[22/Jul/2022 03:02:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 03:01:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 6ms 200 0
[22/Jul/2022 03:01:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 03:00:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 6ms 200 0
[22/Jul/2022 03:00:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 02:59:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 7ms 200 0
[22/Jul/2022 02:59:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 02:58:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 7ms 200 0
[22/Jul/2022 02:58:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 02:57:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 6ms 200 0
[22/Jul/2022 02:57:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 02:56:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 7ms 200 0
[22/Jul/2022 02:56:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 02:55:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 7ms 200 0
[22/Jul/2022 02:55:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 02:54:20 -0700] access INFO 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" returned in 11ms 200 0
[22/Jul/2022 02:54:20 -0700] access DEBUG 10.0.2.28 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" -
[22/Jul/2022 02:53:48 -0700] middleware INFO Unloading MimeTypeJSFileFixStreamingMiddleware
[22/Jul/2022 02:53:48 -0700] middleware INFO Unloading HueRemoteUserMiddleware
[22/Jul/2022 02:53:48 -0700] middleware INFO Unloading SpnegoMiddleware
[22/Jul/2022 02:53:48 -0700] middleware INFO Unloading ProxyMiddleware
[22/Jul/2022 02:53:48 -0700] middleware INFO Unloading AuditLoggingMiddleware
[22/Jul/2022 02:53:48 -0700] runcherrypyserver INFO Starting server with options:
{'daemonize': False,
'host': 'FQDN',
'pidfile': None,
'port': 8888,
'server_group': 'hue',
'server_name': 'localhost',
'server_user': 'hue',
'ssl_certificate': None,
'ssl_certificate_chain': None,
'ssl_cipher_list': 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA',
'ssl_no_renegotiation': False,
'ssl_private_key': None,
'threads': 50,
'workdir': None}
[22/Jul/2022 02:53:47 -0700] api WARNING Oozie is not enabled
[22/Jul/2022 02:53:47 -0700] __init__ INFO Couldn't import snappy. Support for snappy compression disabled.
[22/Jul/2022 02:53:47 -0700] hiveserver2 WARNING Job Browser app is not enabled
[22/Jul/2022 02:53:47 -0700] decorators DEBUG Looking for header value HTTP_X_FORWARDED_FOR
[22/Jul/2022 02:53:47 -0700] decorators DEBUG Axes is configured to be behind reverse proxy
[22/Jul/2022 02:53:47 -0700] decorators INFO Using django-axes 2.2.0
[22/Jul/2022 02:53:47 -0700] decorators INFO AXES: BEGIN LOG
[22/Jul/2022 02:53:46 -0700] sslcompat DEBUG backports.ssl_match_hostname module is available
[22/Jul/2022 02:53:46 -0700] sslcompat DEBUG ipaddress module is available
[22/Jul/2022 02:53:45 -0700] settings DEBUG DESKTOP_DB_TEST_USER SET: hue_test
[22/Jul/2022 02:53:45 -0700] settings DEBUG DESKTOP_DB_TEST_NAME SET: /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hue/desktop/desktop-test.db
[22/Jul/2022 02:53:45 -0700] settings DEBUG Installed Django modules: DesktopModule(aws: aws),DesktopModule(azure: azure),DesktopModule(hadoop: hadoop),DesktopModule(libanalyze: libanalyze),DesktopModule(liboauth: liboauth),DesktopModule(liboozie: liboozie),DesktopModule(librdbms: librdbms),DesktopModule(libsaml: libsaml),DesktopModule(libsentry: libsentry),DesktopModule(libsolr: libsolr),DesktopModule(libzookeeper: libzookeeper),DesktopModule(Hue: desktop),DesktopModule(About: about),DesktopModule(Hive: beeswax),DesktopModule(File Browser: filebrowser),DesktopModule(Help: help),DesktopModule(Job Designer: jobsub),DesktopModule(Table Browser: metastore),DesktopModule(Oozie Editor/Dashboard: oozie),DesktopModule(Proxy: proxy),DesktopModule(RDBMS UI: rdbms),DesktopModule(User Admin: useradmin),DesktopModule(Data Importer: indexer),DesktopModule(Metadata: metadata),DesktopModule(Notebook: notebook),DesktopModule(Analytics Dashboards: dashboard),DesktopModule(Kafka: kafka)
My suspicion is that the HTTP PUT and GET calls to those URLs fail because the FQDNs resolve to a private, non-public IP address. What could be the problem?
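One hedged way to test that suspicion (commands are a sketch; FQDN stands in for the actual hostnames appearing in the log):

# Check what the Hue host resolves the NameNode/DataNode FQDNs to,
# and whether the WebHDFS endpoint on port 9870 answers from this host.
getent hosts FQDN
curl -sS -o /dev/null -w '%{http_code}\n' "http://FQDN:9870/webhdfs/v1/?op=LISTSTATUS&user.name=hue"

If the name resolves to a private IP that is unreachable from the Hue host, that would be consistent with the PUT/GET calls above returning in 0ms with no response body.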
07-19-2022
11:28 AM
6:03:59.108 PM ERROR RetryingHMSHandler [main]: MetaException(message:Version information not found in metastore.) at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:10110) at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:10088) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at com.sun.proxy.$Proxy27.verifySchema(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:842) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:834) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:925) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:551) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10224) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10219) at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:10500) at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:10417) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:318) at org.apache.hadoop.util.RunJar.main(RunJar.java:232) 6:03:59.110 PM ERROR RetryingHMSHandler [main]: HMSHandler Fatal error: MetaException(message:Version information not found in metastore.) 
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:10110) at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:10088) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at com.sun.proxy.$Proxy27.verifySchema(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:842) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:834) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:925) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:551) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10224) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10219) at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:10500) at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:10417) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:318) at org.apache.hadoop.util.RunJar.main(RunJar.java:232) 6:03:59.110 PM ERROR HiveMetaStore [main]: MetaException(message:Version information not found in metastore.) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:84) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10224) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10219) at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:10500) at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:10417) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:318) at org.apache.hadoop.util.RunJar.main(RunJar.java:232) Caused by: MetaException(message:Version information not found in metastore.) 
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:10110) at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:10088) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at com.sun.proxy.$Proxy27.verifySchema(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:842) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:834) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:925) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:551) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80) ... 11 more 6:03:59.111 PM ERROR HiveMetaStore [main]: Metastore Thrift Server threw an exception... org.apache.hadoop.hive.metastore.api.MetaException: Version information not found in metastore. at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:84) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10224) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:10219) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:10500) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:10417) [hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232] at org.apache.hadoop.util.RunJar.run(RunJar.java:318) [hadoop-common-3.1.1.7.1.7.1000-141.jar:?] at org.apache.hadoop.util.RunJar.main(RunJar.java:232) [hadoop-common-3.1.1.7.1.7.1000-141.jar:?] Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Version information not found in metastore. 
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:10110) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:10088) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232] at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at com.sun.proxy.$Proxy27.verifySchema(Unknown Source) ~[?:?] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:842) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:834) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:925) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:551) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_232] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_232] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_232] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_232] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80) ~[hive-exec-3.1.3000.7.1.7.1000-141.jar:3.1.3000.7.1.7.1000-141] ... 11 more
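A hedged note on the error above, not taken from this thread: "Version information not found in metastore" generally means the metastore database schema (including its VERSION table) was never created successfully. On CDP that schema is created by Cloudera Manager's "Create Hive Metastore Database Tables" command; a manual equivalent would look roughly like the sketch below, where the parcel symlink and config path are assumptions and the database type matches the schematool invocation visible in the following post:

# Sketch only; confirm the parcel path, config location, and database type for your installation.
export HIVE_CONF_DIR=/etc/hive/conf
/opt/cloudera/parcels/CDH/lib/hive/bin/schematool -dbType postgres -initSchema -verbose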
07-19-2022
10:17 AM
STDOUT: Tue Jul 19 17:11:54 UTC 2022
JAVA_HOME=/usr/lib/jvm/jre-openjdk
using /usr/lib/jvm/jre-openjdk as JAVA_HOME
using 7 as CDH_VERSION
using /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hive as HIVE_HOME
using /var/run/cloudera-scm-agent/process/1546340518-hive-metastore-create-tables as HIVE_CONF_DIR
using /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hadoop as HADOOP_HOME
using /var/run/cloudera-scm-agent/process/1546340518-hive-metastore-create-tables/yarn-conf as HADOOP_CONF_DIR
using /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hbase as HBASE_HOME
using /var/run/cloudera-scm-agent/process/1546340518-hive-metastore-create-tables/hbase-conf as HBASE_CONF_DIR
CONF_DIR=/var/run/cloudera-scm-agent/process/1546340518-hive-metastore-create-tables
CMF_CONF_DIR=
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/hive_hive-HIVEMETASTORE-6fedb5c9be06aeeb0fe2895f5c730bb3_pid26789.hprof ...
Heap dump file created [47175301 bytes in 0.030 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/opt/cloudera/cm-agent/service/common/killparent.sh"
# Executing /bin/sh -c "/opt/cloudera/cm-agent/service/common/killparent.sh"...
STDERR: WARNING: Use "yarn jar" to launch YARN applications.
++ ps -p 27031 -o comm=
+ mycmd=killparent.sh
+ TARGET=26789
++ ps -p 26789 -o comm=
+ PCMD=java
++ ps -p 26789 -o args=
+ PARGS='/usr/lib/jvm/jre-openjdk/bin/java -Dproc_jar -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Xms52428800 -Xmx52428800 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/hive_hive-HIVEMETASTORE-6fedb5c9be06aeeb0fe2895f5c730bb3_pid26789.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh -Dsun.security.krb5.disableReferrals=true -Djdk.tls.ephemeralDHKeySize=2048 -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hive/bin/../conf/parquet-logging.properties -Dyarn.log.dir=/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hadoop/logs -Dyarn.log.file=hadoop.log -Dyarn.home.dir=/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hadoop/libexec/../../hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hadoop/lib/native -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/lib/hive/lib/hive-cli-3.1.3000.7.1.7.1000-141.jar org.apache.hive.beeline.schematool.HiveSchemaTool -verbose -dbType postgres -initSchema -dbOpts postgres.filter.81,postgres.filter.pre.9'
+ '[' java == sh ']'
++ date --iso-8601=seconds
+ echo 2022-07-19T17:11:55+0000
+ kill -9 26789
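For what it's worth, the process arguments captured above show the schema tool was started with -Xms52428800 -Xmx52428800, i.e. roughly a 50 MB heap, which is far too small for metastore schema initialization; the usual fix is to raise the Hive Metastore Server Java heap size in Cloudera Manager (the exact configuration name may vary by version) and rerun the command. A minimal diagnostic sketch using standard Linux tooling to confirm the heap of the relevant Java process; the pgrep filter is a hypothetical pattern, so adjust it to the process you actually see, or run it while the Create Tables command is executing:

pid=$(pgrep -f 'HiveSchemaTool|HiveMetaStore' | head -n1)   # hypothetical filter; adjust to what pgrep shows
ps -p "$pid" -o args= | tr ' ' '\n' | grep -E '^-Xm[sx]'
# expected output here: -Xms52428800 and -Xmx52428800, i.e. about 50 MB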
... View more
07-19-2022
06:04 AM
Hi everyone, I'm trying to add the Hive service to the cluster, but an error occurred. This is the log file:
2022-07-19 12:42:25,037 main ERROR Log4j2 ConfigurationScheduler attempted to increment scheduled items after start
Starting metastore schema initialization to 3.1.3000.7.1.7.1000-141
Initialization script hive-schema-3.1.3000.hive.sql
2022-07-19 12:42:25,310 main ERROR Log4j2 ConfigurationScheduler attempted to increment scheduled items after start
How can I proceed to solve the problem? Thank you
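If the wizard keeps failing at this step, one option is to run the Hive schema tool by hand to get a clearer error than the Log4j2 message above. A minimal sketch, assuming the CDH parcel layout shown in the logs and a HIVE_CONF_DIR containing valid JDBC settings for the metastore database; the paths and the larger client heap are assumptions, not confirmed values:

export HIVE_CONF_DIR=/etc/hive/conf                  # must contain the metastore JDBC URL and credentials
export HADOOP_CLIENT_OPTS="-Xmx2g"                   # give the schema tool more heap than the default
/opt/cloudera/parcels/CDH/lib/hive/bin/schematool \
    -dbType postgres -validate -verbose              # use -initSchema to (re)create the metastore tables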
... View more
Labels: Cloudera Data Platform (CDP)
07-18-2022
10:31 AM
Hello everyone, I want to install the trial version of CDP Private Cloud on CentOS 7 machines, on an external volume. Can I proceed using this guide? https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/installation/topics/cdpdc-trial-installation.html Or should I follow this one instead? https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/installation/topics/cdpdc-manually-install-cloudera-software-packages.html Thank you
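For reference, the difference between the two guides is mainly who drives the package installation. The trial guide relies on the self-contained installer binary, which sets up a JDK, an embedded PostgreSQL database, and Cloudera Manager Server for you; a minimal sketch of that path, assuming the installer binary has already been downloaded from the link given in the trial guide:

chmod u+x cloudera-manager-installer.bin
sudo ./cloudera-manager-installer.bin   # installs a JDK, embedded PostgreSQL, and Cloudera Manager Server

The second guide covers installing the same packages manually from your own repositories, which is only needed if the automated trial installer does not fit your environment.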
... View more
Labels: Cloudera Data Platform (CDP)
05-18-2022
06:15 AM
This is the entire log message in master node: 2022-05-18 15:06:11,046 INFO LDAP Login Monitor thread:com.cloudera.cmf.service.auth.AbstractExternalServerLoginMonitor: LDAP monitoring is disabled. 2022-05-18 15:06:11,048 INFO KDC Login Monitor thread:com.cloudera.cmf.service.auth.AbstractExternalServerLoginMonitor: KDC monitoring is disabled. 2022-05-18 15:06:11,971 INFO pool-6-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Cleaned up 2022-05-18 15:06:12,979 INFO pool-6-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Synced up 2022-05-18 15:07:11,971 INFO pool-6-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (29 skipped) Cleaned up 2022-05-18 15:07:14,977 INFO pool-6-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Synced up 2022-05-18 15:07:25,296 INFO scm-web-88:com.cloudera.server.web.cmf.CMFUserDetailsService: First user 'admin' logging in. 2022-05-18 15:07:25,380 INFO scm-web-88:com.cloudera.server.web.cmf.AuthenticationSuccessEventListener: Authentication success for user: 'admin' from 172.20.176.1 2022-05-18 15:07:26,462 INFO scm-web-85:com.cloudera.api.ApiExceptionMapper: Exception caught in API invocation. Msg:This installation currently has no license. java.util.NoSuchElementException: This installation currently has no license. at com.cloudera.api.dao.impl.LicenseManagerDaoImpl.readLicense(LicenseManagerDaoImpl.java:87) at com.cloudera.api.v1.impl.ClouderaManagerResourceImpl.readLicense(ClouderaManagerResourceImpl.java:57) at com.cloudera.api.v32.impl.ClouderaManagerResourceV32Impl.readLicense(ClouderaManagerResourceV32Impl.java:56) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179) at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96) at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:201) at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117) at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:285) at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117) at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:285) at com.cloudera.api.ApiInvoker.invoke(ApiInvoker.java:117) at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:104) at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:59) at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:96) at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308) at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121) at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267) at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:234) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:208) at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:160) at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:225) at 
org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:298) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:222) at javax.servlet.http.HttpServlet.service(HttpServlet.java:645) at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:273) at org.eclipse.jetty.servlet.ServletHolder$NotAsync.service(ServletHolder.java:1452) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:791) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626) at com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter.doFilter(JavaMelodyFacade.java:204) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.cloudera.server.cmf.config.components.CmfHttpSessionFilter.doFilter(CmfHttpSessionFilter.java:35) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.cloudera.server.cmf.config.components.RequestRecastFilter.doFilter(RequestRecastFilter.java:55) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:115) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:169) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.kerberos.web.authentication.SpnegoAuthenticationProcessingFilter.doFilter(SpnegoAuthenticationProcessingFilter.java:128) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:158) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at com.cloudera.api.ApiBasicAuthFilter.doFilter(ApiBasicAuthFilter.java:86) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:66) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.session.web.http.SessionRepositoryFilter.doFilterInternal(SessionRepositoryFilter.java:171) at org.springframework.session.web.http.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:80) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:201) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:602) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1435) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501) at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1350) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) at com.cloudera.server.common.BoundedQueuedThreadPool$2.run(BoundedQueuedThreadPool.java:94) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:773) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:905) at java.lang.Thread.run(Thread.java:748) 2022-05-18 15:07:43,821 INFO scm-web-89:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/license/trialBegin 2022-05-18 15:07:43,847 INFO scm-web-89:com.cloudera.cmf.crypto.LicenseLoaderImpl: The license state of the product is TRIAL 2022-05-18 15:07:43,848 INFO scm-web-89:com.cloudera.cmf.service.ServiceHandlerRegistry: Executing Global command ProcessStalenessCheckCommand BasicCmdArgs{args=[First reason why: Began trial]}. 2022-05-18 15:07:43,850 INFO scm-web-89:com.cloudera.cmf.command.flow.CmdStep: Executing command 1546333342 work: Execute 1 steps in sequence 2022-05-18 15:07:43,850 INFO scm-web-89:com.cloudera.cmf.command.flow.CmdStep: Executing command 1546333342 work: Configuration Staleness Check 2022-05-18 15:07:43,850 INFO scm-web-89:com.cloudera.cmf.service.ServiceHandlerRegistry: Global Command ProcessStalenessCheckCommand launched with id=1546333342 2022-05-18 15:07:43,858 INFO CommandPusher-1:com.cloudera.server.cmf.CommandPusherThread: Acquired lease lock on DbCommand:1546333342 2022-05-18 15:07:43,866 INFO scm-web-89:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/license/trialBegin, Status:200 2022-05-18 15:07:43,875 INFO ProcessStalenessDetector-0:com.cloudera.cmf.service.config.components.ProcessStalenessDetector: Queuing staleness check with FULL_CHECK for 0/0 roles. 2022-05-18 15:07:43,876 INFO ProcessStalenessDetector-0:com.cloudera.cmf.service.config.components.ProcessStalenessDetector: Staleness check done. 
Duration: PT0.002S 2022-05-18 15:07:43,876 INFO ProcessStalenessDetector-0:com.cloudera.cmf.service.config.components.ProcessStalenessDetector: Staleness check execution stats: average=0ms, min=0ms, max=0ms. 2022-05-18 15:07:43,880 INFO CommandPusher-1:com.cloudera.server.cmf.CommandPusherThread: Acquired lease lock on DbCommand:1546333342 2022-05-18 15:07:43,891 INFO CommandPusher-1:com.cloudera.cmf.model.DbCommand: Command 1546333342(ProcessStalenessCheckCommand) has completed. finalstate:FINISHED, success:true, msg:Successfully finished checking for configuration staleness. 2022-05-18 15:07:43,891 INFO CommandPusher-1:com.cloudera.cmf.command.components.CommandStorage: Invoked delete temp files for command:DbCommand{id=1546333342, name=ProcessStalenessCheckCommand} at dir:/var/lib/cloudera-scm-server/temp/commands/1546333342 2022-05-18 15:07:50,852 INFO scm-web-85:com.cloudera.server.web.cmf.RepoDiscovery: CentOS Linux release 7.9.2009 (Core) 2022-05-18 15:08:09,817 INFO scm-web-82:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/scanhosts.json 2022-05-18 15:08:09,825 INFO scm-web-82:com.cloudera.server.cmf.node.NodeScannerService: Request 0 contains 2 nodes 2022-05-18 15:08:09,833 INFO scm-web-82:com.cloudera.server.cmf.node.NodeScannerService: New node 172.20.178.216, scanning 2022-05-18 15:08:09,833 INFO scm-web-82:com.cloudera.server.cmf.node.NodeScannerService: New node 172.20.177.136, scanning 2022-05-18 15:08:09,834 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScanner: Beginning scan of node 172.20.178.216 and port 22 2022-05-18 15:08:09,834 INFO scm-web-82:com.cloudera.server.cmf.node.NodeScannerService: Finished submitting request 0 2022-05-18 15:08:09,834 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScanner: Canonical hostname is pse.slave1.clouderacluster 2022-05-18 15:08:09,834 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScanner: Connecting to remote host 2022-05-18 15:08:09,835 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScanner: Beginning scan of node 172.20.177.136 and port 22 2022-05-18 15:08:09,835 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScanner: Canonical hostname is pse.slave2.clouderacluster 2022-05-18 15:08:09,835 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScanner: Connecting to remote host 2022-05-18 15:08:09,836 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScanner: Disconnecting from remote host 2022-05-18 15:08:09,836 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScanner: Disconnecting from remote host 2022-05-18 15:08:09,836 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScanner: Connecting to ssh service on remote host 2022-05-18 15:08:09,836 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScanner: Connecting to ssh service on remote host 2022-05-18 15:08:09,840 INFO scm-web-82:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/scanhosts.json, Status:200 2022-05-18 15:08:09,873 INFO scm-web-84:com.cloudera.server.cmf.node.NodeScannerService: Request 0 returning 0/2 scans 2022-05-18 15:08:09,994 INFO NodeScannerThread-0:net.schmizz.sshj.common.SecurityUtils: BouncyCastle registration succeeded 2022-05-18 15:08:10,081 INFO NodeScannerThread-1:net.schmizz.sshj.transport.TransportImpl: Client identity string: SSH-2.0-SSHJ_0_14_0 2022-05-18 15:08:10,081 INFO NodeScannerThread-0:net.schmizz.sshj.transport.TransportImpl: Client 
identity string: SSH-2.0-SSHJ_0_14_0 2022-05-18 15:08:10,088 INFO NodeScannerThread-1:net.schmizz.sshj.transport.TransportImpl: Server identity string: SSH-2.0-OpenSSH_7.4 2022-05-18 15:08:10,088 INFO NodeScannerThread-0:net.schmizz.sshj.transport.TransportImpl: Server identity string: SSH-2.0-OpenSSH_7.4 2022-05-18 15:08:10,346 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScanner: Disconnecting from ssh service on remote host 2022-05-18 15:08:10,347 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScanner: Disconnecting from ssh service on remote host 2022-05-18 15:08:10,347 INFO NodeScannerThread-0:net.schmizz.sshj.transport.TransportImpl: Disconnected - BY_APPLICATION 2022-05-18 15:08:10,348 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScanner: Connected to SSH on node 172.20.178.216 with port 22 (latency PT0.001S) 2022-05-18 15:08:10,348 INFO NodeScannerThread-1:net.schmizz.sshj.transport.TransportImpl: Disconnected - BY_APPLICATION 2022-05-18 15:08:10,348 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScanner: Connected to SSH on node 172.20.177.136 with port 22 (latency PT0.001S) 2022-05-18 15:08:10,349 INFO NodeScannerThread-0:com.cloudera.server.cmf.node.NodeScannerService: Request 0 observed finished scan of node 172.20.178.216 2022-05-18 15:08:10,350 INFO NodeScannerThread-1:com.cloudera.server.cmf.node.NodeScannerService: Request 0 observed finished scan of node 172.20.177.136 2022-05-18 15:08:10,911 INFO scm-web-82:com.cloudera.server.cmf.node.NodeScannerService: Request 0 returning 2/2 scans 2022-05-18 15:08:13,648 INFO scm-web-82:com.cloudera.server.web.cmf.ParcelController: Synchronizing repos based on user request admin 2022-05-18 15:08:13,664 INFO ParcelUpdateService:com.cloudera.cmf.paywall.PaywallHelper: License incomplete; unable to use it for authentication UUID=72ef3466-2863-41bb-b89c-11126931ef69, name=Trial License 2022-05-18 15:08:13,983 INFO pool-6-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Cleaned up 2022-05-18 15:08:16,975 INFO pool-6-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: (30 skipped) Synced up 2022-05-18 15:08:38,690 INFO scm-web-84:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/install 2022-05-18 15:08:38,722 INFO scm-web-84:com.cloudera.server.cmf.node.NodeConfiguratorService: Creating request with id 0 2022-05-18 15:08:38,723 INFO scm-web-84:com.cloudera.cmf.service.ServiceHandlerRegistry: Executing Global command GlobalHostInstall GlobalHostInstallCommandArgs{sshPort=22, userName=root, password=REDACTED, passphrase=REDACTED, privateKey=REDACTED, parallelInstallCount=10, cmRepoUrl=http://centos.mirror.server24.net/7.9.2009/os/x86_64/ (9 more) https://archive.cloudera.com/cm7/7.4.4/redhat7/yum/ https://mirrors.xtom.de/epel/7/x86_64/ (53 more) http://centos.mirror.server24.net/7.9.2009/extras/x86_64/ (9 https://archive.cloudera.com/postgresql10/redhat7/ http://centos.mirror.server24.net/7.9.2009/updates/x86_64/ (9, gpgKeyCustomUrl=null, gpgKeyOverrideBundle=<none>, unlimitedJCE=false, javaInstallStrategy=AUTO, agentUserMode=ROOT, cdhVersion=-1, cdhRelease=NONE, cdhRepoUrl=null, buildCertCommand=, sslCertHostname=null, reqId=0, skipPackageInstall=false, skipCloudConfig=false, proxyProtocol=HTTP, proxyServer=null, proxyPort=0, proxyUserName=null, proxyPassword=REDACTED, cmca=<none>, hostCerts=<none>, customTrustStorePath=null, customTrustStorePassword=null, customTrustStoreType=jks, 
subjectAltNames=null, hosts=[pse.slave2.clouderacluster, pse.slave1.clouderacluster], existingHosts=[], agentReportedHostnames=null}. 2022-05-18 15:08:38,744 INFO scm-web-84:com.cloudera.cmf.command.flow.CmdStep: Executing command 1546333345 work: Execute 1 steps in sequence 2022-05-18 15:08:38,744 INFO scm-web-84:com.cloudera.cmf.command.flow.CmdStep: Executing command 1546333345 work: Install on 2 hosts. 2022-05-18 15:08:38,745 INFO scm-web-84:com.cloudera.cmf.command.flow.CmdStep: Executing command 1546333345 work: Install on pse.slave2.clouderacluster. 2022-05-18 15:08:38,775 INFO scm-web-84:com.cloudera.server.cmf.node.NodeConfiguratorService: Adding password-based configurator for pse.slave2.clouderacluster 2022-05-18 15:08:38,775 INFO scm-web-84:com.cloudera.server.cmf.node.NodeConfiguratorService: Submitted configurator for pse.slave2.clouderacluster with id 1 2022-05-18 15:08:38,775 INFO scm-web-84:com.cloudera.cmf.command.flow.CmdStep: Executing command 1546333345 work: Install on pse.slave1.clouderacluster. 2022-05-18 15:08:38,775 INFO scm-web-84:com.cloudera.server.cmf.node.NodeConfiguratorService: Adding password-based configurator for pse.slave1.clouderacluster 2022-05-18 15:08:38,776 INFO scm-web-84:com.cloudera.server.cmf.node.NodeConfiguratorService: Submitted configurator for pse.slave1.clouderacluster with id 2 2022-05-18 15:08:38,784 INFO NodeConfiguratorThread-0-1:com.cloudera.cmf.model.HostInstallArgs: Deprecated option for unlimited strength JCE. Value set to False. 2022-05-18 15:08:38,784 INFO NodeConfiguratorThread-0-0:com.cloudera.cmf.model.HostInstallArgs: Deprecated option for unlimited strength JCE. Value set to False. 2022-05-18 15:08:38,786 INFO scm-web-84:com.cloudera.cmf.service.ServiceHandlerRegistry: Global Command GlobalHostInstall launched with id=1546333345 2022-05-18 15:08:38,796 INFO NodeConfiguratorThread-0-0:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave2.clouderacluster: Transitioning from INIT (PT0.022S) to CONNECT 2022-05-18 15:08:38,798 INFO NodeConfiguratorThread-0-0:net.schmizz.sshj.transport.TransportImpl: Client identity string: SSH-2.0-SSHJ_0_14_0 2022-05-18 15:08:38,797 INFO NodeConfiguratorThread-0-1:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave1.clouderacluster: Transitioning from INIT (PT0.022S) to CONNECT 2022-05-18 15:08:38,799 INFO NodeConfiguratorThread-0-1:net.schmizz.sshj.transport.TransportImpl: Client identity string: SSH-2.0-SSHJ_0_14_0 2022-05-18 15:08:38,802 INFO scm-web-84:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/install, Status:200 2022-05-18 15:08:38,806 INFO CommandPusher-1:com.cloudera.server.cmf.CommandPusherThread: Acquired lease lock on DbCommand:1546333345 2022-05-18 15:08:38,808 INFO NodeConfiguratorThread-0-0:net.schmizz.sshj.transport.TransportImpl: Server identity string: SSH-2.0-OpenSSH_7.4 2022-05-18 15:08:38,818 INFO NodeConfiguratorThread-0-1:net.schmizz.sshj.transport.TransportImpl: Server identity string: SSH-2.0-OpenSSH_7.4 2022-05-18 15:08:38,827 INFO scm-web-85:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressinit.json 2022-05-18 15:08:38,827 INFO scm-web-84:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/express-wizard/updateHostsState 2022-05-18 15:08:38,843 INFO scm-web-85:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressinit.json, 
Status:200 2022-05-18 15:08:38,852 INFO scm-web-84:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/express-wizard/updateHostsState, Status:200 2022-05-18 15:08:38,863 INFO NodeConfiguratorThread-0-0:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave2.clouderacluster: Transitioning from CONNECT (PT0.067S) to AUTHENTICATE 2022-05-18 15:08:38,878 INFO NodeConfiguratorThread-0-1:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave1.clouderacluster: Transitioning from CONNECT (PT0.081S) to AUTHENTICATE 2022-05-18 15:08:38,943 INFO NodeConfiguratorThread-0-0:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave2.clouderacluster: Transitioning from AUTHENTICATE (PT0.080S) to MAKE_TEMP_DIR 2022-05-18 15:08:38,946 INFO NodeConfiguratorThread-0-1:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave1.clouderacluster: Transitioning from AUTHENTICATE (PT0.068S) to MAKE_TEMP_DIR 2022-05-18 15:08:38,980 INFO scm-web-82:com.cloudera.enterprise.JavaMelodyFacade: Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json 2022-05-18 15:08:39,000 INFO NodeConfiguratorThread-0-0:com.cloudera.server.cmf.node.NodeConfigurator: Executing mktemp -d /tmp/scm_prepare_node.XXXXXXXX on pse.slave2.clouderacluster 2022-05-18 15:08:39,003 INFO NodeConfiguratorThread-0-1:com.cloudera.server.cmf.node.NodeConfigurator: Executing mktemp -d /tmp/scm_prepare_node.XXXXXXXX on pse.slave1.clouderacluster 2022-05-18 15:08:39,017 INFO scm-web-82:com.cloudera.enterprise.JavaMelodyFacade: Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json, Status:200 2022-05-18 15:08:39,025 INFO NodeConfiguratorThread-0-0:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave2.clouderacluster: Transitioning from MAKE_TEMP_DIR (PT0.082S) to COPY_FILES 2022-05-18 15:08:39,026 INFO NodeConfiguratorThread-0-1:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave1.clouderacluster: Transitioning from MAKE_TEMP_DIR (PT0.080S) to COPY_FILES 2022-05-18 15:08:39,101 INFO NodeConfiguratorThread-0-1:com.cloudera.server.cmf.node.NodeConfigurator: Using default key bundle URL 2022-05-18 15:08:39,140 INFO NodeConfiguratorThread-0-0:com.cloudera.server.cmf.node.NodeConfigurator: Using default key bundle URL 2022-05-18 15:08:39,197 INFO NodeConfiguratorThread-0-1:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave1.clouderacluster: Setting COPY_FILES as failed and done state 2022-05-18 15:08:39,197 INFO NodeConfiguratorThread-0-1:net.schmizz.sshj.transport.TransportImpl: Disconnected - BY_APPLICATION 2022-05-18 15:08:39,200 INFO NodeConfiguratorThread-0-1:com.cloudera.cmf.model.HostInstallArgs: Deprecated option for unlimited strength JCE. Value set to False. 2022-05-18 15:08:39,236 INFO NodeConfiguratorThread-0-0:com.cloudera.server.cmf.node.NodeConfiguratorProgress: pse.slave2.clouderacluster: Setting COPY_FILES as failed and done state 2022-05-18 15:08:39,236 INFO NodeConfiguratorThread-0-0:net.schmizz.sshj.transport.TransportImpl: Disconnected - BY_APPLICATION 2022-05-18 15:08:39,237 INFO NodeConfiguratorThread-0-0:com.cloudera.cmf.model.HostInstallArgs: Deprecated option for unlimited strength JCE. Value set to False. 
2022-05-18 15:08:43,843 INFO CommandPusher-1:com.cloudera.server.cmf.CommandPusherThread: Acquired lease lock on DbCommand:1546333345
2022-05-18 15:08:43,851 ERROR CommandPusher-1:com.cloudera.cmf.command.flow.WorkOutputs: CMD id: 1546333345 Failed to complete installation on host pse.slave2.clouderacluster.
2022-05-18 15:08:43,852 ERROR CommandPusher-1:com.cloudera.cmf.command.flow.WorkOutputs: CMD id: 1546333345 Failed to complete installation on host pse.slave1.clouderacluster.
2022-05-18 15:08:43,853 ERROR CommandPusher-1:com.cloudera.cmf.model.DbCommand: Command 1546333345(GlobalHostInstall) has completed. finalstate:FINISHED, success:false, msg:Failed to complete installation.
I found some articles (listed below) about removal/modification of some parcels, but the result is the same.
https://community.cloudera.com/t5/Support-Questions/Cloudera-Data-Center-7-trial-license-error/m-p/294812#M217436
https://community.cloudera.com/t5/Support-Questions/Cloudera-Data-Platform-7-1-1-Trial/td-p/297493
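Since both hosts pass the SSH connection and MAKE_TEMP_DIR steps but are marked failed at COPY_FILES, two quick manual checks can narrow this down: whether the Cloudera Manager host can actually copy a file to each node as root, and whether it can reach the repository URL it reports in the command arguments. A minimal sketch reusing the hostnames and repository URL from the log above; root SSH access is assumed from the wizard settings:

for h in pse.slave1.clouderacluster pse.slave2.clouderacluster; do
  scp /etc/hosts root@"$h":/tmp/scm_copy_test \
    && ssh root@"$h" 'rm -f /tmp/scm_copy_test'          # can files be copied to this node?
done
curl -sI https://archive.cloudera.com/cm7/7.4.4/redhat7/yum/ | head -n1   # is the repo reachable from the CM host?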
... View more
05-12-2022
07:26 AM
Dear all, I'm trying to install the free trial version of CDP on my virtual machines. I followed all the prerequisite steps to configure my nodes, but when I try to install the Agent on the worker nodes, the interface returns the message "Failed to copy installation files" for each node. I found some solutions for CDP 7.1.1, but they did not resolve my problem. How can I proceed? Thanks.
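Before retrying the wizard, it can help to rule out the usual host-level causes of this message. A short checklist as shell commands to run on each node, assuming CentOS 7 as in the later posts; these are suggestions, not a confirmed root cause:

getenforce                       # SELinux should be Permissive or Disabled during installation
systemctl is-active firewalld    # the firewall is commonly disabled for the trial install
cat /etc/hosts                   # every node should resolve all cluster hostnames consistently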
... View more
Labels: Cloudera Data Platform (CDP)