Member since: 03-02-2021
Posts: 25
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1034 | 03-16-2021 08:53 AM |
10-22-2021
12:25 AM
@PabitraDas The objective is to copy data between two distinct clusters
10-20-2021
11:36 PM
@Tylenol Thanks for sharing the information. What would be the ideal solution to copy data between two CDP clusters?
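For reference, the usual tool for this is DistCp; a minimal invocation between the two clusters might look like the line below (the NameNode hostnames and paths are placeholders, not from an actual cluster):

hadoop distcp hdfs://nn-cluster1.example.com:8020/data/src hdfs://nn-cluster2.example.com:8020/data/dst

On kerberized CDP clusters the two realms would typically also need cross-realm trust (or a shared KDC) so that DistCp can authenticate to both NameNodes.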
10-15-2021
05:15 AM
When I try to access the Zeppelin interpreter page, I keep getting the error "You don't have permission on this page".
10-07-2021
09:36 PM
Hi, I am trying to configure ViewFs with HDFS Federation. I did this with HDP 3 and am now trying to find the procedure to implement it on a CDP cluster. Could someone please share the procedure to do the same? The objective is to share data between two CDP clusters. Thanks
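For context, on HDP this was a client-side mount table in core-site.xml; a minimal sketch of such a mount table (the cluster name, hosts, and paths below are placeholders) looks like:

<property>
  <name>fs.defaultFS</name>
  <value>viewfs://FederatedCluster</value>
</property>
<property>
  <name>fs.viewfs.mounttable.FederatedCluster.link./data</name>
  <value>hdfs://nn-cluster1.example.com:8020/data</value>
</property>
<property>
  <name>fs.viewfs.mounttable.FederatedCluster.link./shared</name>
  <value>hdfs://nn-cluster2.example.com:8020/shared</value>
</property>

What I am after is where to place the equivalent properties on CDP, presumably through the Cloudera Manager advanced configuration snippet (safety valve) for core-site.xml.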
06-01-2021
03:02 AM
This is on a Kerberized HDP Cluster
Tags:
- Kerberos
06-01-2021
01:44 AM
On a kerberized HDP cluster, when I try to run DFSIO it fails with the error below:
##########################################
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:563)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
Caused by: org.apache.hadoop.security.AccessControlException: User hdfs does not have permission to submit application_1622150236559_0013 to queue default
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:429)
... 12 more
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1444)
at org.apache.hadoop.ipc.Client.call(Client.java:1354)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy12.submitApplication(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:289)
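The relevant part is the AccessControlException: the hdfs user is not allowed to submit to the default queue. Assuming the Capacity Scheduler (the HDP default), the submit ACL for that queue is controlled by a property like the one below in capacity-scheduler.xml; the value here is only an illustration:

<property>
  <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
  <!-- value format: comma-separated users, a space, then comma-separated groups -->
  <value>hdfs,yarn hadoop</value>
</property>

Running the benchmark as a regular user with a valid Kerberos ticket, rather than as hdfs, avoids having to change the ACL at all.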
Labels:
- HDFS
05-11-2021
11:48 PM
Hi, I am facing the same issue. Did you find a solution to the above problem?
05-04-2021
10:34 PM
1 Kudo
Thanks @vidanimegh I have raised the case with Cloudera 🙂
05-04-2021
10:03 AM
Thanks @vidanimegh. I don't see any rpms in the ranger-1.2.0.3.1.4.0-315-admin.tar.gz file, so I don't think there is any option to install it with YUM. Are you suggesting that I use the install scripts? Below is the output after untarring the file:
# ls -ls
total 320
0 drwxr-xr-x 2 cecuser users 107 Aug 23 2019 bin
8 -r-xr--r-- 1 cecuser users 4288 Aug 23 2019 changepasswordutil.py
8 -r-xr--r-- 1 cecuser users 5126 Aug 23 2019 changeusernameutil.py
0 drwxrwxrwx 4 cecuser users 60 Aug 23 2019 contrib
0 drwxr-xr-x 3 root root 17 May 4 12:57 cred
0 drwxrwxrwx 7 cecuser users 85 Aug 23 2019 db
88 -r-xr--r-- 1 cecuser users 89153 Aug 23 2019 dba_script.py
64 -r-xr--r-- 1 cecuser users 64379 Aug 23 2019 db_setup.py
12 -r-xr--r-- 1 cecuser users 9876 Aug 23 2019 deleteUserGroupUtil.py
0 drwxrwxrwx 5 cecuser users 165 Aug 23 2019 ews
12 -rwx------ 1 cecuser users 8555 Aug 23 2019 install.properties
0 drwxr-xr-x 3 root root 17 May 4 12:57 jisql
4 -r-xr--r-- 1 cecuser users 3054 Aug 23 2019 ranger_credential_helper.py
20 -r-xr--r-- 1 cecuser users 17900 Aug 23 2019 restrict_permissions.py
8 -r-xr--r-- 1 cecuser users 6480 Aug 23 2019 rolebasedusersearchutil.py
4 -r-xr--r-- 1 cecuser users 3936 Aug 23 2019 set_globals.sh
4 -r-xr--r-- 1 cecuser users 2855 Aug 23 2019 setup_authentication.sh
60 -r-xr--r-- 1 cecuser users 59989 Aug 23 2019 setup.sh
0 drwxr-xr-x 2 root root 70 May 4 12:57 templates-upgrade
4 -r-xr--r-- 1 cecuser users 1920 Aug 23 2019 update_property.py
16 -r-xr--r-- 1 cecuser users 13281 Aug 23 2019 upgrade_admin.py
4 -r-xr--r-- 1 cecuser users 1247 Aug 23 2019 upgrade.sh
4 -r--r--r-- 1 cecuser users 17 Aug 23 2019 version
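For what it's worth, the layout above points to the script-based install path rather than YUM: the tarball ships install.properties plus setup.sh. A rough sketch of that route, not verified on this release, would be:

cd ranger-1.2.0.3.1.4.0-315-admin
vi install.properties    # e.g. db_root_password, db_password, rangerAdmin_password, policymgr_external_url
./setup.sh               # run as root

Whether a script-based install like this can coexist with an Ambari-managed HDP 3.1.4 stack is a separate question.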
05-04-2021
06:55 AM
I do have access, and I don't see the Ranger admin package there. Below is the output from the repo:
Parent Directory
ranger_3_1_4_0_315-hbase-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 43.36MB
ranger_3_1_4_0_315-hdfs-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 56.41MB
ranger_3_1_4_0_315-hive-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 41.71MB
ranger_3_1_4_0_315-kafka-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 75.58MB
ranger_3_1_4_0_315-kms-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 89.63MB
ranger_3_1_4_0_315-knox-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 54.30MB
ranger_3_1_4_0_315-solr-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 45.02MB
ranger_3_1_4_0_315-storm-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 80.28MB
ranger_3_1_4_0_315-tagsync-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 31.15MB
ranger_3_1_4_0_315-usersync-1.2.0.3.1.4.0-315.ppc64le.rpm 2021-01-18 14:33 14.94MB
ranger_3_1_4_0_315-yarn-plugin-1.2.0.3.1.4.0-315.ppc64le.rpm
05-04-2021
05:10 AM
It's an existing cluster to which I am trying to add Ranger.
05-04-2021
12:45 AM
Ranger installation fails because the Ranger fileset is missing from the repo. I don't see any ranger-admin package in the repository https://archive.cloudera.com/p/HDP/3.x/3.1.4.0/centos7-ppc/ranger/
raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install ranger_3_1_4_0_315-admin', exited with code '1', message: 'https://archive.cloudera.com/p/HDP/3.x/3.1.4.0/centos7-ppc/ranger/ranger_3_1_4_0_315-admin-1.2.0.3.1.4.0-315.ppc64le.rpm: [Errno 14] Error 404 - The requested URL returned error: 404 Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article
04-26-2021
07:27 AM
Getting the error below after integrating with Kerberos:
Error starting ResourceManager
org.apache.hadoop.service.ServiceStateException: java.io.IOException: DestHost:destPort ces1pub.pbm.ihost.com:8020 , LocalHost:localPort cdp2pub.pbm.ihost.com/129.40.6.167:0. Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:866)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1269)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1310)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1306)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1306)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1357)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1547)
Caused by: java.io.IOException: DestHost:destPort ces1pub.pbm.ihost.com:8020 , LocalHost:localPort cdp2pub.pbm.ihost.com/129.40.6.167:0. Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1566)
at org.apache.hadoop.ipc.Client.call(Client.java:1508)
at org.apache.hadoop.ipc.Client.call(Client.java:1405)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy89.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:666)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
at com.sun.proxy.$Proxy90.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2463)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2439)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1476)
at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1473)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1490)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1465)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2374)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$3.run(FileSystemRMStateStore.java:679)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$3.run(FileSystemRMStateStore.java:676)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$FSAction.runWithRetries(FileSystemRMStateStore.java:792)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.mkdirsWithRetries(FileSystemRMStateStore.java:682)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.startInternal(FileSystemRMStateStore.java:160)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.serviceStart(RMStateStore.java:824)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
... 12 more
Caused by: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:851)
at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
at org.apache.hadoop.ipc.Client.call
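The key message is that the ResourceManager on cdp2pub (the kerberized side) is being told by the NameNode at ces1pub.pbm.ihost.com:8020 to fall back to SIMPLE auth, which suggests that the remote HDFS is not kerberized or security is not enabled there. As a rough check, the relevant core-site.xml property on both clusters would be:

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

If a secure client really must talk to an insecure cluster, ipc.client.fallback-to-simple-auth-allowed=true can be set on the client side, but enabling Kerberos consistently on both ends is the usual fix.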
Labels:
- Cloudera Data Platform (CDP)
- Kerberos
04-22-2021
06:55 AM
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh -H -E /usr/hdp/3.1.4.0-315/hadoop/bin/hdfs --config /usr/hdp/3.1.4.0-315/hadoop/conf --daemon start datanode' returned 1. ERROR: Cannot set priority of datanode process 45359
stdout:
2021-04-22 03:25:38,875 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315
2021-04-22 03:25:38,931 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf
2021-04-22 03:25:39,273 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.4.0-315 -> 3.1.4.0-315
2021-04-22 03:25:39,289 - Using hadoop conf dir: /usr/hdp/3.1.4.0-315/hadoop/conf
2021-04-22 03:25:39,292 - Group['hdfs'] {}
2021-04-22 03:25:39,294 - Group['hadoop'] {}
2021-04-22 03:25:39,295 - Group['users'] {}
2021-04-22 03:25:39,296 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2021-04-22 03:25:39,297 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
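"ERROR: Cannot set priority of datanode process <pid>" from Ambari usually just means the DataNode JVM exited right after start; the real cause is normally in the DataNode's own logs rather than in this agent output. A rough place to look (the path below is the HDP default log directory and may differ on this cluster):

tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname -f).log
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname -f).out

On a kerberized HDP cluster a common culprit is the secure DataNode requirement (privileged ports, or SASL via dfs.data.transfer.protection), which shows up in those logs.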
Labels:
- Kerberos
03-18-2021
03:23 AM
One more thing: I only see livy, md, and angular as interpreter options. Why am I not seeing the python and sh options as well?
03-18-2021
03:21 AM
@Scharan the Livy URL is: http://localhost:8998 When I tried to access the web UI it was not responding, so I restarted the Livy server:
[root@cdp1pub spark]# export SPARK_HOME=/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/spark/
[root@cdp1pub hadoop]# export HADOOP_CONF_DIR=/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/etc/hadoop
[root@cdp1pub hadoop]# /opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/livy2/bin/livy-server start
starting java -cp /opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/livy2/jars/*:/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/livy2/conf:/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/etc/hadoop: org.apache.livy.server.LivyServer, logging to /opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/livy2/logs/livy-root-server.out
[root@cdp1pub hadoop]# /opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/livy2/bin/livy-server status
livy-server is running (pid: 24354)
[root@cdp1pub hadoop]#
Now I am able to log in and %pyspark is not throwing any error. Thanks a lot for the help.
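For anyone hitting the same symptom, a quick way to confirm Livy itself is responding (independently of Zeppelin) is to query its REST API; this assumes the default port 8998:

curl http://localhost:8998/sessions

A JSON response such as {"from":0,"total":0,"sessions":[]} indicates the server is up.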
03-17-2021
09:16 AM
If I try to check the interpreter settings in the Zeppelin UI, I get the below error, which shows that I don't have the required permissions. I have logged in as the admin user.
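For context, the interpreter page in Zeppelin is normally gated by the [urls] section of shiro.ini, along the lines of the snippet below (the role name admin_role is only a placeholder; the exact role depends on how shiro.ini was generated for this cluster), so the logged-in account has to carry that role:

[urls]
/api/interpreter/** = authc, roles[admin_role]
/api/configurations/** = authc, roles[admin_role]
/api/credential/** = authc, roles[admin_role]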
03-17-2021
03:45 AM
%pyspark
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at java.net.Socket.connect(Socket.java:556)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1226)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1162)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:990)
at org.springframework.http.client.SimpleBufferingClientHttpRequest.executeInternal(SimpleBufferingClientHttpRequest.java:78)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:661)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:622)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:540)
at org.apache.zeppelin.livy.BaseLivyInterpreter.callRestAPI(BaseLivyInterpreter.java:706)
at org.apache.zeppelin.livy.BaseLivyInterpreter.callRestAPI(BaseLivyInterpreter.java:686)
at org.apache.zeppelin.livy.BaseLivyInterpreter.getLivyVersion(BaseLivyInterpreter.java:472)
at org.apache.zeppelin.livy.BaseLivyInterpreter.open(BaseLivyInterpreter.java:161)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
03-16-2021
11:48 PM
I am able to log in; however, if I click on the interpreter option I get an error saying I don't have the permission.
03-16-2021
03:42 AM
I have installed Zeppelin on my CDP cluster. Since the "anonymous" ID is disabled by default, what user ID can I use to log in to the Zeppelin UI? I tried using:
admin = password1
user1 = password2
user2 = password3
But none of them seems to work.
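For reference, when no external authentication source (LDAP/AD) is configured, Zeppelin local logins normally come from the [users] section of shiro.ini, which on CDP is presumably managed through the Zeppelin service configuration in Cloudera Manager; a minimal sketch with made-up credentials:

[users]
admin = some_password, admin
user1 = another_password, role1

Each entry is "username = password, role1, role2", so the usable logins depend entirely on what that section contains on this cluster.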
03-12-2021
01:50 AM
1 Kudo
I modified the permissions on /user to 777, after which I was able to install Spark and the History Server:
sudo -u hdfs hadoop fs -chmod 777 /user
03-10-2021
05:59 PM
Permissions seem to be okay:
[root@cdp1pub ~]# hdfs dfs -ls /user/
Found 4 items
drwx------ - hdfs supergroup 0 2021-03-02 00:42 /user/hdfs
drwxrwx--- - mapred supergroup 0 2021-03-01 09:10 /user/history
drwxr-x--x - spark spark 0 2021-03-10 05:26 /user/spark
drwxr-xr-x - yarn supergroup 0 2021-03-02 00:37 /user/yarn
[root@cdp1pub ~]# hdfs dfs -ls /user/spark
Found 2 items
drwxrwxrwt - spark spark 0 2021-03-10 05:26 /user/spark/applicationHistory
drwxrwxrwt - spark spark 0 2021-03-10 05:26 /user/spark/driverLogs
[root@cdp1pub ~]#
I tried to restart the History Server and it still fails.
################### Logs
Wed Mar 10 20:57:31 EST 2021
JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera
Using -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid7765.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh as CSD_JAVA_OPTS
Using /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER as conf dir
Using scripts/control.sh as process script
CONF_DIR=/var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER
CMF_CONF_DIR=
Wed Mar 10 20:57:31 EST 2021: Running Spark CSD control script...
Wed Mar 10 20:57:31 EST 2021: Detected CDH_VERSION of [7]
Wed Mar 10 20:57:31 EST 2021: Starting Spark History Server
Running [/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/spark/bin/spark-class org.apache.spark.deploy.history.HistoryServer --properties-file /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER/spark-conf/spark-history-server.conf]
Wed Mar 10 20:57:38 EST 2021
JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera
Using -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid8080.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh as CSD_JAVA_OPTS
Using /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER as conf dir
Using scripts/control.sh as process script
CONF_DIR=/var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER
CMF_CONF_DIR=
Wed Mar 10 20:57:39 EST 2021: Running Spark CSD control script...
Wed Mar 10 20:57:39 EST 2021: Detected CDH_VERSION of [7]
Wed Mar 10 20:57:39 EST 2021: Starting Spark History Server
Running [/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/spark/bin/spark-class org.apache.spark.deploy.history.HistoryServer --properties-file /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER/spark-conf/spark-history-server.conf]
Wed Mar 10 20:57:48 EST 2021
JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera
Using -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid8375.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh as CSD_JAVA_OPTS
Using /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER as conf dir
Using scripts/control.sh as process script
CONF_DIR=/var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER
CMF_CONF_DIR=
Wed Mar 10 20:57:48 EST 2021: Running Spark CSD control script...
Wed Mar 10 20:57:48 EST 2021: Detected CDH_VERSION of [7]
Wed Mar 10 20:57:48 EST 2021: Starting Spark History Server
Running [/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/spark/bin/spark-class org.apache.spark.deploy.history.HistoryServer --properties-file /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER/spark-conf/spark-history-server.conf]
Wed Mar 10 20:57:57 EST 2021
JAVA_HOME=/usr/java/jdk1.8.0_232-cloudera
Using -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid8757.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh as CSD_JAVA_OPTS
Using /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER as conf dir
Using scripts/control.sh as process script
CONF_DIR=/var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER
CMF_CONF_DIR=
Wed Mar 10 20:57:57 EST 2021: Running Spark CSD control script...
Wed Mar 10 20:57:57 EST 2021: Detected CDH_VERSION of [7]
Wed Mar 10 20:57:57 EST 2021: Starting Spark History Server
Running [/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/lib/spark/bin/spark-class org.apache.spark.deploy.history.HistoryServer --properties-file /var/run/cloudera-scm-agent/process/1546336662-spark_on_yarn-SPARK_YARN_HISTORY_SERVER/spark-conf/spark-history-server.conf]
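The stderr above only shows the supervisor relaunching the History Server every few seconds; the actual failure reason should be in the role's own log. A rough way to look (the path is the usual Cloudera Manager default and may differ on this cluster):

tail -n 200 /var/log/spark/spark-history-server-*.log* 2>/dev/null || ls -lrt /var/log/spark/

Typical causes reported there are the event log directory (spark.history.fs.logDirectory) not being readable by the spark user, or the History Server port already being in use.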
03-10-2021
03:08 AM
+ BASIC_GC_TUNING_ARGS=
+ case $JAVA_MAJOR in
+ BASIC_GC_TUNING_ARGS=' '
+ CSD_GC_ARGS=' '
+ CSD_JAVA_OPTS+=' '
++ replace_pid -XX:+HeapDumpOnOutOfMemoryError '-XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid{{PID}}.hprof' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh
++ echo -XX:+HeapDumpOnOutOfMemoryError '-XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid{{PID}}.hprof' -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh
++ sed 's#{{PID}}#29430#g'
+ export 'CSD_JAVA_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid29430.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh'
+ CSD_JAVA_OPTS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid29430.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh'
+ echo 'Using -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/spark_on_yarn_spar40365358-SPARK_YARN_HISTORY_SERVER-42af0f75a56c8c9b8b467a684_pid29430.hprof -XX:OnOutOfMemoryError=/opt/cloudera/cm-agent/service/common/killparent.sh as CSD_JAVA_OPTS'
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p0.6300266/meta/cdh_env.sh ']'
+ OLD_IFS='
'
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)