Member since: 02-27-2023
Posts: 37
Kudos Received: 3
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9720 | 05-09-2023 03:20 AM |
| | 5182 | 05-09-2023 03:16 AM |
| | 3833 | 03-30-2023 10:41 PM |
| | 26086 | 03-30-2023 07:25 PM |
03-28-2023
11:59 PM
Hi all, I am exploring the features in my CDP cluster. I added the Spark service to the cluster, but when I try to run pyspark in a terminal, I get the following error:

```
Type "help", "copyright", "credits" or "license" for more information.
Warning: Ignoring non-Spark config property: hdfs
Warning: Ignoring non-Spark config property: ExitCodeException
Warning: Ignoring non-Spark config property: at
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/03/29 02:47:40 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
23/03/29 02:47:43 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
23/03/29 02:47:49 ERROR spark.SparkContext: Error initializing SparkContext.
java.io.FileNotFoundException: File file:/home/asl/2023-03-28 23:17:30,775 WARN [TGT Renewer for asl@MY.CLOUDERA.LAB] security.UserGroupInformation (UserGroupInformation.java:run(1026)) - Exception encountered while running the renewal command for asl@MY.CLOUDERA.LAB. (TGT end time:1680069424000, renewalFailures: 0, renewalFailuresTotal: 1) does not exist
	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:755)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1044)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:745)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:456)
	at org.apache.spark.deploy.history.EventLogFileWriter.requireLogBaseDirAsDirectory(EventLogFileWriters.scala:76)
	at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:220)
	at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:84)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:536)
	at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:238)
	at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
	at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
23/03/29 02:47:49 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
23/03/29 02:47:49 WARN spark.SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:238)
py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
py4j.GatewayConnection.run(GatewayConnection.java:238)
java.lang.Thread.run(Thread.java:748)
23/03/29 02:47:49 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
23/03/29 02:47:54 ERROR spark.SparkContext: Error initializing SparkContext.
java.io.FileNotFoundException: File file:/home/asl/2023-03-28 23:17:30,775 WARN [TGT Renewer for asl@MY.CLOUDERA.LAB] security.UserGroupInformation (UserGroupInformation.java:run(1026)) - Exception encountered while running the renewal command for asl@MY.CLOUDERA.LAB. (TGT end time:1680069424000, renewalFailures: 0, renewalFailuresTotal: 1) does not exist
	(same stack trace as above)
23/03/29 02:47:54 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/shell.py:45: UserWarning: Failed to initialize Spark session.
  warnings.warn("Failed to initialize Spark session.")
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/shell.py", line 41, in <module>
    spark = SparkSession._create_shell_session()
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/sql/session.py", line 583, in _create_shell_session
    return SparkSession.builder.getOrCreate()
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/sql/session.py", line 173, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 369, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 136, in __init__
    conf, jsc, profiler_cls)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 198, in _do_init
    self._jsc = jsc or self._initialize_context(self._conf._jconf)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 308, in _initialize_context
    return self._jvm.JavaSparkContext(jconf)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
    answer, self._gateway_client, None, self._fqn)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.io.FileNotFoundException: File file:/home/asl/2023-03-28 23:17:30,775 WARN [TGT Renewer for asl@MY.CLOUDERA.LAB] security.UserGroupInformation (UserGroupInformation.java:run(1026)) - Exception encountered while running the renewal command for asl@MY.CLOUDERA.LAB. (TGT end time:1680069424000, renewalFailures: 0, renewalFailuresTotal: 1) does not exist
	(same stack trace as above)
```

I can't figure out the cause of this issue. Please kindly help me out. Thank you.
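One detail worth checking (this is a guess, not confirmed in the thread): the `Warning: Ignoring non-Spark config property: hdfs / ExitCodeException / at` lines, plus a `FileNotFoundException` whose "path" is literally a log line (`file:/home/asl/2023-03-28 23:17:30,775 WARN ...`), suggest that stray log output has ended up inside a Spark config file such as `spark-defaults.conf`, so Spark reads a log line as the event-log directory. A minimal sketch of a checker that flags lines in a spark-defaults-style file that are not comments, blanks, or `spark.*` properties (`suspicious_lines` is a hypothetical helper, not part of Spark):

```python
# Sketch: flag suspicious lines in a spark-defaults.conf-style file.
# Assumption (not from the thread): the "Ignoring non-Spark config
# property" warnings mean stray log text was pasted into the config.

def suspicious_lines(text):
    """Return (line_number, line) pairs that are neither comments,
    blanks, nor 'spark.* value' properties."""
    bad = []
    for i, line in enumerate(text.splitlines(), 1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blanks and comments are fine
        key = stripped.split(None, 1)[0]
        if not key.startswith("spark."):
            bad.append((i, stripped))
    return bad

sample = """\
spark.eventLog.enabled true
spark.eventLog.dir hdfs:///user/spark/applicationHistory
2023-03-28 23:17:30,775 WARN [TGT Renewer] stray log line
"""
print(suspicious_lines(sample))  # flags only line 3
```

If a check like this flags anything, removing the stray lines from the client-side Spark config (or redeploying client configuration from Cloudera Manager) would be the first thing to try.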
Labels:
- Apache Spark
03-23-2023
08:20 PM
@Shelton Thank you for your support. I am using PostgreSQL as the backend database. Here is my database server configuration file; anyone from anywhere should be able to connect to the database, and I confirmed that the database user can access the hue database, as shown below. After turning on debug mode for Hue, I get the following error message: it is about a failure to insert records into a hue database table. I therefore tried the insert manually, and it succeeds. After refreshing the tab in the browser, no error pops up, but I still can't access the page. Here is the rungunicornserver.log output, along with some database configuration settings from the Cloudera console. Besides, I have enabled Kerberos, and the host from which I access the Hue UI is not in the Kerberos realm; however, the host and my cluster are on the same network and can ping each other. Please let me know if I need to provide any additional information. Thank you very much.
03-22-2023
08:24 PM
Here is more information from the rungunicornserver.log. Please help me out. Thanks a lot.
03-22-2023
07:44 PM
Here is more information, from the rungunicornserver.log:

```
[22/Mar/2023 19:39:50 -0700] middleware INFO Processing exception: syntax error at or near "ON"
LINE 1: ...oups" ("user_id", "group_id") VALUES (1100713, 1) ON CONFLIC...
                                                             ^
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
psycopg2.errors.SyntaxError: syntax error at or near "ON"
LINE 1: ...oups" ("user_id", "group_id") VALUES (1100713, 1) ON CONFLIC...
                                                             ^

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/local/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/ext-py3/django-axes-5.13.0/axes/decorators.py", line 11, in inner
    return func(request, *args, **kwargs)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/src/desktop/auth/views.py", line 110, in dt_login
    is_first_login_ever = first_login_ever()
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/src/desktop/auth/views.py", line 91, in first_login_ever
    if hasattr(backend, 'is_first_login_ever') and backend.is_first_login_ever():
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/src/desktop/auth/backend.py", line 322, in is_first_login_ever
    return User.objects.exclude(id=install_sample_user().id).count() == 0
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/apps/useradmin/src/useradmin/models.py", line 371, in install_sample_user
    user.groups.add(default_group)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/fields/related_descriptors.py", line 950, in add
    self._add_items(
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/fields/related_descriptors.py", line 1130, in _add_items
    self.through._default_manager.using(db).bulk_create([
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/query.py", line 514, in bulk_create
    returned_columns = self._batched_insert(
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/query.py", line 1293, in _batched_insert
    self._insert(item, fields=fields, using=self.db, ignore_conflicts=ignore_conflicts)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/query.py", line 1270, in _insert
    return query.get_compiler(using=using).execute_sql(returning_fields)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1416, in execute_sql
    cursor.execute(sql, params)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: syntax error at or near "ON"
LINE 1: ...oups" ("user_id", "group_id") VALUES (1100713, 1) ON CONFLIC...
```
03-22-2023
02:41 AM
Hi all, I installed Hue on my CDP 7.1.8 cluster recently but am not able to access the UI. Here are the screenshots taken when trying to access the UI. I have tried a solution found in the Cloudera Community and checked the Bind Hue Server to Wildcard Address option, but the problem isn't solved. Please help me out with this issue. Thanks in advance.
03-20-2023
11:03 PM
1 Kudo
@pkr Thanks a lot; the system can now reach the link you provided. The documentation seems a bit misleading, though, as I directly copied what is shown on the web page.
03-20-2023
09:18 PM
Hi all, I have installed CDP 7.1.8 runtime and CM 7.7.1. When I try to download the parcel for CFM, I get the error shown below. On my VM, I can download the parcel using wget, so I don't know why it shows 404 Not Found. Please kindly help me out; thanks in advance.
03-16-2023
12:17 AM
The problem is solved. It was caused by file permissions, and I fixed it by executing the following commands:

```
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys
```
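The commands above work because sshd (with StrictModes, the default) refuses public-key authentication when `~/.ssh` or `authorized_keys` are accessible to group or others. A minimal sketch of a permission check along those lines (`perms_ok` is a hypothetical helper, not part of Cloudera Manager, and the example uses a scratch directory standing in for `~/.ssh`):

```python
# Sketch: verify the permissions that sshd's StrictModes expects.
# Hypothetical helper for illustration only.
import os
import stat
import tempfile

def perms_ok(path, allowed):
    """True if `path` grants no permission bits outside `allowed`
    (e.g. 0o700 for ~/.ssh, 0o600 for authorized_keys)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & ~allowed == 0

# Example against a scratch directory standing in for ~/.ssh:
d = tempfile.mkdtemp()
os.chmod(d, 0o700)
print(perms_ok(d, 0o700))  # True: only the owner can access it
```

Running a check like this against the actual `~/.ssh` of the install user would confirm whether the agent-install SSH failure was really a permissions problem.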
03-15-2023
11:53 PM
I am setting up a stand-alone CDP cluster but am getting errors during the agent installation step. I can SSH to this host from another instance using both a password and the private key, so I can't figure out the reason. Here are the error messages. Please let me know if I need to provide more information. Thanks in advance!
Labels:
- Cloudera
- Cloudera on premises