Member since: 02-27-2023
Posts: 37
Kudos Received: 3
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4993 | 05-09-2023 03:20 AM |
| | 2535 | 05-09-2023 03:16 AM |
| | 2381 | 03-30-2023 10:41 PM |
| | 17919 | 03-30-2023 07:25 PM |
03-29-2023
08:58 PM
@nikhilm Thank you for your reply. May I know what I should input for nameserviceXYZ? Could you give me an example, if possible?
03-29-2023
08:52 PM
Hi all, I am practicing the Experiments feature in CDSW. When I try to run an experiment, the session fails with the error below:

-------error-------
import pickle
import cdsw

model = pickle.load(open('model.pkl', 'rb'))
ModuleNotFoundError: No module named 'sklearn'

ModuleNotFoundError Traceback (most recent call last)
/tmp/ipykernel_115/2912545845.py in <module>
----> 1 model = pickle.load(open('model.pkl', 'rb'))
ModuleNotFoundError: No module named 'sklearn'

Engine exited with status 1.
-------error-------

However, in the experiment's build session I can see that sklearn was installed into the Docker container:

-----build-log-----
Step 1/5 : FROM docker.repository.cloudera.com/cloudera/cdsw/ml-runtime-workbench-python3.8-standard:2022.11.2-b2
 ---> 7ffae291c607
Step 2/5 : WORKDIR /home/cdsw
 ---> 22e8b9772338
Removing intermediate container 404b7da93746
Step 3/5 : COPY sources /home/cdsw
 ---> 90d2c92ea316
Removing intermediate container 959005050d3c
Step 4/5 : RUN su cdsw -c "mkdir -p ${R_LIBS_USER:-/home/cdsw/R}" && chown -R cdsw:cdsw /home/cdsw && printf "%s\n" 'export ALL_PROXY="" HTTPS_PROXY="" HTTP_PROXY="" MAX_TEXT_LENGTH="9999999" NO_PROXY="" PYTHONPATH="/usr/local/lib/python2.7/site-packages:/usr/local/lib/python3.6/site-packages:/usr/local/lib/anaconda_python3/site-packages" all_proxy="" http_proxy="" https_proxy="" no_proxy="" && /bin/bash --login -c "${1}"' > /tmp/.buildenv && chmod u+x /tmp/.buildenv && chown cdsw:cdsw /tmp/.buildenv && chmod u+x "/home/cdsw/cdsw-build.sh" && su cdsw -c "PATH=${PATH} /tmp/.buildenv /home/cdsw/cdsw-build.sh" && :
 ---> Running in 0c052f60686b
Collecting sklearn
  Downloading sklearn-0.0.post1.tar.gz (3.6 kB)
Building wheels for collected packages: sklearn
  Building wheel for sklearn (setup.py): started
  Building wheel for sklearn (setup.py): finished with status 'done'
  Created wheel for sklearn: filename=sklearn-0.0.post1-py3-none-any.whl size=2343 sha256=8dbc420f70aee919b1d2719ee53b89f5822502f59707d87c14362581489fdccb
  Stored in directory: /home/cdsw/.cache/pip/wheels/14/25/f7/1cc0956978ae479e75140219088deb7a36f60459df242b1a72
Successfully built sklearn
Installing collected packages: sklearn
Successfully installed sklearn-0.0.post1
 ---> 04c71dc6db3d
Removing intermediate container 0c052f60686b
Step 5/5 : CMD /bin/bash
 ---> Running in 561a448372ae
 ---> 4dcc7434f811
Removing intermediate container 561a448372ae
Successfully built 4dcc7434f811
Start Pushing image to [100.77.0.117:5000/4fcc178a-6e6e-4ed1-bd5b-7743775f9a5a]
Finish Pushing image to [100.77.0.117:5000/4fcc178a-6e6e-4ed1-bd5b-7743775f9a5a]
-----build-log-----

Therefore, I have no idea why it fails. Could someone point out the reason? Thank you.
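A note on the likely cause (an inference from the build log, not something stated in the post): pip built only the tiny `sklearn-0.0.post1` stub (a 2.3 kB wheel) and never downloaded scikit-learn itself, so the real library probably never made it into the image; the usual fix is to install `scikit-learn` (not `sklearn`) in cdsw-build.sh. Below is a minimal sketch of a defensive loader that surfaces this failure mode clearly; `safe_load` is a hypothetical helper, not part of the cdsw API:

```python
import importlib.util
import pickle


def safe_load(path, required_module):
    """Load a pickle only after confirming its dependency is importable.

    pickle.load() re-imports the modules that defined the pickled
    objects, so a model trained with scikit-learn needs a working
    'sklearn' module at load time. The sklearn-0.0.post1 stub in the
    build log installs cleanly but provides no importable module,
    which reproduces exactly this ModuleNotFoundError.
    """
    if importlib.util.find_spec(required_module) is None:
        raise ModuleNotFoundError(
            "No module named {!r}; install 'scikit-learn' "
            "(not 'sklearn') in cdsw-build.sh".format(required_module)
        )
    with open(path, "rb") as f:
        return pickle.load(f)
```

The same check works interactively in a session: `importlib.util.find_spec('sklearn')` returning `None` while `pip list` shows `sklearn 0.0.post1` is the signature of the stub package.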
03-28-2023
11:59 PM
Hi all, I am exploring the features in my CDP cluster. I added the Spark service to the cluster, but when I try to study Spark by running pyspark in a terminal, I get the following error:

Type "help", "copyright", "credits" or "license" for more information.
Warning: Ignoring non-Spark config property: hdfs
Warning: Ignoring non-Spark config property: ExitCodeException
Warning: Ignoring non-Spark config property: at
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/03/29 02:47:40 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
23/03/29 02:47:43 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
23/03/29 02:47:49 ERROR spark.SparkContext: Error initializing SparkContext.
java.io.FileNotFoundException: File file:/home/asl/2023-03-28 23:17:30,775 WARN [TGT Renewer for asl@MY.CLOUDERA.LAB] security.UserGroupInformation (UserGroupInformation.java:run(1026)) - Exception encountered while running the renewal command for asl@MY.CLOUDERA.LAB. (TGT end time:1680069424000, renewalFailures: 0, renewalFailuresTotal: 1) does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:755)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1044)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:745)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:456)
    at org.apache.spark.deploy.history.EventLogFileWriter.requireLogBaseDirAsDirectory(EventLogFileWriters.scala:76)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:220)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:84)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:536)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
23/03/29 02:47:49 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
23/03/29 02:47:49 WARN spark.SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:238)
py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
py4j.GatewayConnection.run(GatewayConnection.java:238)
java.lang.Thread.run(Thread.java:748)
23/03/29 02:47:49 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
23/03/29 02:47:54 ERROR spark.SparkContext: Error initializing SparkContext.
java.io.FileNotFoundException: File file:/home/asl/2023-03-28 23:17:30,775 WARN [TGT Renewer for asl@MY.CLOUDERA.LAB] security.UserGroupInformation (UserGroupInformation.java:run(1026)) - Exception encountered while running the renewal command for asl@MY.CLOUDERA.LAB. (TGT end time:1680069424000, renewalFailures: 0, renewalFailuresTotal: 1) does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:755)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1044)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:745)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:456)
    at org.apache.spark.deploy.history.EventLogFileWriter.requireLogBaseDirAsDirectory(EventLogFileWriters.scala:76)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:220)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:84)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:536)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
23/03/29 02:47:54 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/shell.py:45: UserWarning: Failed to initialize Spark session.
  warnings.warn("Failed to initialize Spark session.")
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/shell.py", line 41, in <module>
    spark = SparkSession._create_shell_session()
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/sql/session.py", line 583, in _create_shell_session
    return SparkSession.builder.getOrCreate()
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/sql/session.py", line 173, in getOrCreate
    sc = SparkContext.getOrCreate(sparkConf)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 369, in getOrCreate
    SparkContext(conf=conf or SparkConf())
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 136, in __init__
    conf, jsc, profiler_cls)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 198, in _do_init
    self._jsc = jsc or self._initialize_context(self._conf._jconf)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/pyspark/context.py", line 308, in _initialize_context
    return self._jvm.JavaSparkContext(jconf)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
    answer, self._gateway_client, None, self._fqn)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.io.FileNotFoundException: File file:/home/asl/2023-03-28 23:17:30,775 WARN [TGT Renewer for asl@MY.CLOUDERA.LAB] security.UserGroupInformation (UserGroupInformation.java:run(1026)) - Exception encountered while running the renewal command for asl@MY.CLOUDERA.LAB. (TGT end time:1680069424000, renewalFailures: 0, renewalFailuresTotal: 1) does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:755)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1044)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:745)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:456)
    at org.apache.spark.deploy.history.EventLogFileWriter.requireLogBaseDirAsDirectory(EventLogFileWriters.scala:76)
    at org.apache.spark.deploy.history.SingleEventLogFileWriter.start(EventLogFileWriters.scala:220)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:84)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:536)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

I can't figure out the cause of this issue. Please kindly help me out. Thank you.
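A hedged observation on this log (my reading, not confirmed in the thread): the `FileNotFoundException` "path" is literally a `WARN [TGT Renewer ...]` log line under `file:/home/asl/`, and the earlier warnings `Ignoring non-Spark config property: hdfs / ExitCodeException / at` point the same way — stray log output appears to have ended up inside the Spark properties file, most likely in `spark.eventLog.dir`. A small sketch of a detector for such lines follows; the function name and the simple `spark.*` key heuristic are my assumptions, not Spark behavior:

```python
import re


def suspicious_conf_lines(conf_text):
    """Return (line_number, line) pairs that do not look like
    'spark.<key> <value>' entries in a spark-defaults.conf.

    Blank lines and '#' comments are allowed; any other line whose
    first token is not a spark.* property key is flagged -- which is
    exactly what Spark itself warns about with
    'Ignoring non-Spark config property: ...'.
    """
    flagged = []
    for lineno, line in enumerate(conf_text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blanks and comments are fine
        key = stripped.split(None, 1)[0]
        if not re.match(r"^spark\.[\w.]+$", key):
            flagged.append((lineno, stripped))
    return flagged
```

Running this over the client's spark-defaults.conf (e.g. under /etc/spark/conf — the exact path is an assumption and varies by deployment) should flag the same tokens Spark warned about; deleting those lines, or redeploying the Spark client configuration from Cloudera Manager, would be the corresponding fix.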
Labels: Apache Spark
03-25-2023
09:49 PM
Thank you for your reply @paras. The problem is fixed; it was caused by conflicts related to Kerberos. I enabled Kerberos the first time I created the cluster, but something went wrong, so I deleted the cluster and created a new one. This time I didn't enable Kerberos; however, some Kerberos-related settings from the previous cluster probably remained on my machine and caused the conflict. May I know how to do a hard reset of the entire cluster configuration, and of a particular service? I will probably reuse the instances to build a new cluster. Thank you in advance.
03-23-2023
08:20 PM
@Shelton Thank you for your support. I am using PostgreSQL as the backend database. Here is my database server configuration file; anyone from anywhere should be able to connect to the database, and I confirm that the database user can access the hue database, as shown below. After turning on debug mode for Hue, I get the error message shown: it is about a failure to insert records into a hue database table. I therefore tried inserting the record manually, and the insert succeeds. After refreshing the browser tab, no error pops up, but I still can't access the page. I have included the rungunicornserver.log message and some database configuration settings from the Cloudera console. Besides, I have enabled Kerberos, and the host from which I access the Hue UI is not under the Kerberos realm; however, that host and my cluster are on the same network and can ping each other. Please let me know if I need to provide any additional information. Thank you very much.
03-22-2023
08:24 PM
Here is more information from rungunicornserver.log. Please help me out. Thanks a lot.
03-22-2023
07:44 PM
Here is more information from rungunicornserver.log:

[22/Mar/2023 19:39:50 -0700] middleware INFO Processing exception: syntax error at or near "ON"
LINE 1: ...oups" ("user_id", "group_id") VALUES (1100713, 1) ON CONFLIC...
                                                             ^
Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
psycopg2.errors.SyntaxError: syntax error at or near "ON"
LINE 1: ...oups" ("user_id", "group_id") VALUES (1100713, 1) ON CONFLIC...
                                                             ^

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/local/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/ext-py3/django-axes-5.13.0/axes/decorators.py", line 11, in inner
    return func(request, *args, **kwargs)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/src/desktop/auth/views.py", line 110, in dt_login
    is_first_login_ever = first_login_ever()
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/src/desktop/auth/views.py", line 91, in first_login_ever
    if hasattr(backend, 'is_first_login_ever') and backend.is_first_login_ever():
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/desktop/core/src/desktop/auth/backend.py", line 322, in is_first_login_ever
    return User.objects.exclude(id=install_sample_user().id).count() == 0
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/apps/useradmin/src/useradmin/models.py", line 371, in install_sample_user
    user.groups.add(default_group)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/fields/related_descriptors.py", line 950, in add
    self._add_items(
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/fields/related_descriptors.py", line 1130, in _add_items
    self.through._default_manager.using(db).bulk_create([
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/query.py", line 514, in bulk_create
    returned_columns = self._batched_insert(
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/query.py", line 1293, in _batched_insert
    self._insert(item, fields=fields, using=self.db, ignore_conflicts=ignore_conflicts)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/query.py", line 1270, in _insert
    return query.get_compiler(using=using).execute_sql(returning_fields)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1416, in execute_sql
    cursor.execute(sql, params)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p0.30990532/lib/hue/build/env/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: syntax error at or near "ON"
LINE 1: ...oups" ("user_id", "group_id") VALUES (1100713, 1) ON CONFLIC...
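A hedged note on the error itself (my inference, not something confirmed in the thread): `INSERT ... ON CONFLICT` is PostgreSQL syntax introduced in version 9.5, and Django's `bulk_create(..., ignore_conflicts=True)` path, visible in the traceback via `_batched_insert`, emits it on the postgresql backend; a server older than 9.5 fails with exactly this syntax error, so the server version is worth checking against Cloudera's supported-database matrix for CDP 7.1.8. A tiny sketch of that version check; `supports_on_conflict` is a hypothetical helper and the version-string parsing is simplified:

```python
def supports_on_conflict(server_version):
    """Return True if a PostgreSQL server of this version accepts
    INSERT ... ON CONFLICT (introduced in PostgreSQL 9.5).

    server_version is a dotted string such as '9.4.26' or '10.21',
    e.g. taken from `SELECT version();` output.
    """
    parts = server_version.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    return (major, minor) >= (9, 5)
```

Running `SELECT version();` in psql against the Hue database shows the actual server version; if it is pre-9.5, upgrading PostgreSQL to a release Cloudera supports for CDP 7.1.8 would be the likely fix.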
03-22-2023
02:41 AM
Hi all, I recently installed Hue on my CDP 7.1.8 cluster but am not able to access the UI. Here are the screencaps from trying to access the UI: 1. 2. I have tried a solution found in the Cloudera Community and checked the "Bind Hue Server to Wildcard Address" option; however, the problem isn't solved. Please help me out with this issue. Thanks in advance.
03-20-2023
11:03 PM
1 Kudo
@pkr, thanks a lot; it seems the system can reach the link you provided. Perhaps the documentation is a bit misleading, as I directly copied what is shown on the web page.