Member since
11-04-2015
261
Posts
44
Kudos Received
33
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 9122 | 05-16-2024 03:10 AM |
|  | 4205 | 01-17-2024 01:07 AM |
|  | 3640 | 12-11-2023 02:10 AM |
|  | 7051 | 10-11-2023 08:42 AM |
|  | 4080 | 09-07-2023 01:08 AM |
04-14-2022
09:12 AM
I see. Have you verified that the built jar contains this package structure and these class names? Can you also show where the jar is uploaded and how it is referenced in the Oozie workflow? Thanks, Miklos
04-14-2022
07:42 AM
Hi, I'm doing well, thank you, hope you're good too. That property usually points to a relative path which exists in the process directory: KRB5CCNAME='krb5cc_cldr'. If that's not the case, I would check whether the root user's (or perhaps the "cloudera-scm" user's) .bashrc file has overridden the KRB5CCNAME environment variable.
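A quick way to check is to print the current value and grep the shell init files; the file paths below are typical examples and may differ on your host:

```shell
# Show the ticket cache location the current shell would use.
echo "KRB5CCNAME=${KRB5CCNAME:-<not set>}"

# Look for an override in root's shell init files (example paths).
grep -H 'KRB5CCNAME' /root/.bashrc /root/.bash_profile 2>/dev/null \
  || echo "no KRB5CCNAME override found in the checked files"
```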
04-14-2022
01:45 AM
Hi @yagoaparecidoti , in general, the "supervisor.conf" in the process directory (like the whole process directory) is prepared by Cloudera Manager before starting a process: the CM server sends the complete package of information, including config files, to the CM agent, which extracts it into a new process directory. The supervisor.conf file contains all the environment- and command-related information the Supervisor daemon needs to start the process. Some default values may come from the cluster or from the service type. Do you have a specific question about it?
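For illustration, a process directory's supervisor.conf follows the supervisord INI format; the excerpt below is a simplified, hypothetical example (section names, paths, and values vary by service, role, and version):

```ini
[program:1546-hdfs-DATANODE]
command=/usr/lib64/cmf/service/hdfs/hdfs.sh datanode
directory=/var/run/cloudera-scm-agent/process/1546-hdfs-DATANODE
environment=HADOOP_LOG_DIR='/var/log/hadoop-hdfs',JAVA_HOME='/usr/java/default'
user=hdfs
autorestart=false
```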
04-13-2022
02:36 AM
1 Kudo
Hi @Seaport , the "RegexSerDe" is in the contrib package, which is not officially supported; you can use it in some parts of the platform, but the different components may not give you full support for it. I would recommend preprocessing the datafiles into a commonly consumable format (CSV) before ingesting them into the cluster. Alternatively, you can ingest the data into a table with only a single (string) column, and then process/validate/format/transform it while inserting it into a proper final table with the columns you need. During the insert you can still use "regex" or "substring" type functions / UDFs to extract the fields you need from the fixed-width datafiles (from the table with the single column). I hope this helps, Best regards, Miklos
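As a sketch of the single-column approach (table and column names are made up for illustration), assuming a fixed-width layout where the first 10 characters are an id and the rest is a name:

```sql
-- Stage the raw fixed-width lines as-is, one string column per row.
CREATE TABLE staging_raw (line STRING);

-- Final table with proper columns.
CREATE TABLE final_tbl (id STRING, name STRING);

-- Extract fields by position while inserting (substr is 1-based in Hive).
INSERT INTO TABLE final_tbl
SELECT trim(substr(line, 1, 10)),
       trim(substr(line, 11))
FROM staging_raw;
```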
04-13-2022
02:03 AM
Hi @jarededrake , The "ClassNotFoundException: Class Hortonwork.SparkTutorial.Main not found" suggests that the Java program's main class package name might have a typo in your workflow definition: "Hortonwork" should be "Hortonworks". Can you check that?
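For reference, this is the element in the workflow definition where the corrected class name would go (the surrounding XML is a generic, illustrative skeleton, not your actual workflow):

```xml
<action name="java-action">
    <java>
        <!-- was: Hortonwork.SparkTutorial.Main -->
        <main-class>Hortonworks.SparkTutorial.Main</main-class>
    </java>
    <ok to="end"/>
    <error to="fail"/>
</action>
```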
03-30-2022
04:30 AM
Hello @Jared , The "ClassNotFoundException" means the JVM running the code could not find one of the Java classes the code relies on. It's great that you have added those jars to your IntelliJ development environment, however that does not mean they will be available at runtime.

One option would be to package all the dependencies into your jar, creating a so-called "fat jar". However, that is not recommended: your application would not benefit from future bugfixes deployed in the cluster as it is upgraded/patched, and it risks failing after upgrades due to class conflicts. The best way is to set up the running environment to provide the needed classes.

Hue's Java editor actually creates a one-time Oozie workflow with a single Java action in it, but it does not give you much flexibility in customizing the workflow and the running environment, including which other jars should be shipped with the code. Since your code relies on SparkConf, I assume it is actually a Spark-based application. A better option is to create an Oozie workflow (you can also start from Hue > Apps > Scheduler > change the Documents dropdown to Actions) with a Spark action. That sets up the whole classpath needed for running Spark apps, so you do not need to reference any Spark-related jars, just the jar with your custom code. Hope this helps. Best regards Miklos
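As an illustrative sketch (names, versions, and paths are placeholders), a Spark action in an Oozie workflow looks roughly like this; Oozie then provides the Spark classpath, so only your own jar needs to be referenced:

```xml
<workflow-app name="spark-example" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-node"/>
    <action name="spark-node">
        <spark xmlns="uri:oozie:spark-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn</master>
            <mode>cluster</mode>
            <name>MySparkApp</name>
            <class>com.example.Main</class>
            <jar>${nameNode}/user/${wf:user()}/apps/my-app.jar</jar>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Spark action failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
```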
03-28-2022
02:17 AM
Hello @Sayed016 , In general, the "java.io.IOException: Filesystem closed" message happens when the same or a different thread in the same JVM has called the "FileSystem.close()" method (see the JavaDoc) and something later tries to access the HDFS filesystem; in this case "EventLoggingListener.stop()" tries to access HDFS to flush the Spark event logs. FileSystem.close() should not be called by custom code: there is a single shared instance of the FileSystem object in any given JVM, and closing it can cause failures for still-running frameworks like Spark. This suggests that the Spark application calls FileSystem.close() somewhere in its code. Please review the code and remove those calls. Hope that helps. Best regards, Miklos
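A common way this happens is a try-with-resources block around FileSystem.get(), which auto-closes the JVM-wide shared instance. A sketch of the pattern (assumes the Hadoop client libraries on the classpath, so it is not standalone-runnable):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsUsage {
    // Problematic: try-with-resources calls fs.close() on exit, which closes
    // the cached FileSystem instance shared by the whole JVM (including Spark).
    static void bad(Configuration conf) throws IOException {
        try (FileSystem fs = FileSystem.get(conf)) {
            fs.create(new Path("/tmp/out")).close(); // closing the stream is fine
        } // fs.close() happens here -> later HDFS access: "Filesystem closed"
    }

    // Preferred: use the shared instance and do not close it;
    // the framework manages its lifecycle for the JVM.
    static void good(Configuration conf) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        fs.create(new Path("/tmp/out2")).close();
    }
}
```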
03-25-2022
01:27 AM
Hi Rama, yes, you can configure that in the "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini". OPSAPS-41615 is still open; going forward you can ask for its status from any of your account team contacts. If you don't know who those contacts are, please clarify that through the already open support case. Best regards, Miklos
03-24-2022
01:29 AM
1 Kudo
Hello @ram76 , You can configure Hue to use the XFF header:

```
[desktop]
use_x_forwarded_host=true
```

See the hue.ini reference: https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini If not already done, besides using an external load balancer (like F5, so that end users only need to remember a single Hue login URL), please consider adding the "Hue Load Balancer" role in CM > Hue service (which sets up an Apache httpd) to serve the static content. See the following for more: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/hue_use_add_lb.html#hue_use_add_lb Hope this helps. Best regards, Miklos
03-10-2022
12:42 AM
Hi @M129 , the error message is not very descriptive. Can you please check the HiveMetaStore logs for the complete error message and the reason for the failure? Thanks Miklos