Member since: 09-29-2015
Posts: 5243
Kudos Received: 22
Solutions: 34
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2653 | 07-13-2022 07:05 AM |
|  | 5693 | 08-11-2021 05:29 AM |
|  | 3405 | 07-07-2021 01:04 AM |
|  | 3032 | 07-06-2021 03:24 AM |
|  | 4928 | 06-07-2021 11:12 PM |
08-25-2020
12:45 AM
1 Kudo
Hello @P_Rat98 , thank you for raising the question about the red dot. Please see this thread, which might answer your inquiry. In short, the red dot is a non-breaking space character in the file that some editors render that way. Please let us know if it helped by pressing the "Accept as Solution" button. Should you need further information, please do not hesitate to reach out to the Community. Best regards: Ferenc
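If you want to confirm and clean up such characters programmatically, here is a minimal Python sketch (not a NiFi feature, just an illustration, assuming the file's text is already decoded, e.g. from UTF-8):

```python
# Locate and strip non-breaking spaces (U+00A0), which some editors
# render as a red dot. Purely illustrative helper, not part of NiFi.
def strip_nbsp(text: str) -> str:
    """Replace every non-breaking space with a regular space."""
    return text.replace("\u00a0", " ")

sample = "value1\u00a0value2"
print(sample.count("\u00a0"))      # 1  -> one NBSP present
print("\u00a0" in strip_nbsp(sample))  # False -> cleaned
```

The same one-liner works on a whole file read with `open(path, encoding="utf-8").read()`.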
08-03-2020
07:51 AM
Hello @MeenaK , thank you for reaching out. This community thread mentions that "The ConvertJsonToAvro processor was removed from the default NiFi distribution bundle because of space limitations as of the Apache NiFi 1.10 release." This thread points to a repository of the Apache NiFi project. Hope it helps! Kind regards: Ferenc
07-31-2020
01:58 AM
Hello @SAMSAL , thanks for your reply and the additional information. What I would personally do is keep collecting the data, note down the encodings detected, and look for a pattern to research further. Since the issue is not caused by any Cloudera product defect, we've reached the limit of what assistance can be provided via Community support. This thread will remain open if other community peers want to contribute. If you're a Cloudera Subscription Support customer, we can connect you with your Account team to explore a possible Services engagement for this request. Let us know if you're interested in this path; we'll private message you to collect more information. Thank you: Ferenc
07-30-2020
01:36 AM
Hello @AdityaShaw , sorry, I missed addressing your enquiry about the server 500 error. Based on the KB article you referenced, it is related to that bug. Do you still see the server 500 error once hive.server2.thrift.http.cookie.auth.enabled is set to false? Should you experience disconnects even after applying the workaround, you may have more concurrent HTTPClient connections than Knox can handle at a time. Please change the parameters listed below only after you have confirmed that the above workaround was not sufficient. Based on my research, the values below are safe to apply, but before applying any changes in production, please test them in a non-prod environment, as we are not familiar with your use cases or how your cluster/workload is designed:
gateway.httpclient.connectionTimeout=600000 (10 min)
gateway.httpclient.socketTimeout=600000 (10 min)
gateway.metrics.enabled=false
gateway.jmx.metrics.reporting.enabled=false
gateway.httpclient.maxConnections=128
Kind regards: Ferenc
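For reference, Knox reads settings like these from its gateway-site.xml (the exact path depends on your deployment; conf/gateway-site.xml in a standalone install). A sketch of two of the overrides in that file's property format, values taken from the list above:

```xml
<!-- Sketch only: apply after confirming the cookie-auth workaround
     alone was not sufficient, and test in non-prod first. -->
<property>
  <name>gateway.httpclient.maxConnections</name>
  <value>128</value>
</property>
<property>
  <name>gateway.httpclient.connectionTimeout</name>
  <value>600000</value>
</property>
```

A restart of the Knox gateway is typically needed for such changes to take effect.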
07-30-2020
01:14 AM
Hello @SAMSAL , thank you for reaching out with your issue about parsing a CSV with the SplitRecord processor. The described behaviour sounds like an encoding issue. Have you tried to identify the encoding of your source data? I quickly googled "identify encoding" and found this tool, which might work (I have not tested it, so feel free to explore the topic further). Once you know the character encoding of the source data, please set the "Character Set" property in NiFi accordingly. Please let us know if it helped by pressing the "Accept as Solution" button. Kind regards: Ferenc
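As a rough alternative to an online tool, you can trial-decode the raw bytes against a few candidate encodings with standard-library Python. This is only a sketch; dedicated detection libraries (e.g. chardet) are more robust:

```python
def guess_encoding(data: bytes, candidates=("utf-8", "utf-16", "latin-1")):
    """Return the first candidate encoding that decodes the bytes cleanly.

    Note: latin-1 decodes any byte sequence, so it only makes sense
    as the last candidate (a catch-all fallback).
    """
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

print(guess_encoding("héllo".encode("utf-8")))   # utf-8
print(guess_encoding("héllo".encode("utf-16")))  # utf-16
```

Read the file with `open(path, "rb").read()` and pass the bytes in; the returned name is what you would set as NiFi's "Character Set".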
07-29-2020
04:49 AM
Hello @AdityaShaw , thank you for inquiring about the performance impact of disabling hive.server2.thrift.http.cookie.auth.enabled. I have done some research in our internal Jira and support cases; however, I have not seen any performance issue reported. Please note that this bug is now fixed as part of HIVE-22841 and will be available in a future HDP release. Thank you: Ferenc
07-15-2020
06:17 AM
Hello @Raj78 , thank you for enquiring about how to set up Livy against CDH. Please note that Livy is supported on CDP; however, it is not supported on CDH 6. The main reason a product is not supported on a certain version is that it is not production-ready for that release. Please evaluate our CDP product. Please find here the documentation for configuring Livy ACLs on CDP. Please let us know if you need more information on this topic. Best regards: Ferenc
07-15-2020
12:32 AM
Hello @davidla , thank you for watching our Cloudera OnDemand materials. Currently we do not provide the option of downloading the videos. Best regards: Ferenc
07-14-2020
06:42 AM
1 Kudo
Hello @ashish_inamdar , based on the stack trace you shared with us:

Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: relation "metainfo" does not exist
Error Code: 0
Call: SELECT "metainfo_key", "metainfo_value" FROM metainfo WHERE ("metainfo_key" = ?)
bind => [1 parameter bound]
Query: ReadObjectQuery(name="readMetainfoEntity" referenceClass=MetainfoEntity sql="SELECT "metainfo_key", "metainfo_value" FROM metainfo WHERE ("metainfo_key" = ?)")

the "metainfo" table does not exist in the database. Did you follow the installation steps? I've found the same issue was resolved in this thread. Quoting: "I used yum to uninstall ambari and associated packages, including postgres. Then I re-installed Ambari using yum and ran ambari-server install." Please let us know if it helped! Kind regards: Ferenc
06-25-2020
06:03 AM
Hello @NumeroUnoNU , I've run the "alternatives --list" command on a cluster node and noticed that there is a "hadoop-conf" item, which points to a directory containing hdfs-site.xml. You can also discover it via "/usr/sbin/alternatives --display hadoop-conf". This led me to google "/var/lib/alternatives/hadoop-conf", and I found this Community Article reply, which I believe answers your question. In short, if you have e.g. gateway roles deployed for HDFS on a node, you will find the up-to-date hdfs-site.xml in the /etc/hadoop/conf folder... We have diverged a little from the original topic in this thread. To make the conversation easier to read for future visitors, would you mind opening a new thread for each major topic? Please let us know if the above information helped you by pressing the "Accept as Solution" button. Best regards: Ferenc
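Once you have located the active hdfs-site.xml, a small standard-library Python sketch can pull out any property value from a Hadoop-style config file (the dfs.replication property below is just an illustrative example):

```python
import xml.etree.ElementTree as ET

def read_hadoop_property(xml_text: str, name: str):
    """Return the <value> of the named <property> in a Hadoop-style
    configuration file, or None if the property is absent."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# Tiny inline sample standing in for /etc/hadoop/conf/hdfs-site.xml
sample = """<configuration>
  <property><name>dfs.replication</name><value>3</value></property>
</configuration>"""
print(read_hadoop_property(sample, "dfs.replication"))  # 3
```

Against a real node you would read the file first, e.g. `read_hadoop_property(open("/etc/hadoop/conf/hdfs-site.xml").read(), "dfs.replication")`.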