Member since
09-29-2015
4612
Posts
21
Kudos Received
33
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 752 | 08-11-2021 05:29 AM
 | 587 | 07-07-2021 01:04 AM
 | 349 | 07-06-2021 03:24 AM
 | 983 | 06-07-2021 11:12 PM
 | 316 | 06-02-2021 04:06 AM
05-27-2020
05:46 AM
Hello @Paul Yang , thank you for showing interest in downloading the free trial for CDF. The download page can be found under [1], and an introduction to the CDF product under [2]. To download the product you will need to register on our website. A pop-up will prompt you to register or to log in when you select the product to be downloaded, so you do not need to worry about where to register. Please let us know if this is what you were looking for! Thank you: Ferenc [1] https://www.cloudera.com/downloads/cdf.html [2] https://www.cloudera.com/products/cdf.html
05-25-2020
09:35 AM
Hello @BSST , it is great to hear that you can now add hosts on CDP without issue, and thank you for letting us know what you did to overcome the problem. It will help other Community Members follow your solution in a similar situation. Regarding the "Error 404 command has no file attached" message: - first of all, thank you for reporting it - can you please share the steps to reproduce this message? Does it occur when you change the scope, by any chance? Do you get the expected results apart from the error message displayed? Kind regards: Ferenc
05-25-2020
09:23 AM
Hello @Johnny_Bach , thank you for letting us know that you had issues deploying client configurations on CDP. I have researched this topic and there is a good chance the problem lies with the alternatives. For your convenience, I am pasting Ben's solution [1] to this problem here: Try this on your nodes: /usr/sbin/alternatives --display hadoop-conf If that command does not return valid alternatives, you may be missing the following file: /var/lib/alternatives/hadoop-conf If you can find a host with the file, copy it from that host to the host or hosts where it is missing. Please let us know if it resolved your issue. Kind regards: Ferenc [1] https://community.cloudera.com/t5/Support-Questions/Cloudera-manger-HBase-Deploy-Client-Configuration-fails/m-p/45233/highlight/true#M38481
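As a runnable illustration of the copy-the-missing-file fix above, here is a small simulation; the temporary directories stand in for the healthy and broken hosts, and the real file on a cluster node is /var/lib/alternatives/hadoop-conf:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for two hosts: one still has the alternatives record, one lost it.
healthy_host = Path(tempfile.mkdtemp())
broken_host = Path(tempfile.mkdtemp())
(healthy_host / "hadoop-conf").write_text("hadoop-conf alternatives record\n")

# The repair logic: if the record is missing, copy it from a healthy host.
target = broken_host / "hadoop-conf"
if not target.exists():
    shutil.copy(healthy_host / "hadoop-conf", target)

print(target.read_text().strip())
```

On real hosts the copy step would be an `scp` between machines rather than a local `shutil.copy`, but the check-then-restore logic is the same.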
05-20-2020
06:44 AM
Hello @BSST , can you please check if you get an Error 113 ('No route to host') in /var/log/cloudera-scm-agent/cloudera-scm-agent.log and follow the instructions under [1] to resolve the issue? Furthermore, please make sure you follow the Network and Security Requirements under [2]. I have checked internally and the section under "CDH and Cloudera Manager Networking and Security Requirements" applies to CDP; however, please ignore the "Users and Groups" table, as these will be taken care of by the automated installation. Just make sure the created users and groups are not modified or removed by anyone or any automation. The doc that will cover this section for CDP is on its way to publication (thanks to your input on this missing piece!). To answer your enquiry about what CM HA means: Cloudera Manager High Availability. Thank you: Ferenc [1] https://docs.cloudera.com/cdpdc/7.0/installation/topics/cdpdc-troubleshooting-installation.html [2] https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_network_and_security_requirements.html
05-19-2020
05:37 AM
Hello @BSST , thank you for raising the question regarding the “Null value(s) passed to lookup by non-nullable natural-id” exception shown once the CM Agents reported being installed successfully. There is a known issue reported for the trial version in CM HA mode where the agent is reported as installed prematurely. This issue is not present in a non-HA environment. My understanding is that the agent would be installed after all. Can you please confirm that you managed to install the agents successfully despite the odd message, and whether you are running in CM HA mode? We recommend using the trial version in non-HA mode. Thank you: Ferenc
05-18-2020
01:03 AM
Hello @abhinav_joshi , thank you for reaching out to us with the enquiry on how to file a Support Case with Cloudera. With an Enterprise Subscription [1] you can file an online case after registering on the Support Portal [2]. For the most up-to-date information about the Enterprise Subscription, please contact Sales [3]. Should you have no Enterprise Subscription, we encourage you to keep interacting with our Community to find and provide solutions to members. Please let us know if your enquiry has been addressed. Thank you: Ferenc [1] https://my.cloudera.com/faq.html#support [2] https://sso.cloudera.com/ [3] https://www.cloudera.com/contact-sales.html
05-18-2020
12:36 AM
Hello @BSST , thank you for updating us that cleaning the local client cache for the yum repositories did not resolve your issue connecting to the Red Hat CDN. Did you go through the "Known Issues" section in the Red Hat KB [1]? If the instructions under the "Known Issues" section do not help, please file a Support Ticket with Red Hat for further troubleshooting. Please let us know once you overcome the 404 issue, and also if you hit a problem again! Best regards: Ferenc [1] https://access.redhat.com/articles/1320623
05-15-2020
09:35 AM
Hello @yramesh , thank you for raising your issue about hitting a 401 exception. It indicates an authentication issue: when you navigate in a browser to the link from which you wanted to download the repodata, it prompts for a username and password, which means only subscription customers can access this file. For C7 there are files in two different repos: one for paying customers and one for trial users. https://archive.cloudera.com/cm7/7.0.3/redhat7/yum/cloudera-manager-trial.repo is the free trial .repo file to download. Subscription customers can generate authentication credentials from the license key file with the following steps [1]: From cloudera.com, log into the cloudera.com account associated with the CDP Data Center license and subscription agreement. On the CDP Data Center Download page, click Download Now and scroll down to the Credential Generator. In the Generate Credentials text box, copy and paste the text of the “PGP Signed Message” within the license key file and click Get Credentials. The credentials generator returns the username and password. Please let me know if you hit any different issue. Best regards: Ferenc [1] https://docs.cloudera.com/cdpdc/7.0/installation/topics/cdpdc-cm-download-information.html
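For orientation, a yum .repo file for a credential-protected repository typically embeds the generated username and password. The fragment below is illustrative only: the real file comes from the Cloudera download page per the docs linked above, and USERNAME, PASSWORD, and the exact baseurl path are placeholders, not values from this thread.

```ini
# Illustrative cloudera-manager.repo sketch (all values are placeholders)
[cloudera-manager]
name = Cloudera Manager
baseurl = https://USERNAME:PASSWORD@archive.cloudera.com/p/cm7/7.0.3/redhat7/yum/
gpgcheck = 1
enabled = 1
```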
05-15-2020
07:21 AM
Hello @BSST , thank you for letting us know that the Red Hat repo cannot be accessed. Please follow [1], and if it does not resolve the repository issue, kindly reach out to Red Hat Support for further troubleshooting. Please let us know if you managed to overcome this obstacle. Kind regards: Ferenc [1] https://access.redhat.com/articles/1320623
05-14-2020
05:49 AM
Hello @galzoran , thank you for raising the question about how to do a rolling upgrade from CDH5.13.1 to CDH6.0.1 and still use Phoenix. The documentation describes that Phoenix requires Cloudera Manager 6.2 or later [1], and [2] describes that you need CDH6.2 to run Phoenix. [3] highlights that a rolling upgrade is only possible when upgrading to a minor version; an upgrade from CDH5 to CDH6 is a major version change and needs a full cluster restart. Please let me know if the above answers your enquiry. Thank you: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/phoenix_installation.html [2] https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/phoenix_prerequisites.html [3] https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_cdh_upgrade.html
05-13-2020
01:55 AM
1 Kudo
Hello @BSST , this page details the memory and storage recommendations per host: https://docs.cloudera.com/cdpdc/7.0/release-guide/topics/cdpdc-hardware-requirements.html Is this what you were looking for? Thank you: Ferenc
05-12-2020
06:18 AM
Hello @BSST , thank you for reaching out and raising this concern about the missing link to the Runtime and Cloudera Manager Networking and Security Requirements. I have found the referenced link [2] for an earlier Cloudera Manager version (CM6.2) [1]. Let me action this internally and update this thread once we have fixed the issue with the missing documentation for the CDP version. Thank you again! Kind regards: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/cm_ig_non_production.html [2] https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_network_and_security_requirements.html#cdh_cm_network_security
05-06-2020
07:57 AM
1 Kudo
Hello @matagyula , thank you for your feedback on the proposed actions and for accepting the reply as the solution! It will help Community Members facing similar issues find the answer faster. Best regards: Ferenc
05-06-2020
02:02 AM
1 Kudo
Hello @matagyula , thank you for sharing with us the exceptions you are getting after enabling "Kerberos Authentication for HTTP Web-Consoles" for YARN. You will need to configure SPNEGO [1] and enable authentication for HDFS too [2] to overcome the issues described. Please let us know if the proposed changes resolved your issue! Thank you: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/latest/topics/cdh_sg_browser_access_kerberos_protected_url.html [2] CM -> HDFS service -> search for and enable "Enable Kerberos Authentication for HTTP Web-Consoles", deploy the client configuration, then restart the HDFS and YARN services
05-04-2020
02:27 AM
Hello @Sergiete , thank you for bringing to our attention an improvement point in the Cloudera CDP documentation. We have added clarification in our CDP documentation about "Installing Hive on Tez" [1], noting that the Spark execution engine is not supported, as it has been replaced by Tez [2]. Based on our updated documentation [1], the correct order of installing the Hive service is: first install the Hive service, designated Hive on Tez in CDP (HiveServer is installed automatically during this process), then install HMS, which is designated Hive. Best regards: Ferenc [1] https://docs.cloudera.com/runtime/7.0.3/hive-introduction/topics/hive_installing_on_tez.html [2] https://docs.cloudera.com/runtime/7.0.3/hive-introduction/topics/hive-unsupported.html
04-21-2020
11:46 AM
1 Kudo
Hello @sh1vam , thank you for reporting that your HiveServer2 is not coming up and is throwing the below exception: exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:578) ... Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:226) registerAllFunctionsOnce ... Caused by: javax.jdo.JDOUserException: ... org.datanucleus.store.query.QueryNotUniqueException: The query returned more than one instance BUT either unique is set to true or only aggregates are to be returned, so should have returned one result maximum at ... at org.apache.hadoop.hive.metastore.ObjectStore.getMRole(ObjectStore.java:4091) I found that your issue is resolved now. Therefore I would like to add here a strategy that might help someone troubleshoot similar issues: Start reading the stacktrace bottom-up, i.e. read the last "Caused by:" section first. We can see "QueryNotUniqueException". Now read the lines after the "Caused by:" to see if you can find some meaningful class name coming from Hive. The first one is: org.apache.hadoop.hive.metastore.ObjectStore.getMRole(ObjectStore.java:4091) Google "org.apache.hadoop.hive.metastore.ObjectStore.getMRole" for the source code. The very first match was OK for our purpose. The number in brackets is the line number to look up. Since we do not know the exact version you are using, we just search for the term getMRole in the hope of getting a clue about what it does. Based on the context, it appears to be a Metastore Role that we are trying to fetch.
So far we know that a metastore role is not unique. Reading further up the "Caused by:" sections, we find that this caused the SessionHiveMetaStoreClient instantiation to fail. Based on your report, you found that the metastore.ROLES table had two admin roles and that this was causing the issue, and you used the below commands for your troubleshooting: mysql> select * from metastore.ROLES; You identified that the role with id 3 was not required, hence deleted it: mysql> delete from metastore.ROLES where ROLE_ID=3; In a production environment, please do not forget to create a backup or have a means of recovery before issuing a delete command. Kind regards: Ferenc
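To make the role-cleanup reasoning concrete, here is a small simulation using Python's built-in SQLite in place of the MySQL metastore; the table and column names mirror the post, but this is not the full metastore schema, and the duplicate data is fabricated for illustration:

```python
import sqlite3

# In-memory stand-in for the metastore.ROLES table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ROLES (ROLE_ID INTEGER PRIMARY KEY, ROLE_NAME TEXT)")
conn.executemany(
    "INSERT INTO ROLES VALUES (?, ?)",
    [(1, "admin"), (2, "public"), (3, "admin")],  # ROLE_ID 3 duplicates 'admin'
)

# Find role names that appear more than once -- the non-unique natural id.
dupes = conn.execute(
    "SELECT ROLE_NAME, COUNT(*) FROM ROLES GROUP BY ROLE_NAME HAVING COUNT(*) > 1"
).fetchall()
print(dupes)  # [('admin', 2)]

# The fix from the post: remove the redundant row (back up first in production!).
conn.execute("DELETE FROM ROLES WHERE ROLE_ID = 3")
```

Running the duplicate-detection query before and after the delete is a quick way to confirm the cleanup worked without eyeballing the whole table.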
04-21-2020
11:26 AM
Hello @SwasBigData , just wanted to check with you whether you tried the below: remove HiveServer2, then install the Hive service first (without choosing HS2), and then install Hive on Tez. Did it work? Thank you for pointing out the lack of documentation; I am following up on this internally. Kind regards: Ferenc
04-21-2020
07:07 AM
1 Kudo
Hello @sinhapiyush86 , thank you for raising the question about getting the Unsupported JDBC protocol: 'null' exception in PySpark. Please make sure you have initialised HWC in the session; otherwise you will get the below exception: java.lang.RuntimeException: java.lang.IllegalArgumentException: Unsupported JDBC protocol: 'null' You can initialise HWC with the below code segment [1]: from pyspark_llap import HiveWarehouseSession hive = HiveWarehouseSession.session(spark).build() Please let us know if it resolved your issue. Thank you: Ferenc [1] https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/integrating-hive/content/hive_hivewarehousesession_api_operations.html
04-21-2020
06:59 AM
Hello @SwasBigData , thank you for raising the question about why HiveServer2 is failing to start in CDP. This issue occurs because HiveServer2 has been moved outside of the Hive service in CDP. To resolve it, remove HiveServer2 from the Hive service and install the Hive on Tez service. Tez is the only engine supported by Hive in a CDP environment. Please let us know if we answered the enquiry by accepting the answer as a solution. Best regards: Ferenc
04-21-2020
06:47 AM
Hello @gtmanoj23051988 , thank you for the detailed issue description. It seems similar to what was described in this thread, hence I am summarising the solution here for your convenience: the Impala/Hive driver tries to transform queries that are already in Impala/Hive native form when UseNativeQuery is set to zero or not set at all, causing the syntax error. You can overcome this exception by adding UseNativeQuery=1 to the JDBC connection string. Kind regards: Ferenc
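A tiny sketch of what the fix looks like in practice; the host, port, and database below are placeholders for illustration, not values from this thread:

```python
# JDBC driver properties are appended to the URL as ;key=value pairs.
base_url = "jdbc:impala://impala-host.example.com:21050/default"

# Tell the driver the query is already in native form, so it skips the
# transformation step that was producing the syntax error.
connection_url = base_url + ";UseNativeQuery=1"

print(connection_url)
# jdbc:impala://impala-host.example.com:21050/default;UseNativeQuery=1
```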
04-20-2020
07:37 AM
1 Kudo
Hello @amol_08 , thank you for raising your question about why a Hive select query with a limit fails while the same query without a limit does not. Can you please specify the Hadoop distribution and version you are using (e.g. CDH5.16, HDP3.1), and the platform (e.g. Hive, HiveServer2, Hive LLAP)? I am asking these clarification questions to rule out any known issue you might be hitting. For this general problem statement, I would like to draw your attention to our Cloudera documentation [1], which describes how the same type of query, "SELECT * FROM <table_name> LIMIT 10;", causes all partitions of the target table to be loaded into memory if the table is partitioned, resulting in memory pressure, and how to tackle this issue. Please let us know if the referenced documentation addresses your enquiry by accepting this post as a solution. Thank you: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/latest/topics/admin_hive_tuning.html#hs2_identify_workload_characteristics
04-20-2020
06:59 AM
Hello @gmalafsky , thank you for raising this question about how to configure the Impala JDBC driver from a Windows machine. Although the original question was raised some time ago, I would like to update this thread with the latest information. For the latest Impala JDBC driver release, the installation guide can be found under [1] in PDF format. Page 8 describes that "Before you use the Cloudera JDBC Driver for Impala, the JDBC application or Java code that you are using to connect to your data must be able to access the driver JAR files. In the application or code, specify all the JAR files that you extracted from the ZIP archive." For Java 7, please follow the guide under [2] to configure the classpath correctly. For detailed instructions, please follow the section "Installing and Using the Cloudera JDBC Driver for Impala" in [1]. For release notes, please navigate to [3]. Please let us know if any additional information is required for this thread to be marked as solved. Kind regards: Ferenc [1] https://docs.cloudera.com/documentation/other/connectors/impala-jdbc/latest/Cloudera-JDBC-Driver-for-Impala-Install-Guide.pdf [2] http://docs.oracle.com/javase/7/docs/technotes/tools/windows/classpath.html [3] https://docs.cloudera.com/documentation/other/connectors/impala-jdbc/latest.html
04-15-2020
05:46 AM
1 Kudo
Hello @mohanrajank , thank you for your feedback that you are facing a different Oozie issue after increasing the heap for the service. That means your original issue (low heap) is fixed now; it is not unusual to hit multiple issues, and we need to resolve each obstacle one by one. I understand that a simple agent restart did not help. Sometimes you need to issue a "hard restart" of the agent. Be careful: it kills all the CM/CDH processes on the node, so please do not use it while a business-critical process is running. Please follow the instructions in [1] on how to proceed with the hard stop and start. We have researched the issue with @lwang and the public hostname will be picked up from the below script, which is generated by the agent: ## /opt/cloudera/cm-agent/service/oozie/oozie.sh export OOZIE_HTTP_HOSTNAME=$oozie_http_hostname We believe that the agent (hard) restart should help, because the above script is created by the agent itself. Should the hard restart not fix your issue, we will check the forward and reverse DNS resolution for the hostname displayed in the exception to rule out DNS misconfiguration. Please let us know your findings! Thank you: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/cm_ag_agents.html#cmug_topic_14_4__section_kmt_zxs_v4
04-14-2020
02:12 AM
Hello @Rups , thank you for marking my earlier post, which suggested setting the "SSL Context Service" parameter, as the solution. It is motivating to know when a suggestion actually helps. 🙂 Do I understand correctly that your workflow is now operating as expected? Thanks again: Ferenc
04-07-2020
01:49 AM
Hello @mahfooz , thank you for raising your question about Hive's ACID support on CDH5.16. Based on [1], no, Hive on CDH5.16 does not support ACID transactions. Cloudera recommends using the Parquet file format, which works across many tools. Kind regards: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/hive_ingesting_and_querying_data.html#hive_transaction_support
04-07-2020
01:24 AM
@Rups can you please check whether you provided all the required parameters from [1], especially the "SSL Context Service" parameter. [1] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.InvokeHTTP/
04-07-2020
01:17 AM
Hello @Rups , thank you for asking this question about NiFi InvokeHTTP.setUpClient throwing an IndexOutOfBoundsException. Can you please specify your exact NiFi version, so I can look up the matching source code? If the source code of your NiFi version is the same as the code I found, the exception is coming from this line: https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java#L665 My first interpretation is that the issue might be related to SSL. Is SSL fully enabled and configured in your environment? Thank you: Ferenc
04-07-2020
12:54 AM
1 Kudo
Hello @mohanrajank, based on the stderr message you posted originally, I've noticed that the Oozie server got a maximum of about 52 MB of heap assigned to it: exec /usr/lib/jvm/java-8-oracle-cloudera/bin/java -Xms52428800 -Xmx52428800 That is too low to start the service, in my opinion. Please set it to e.g. 1 GB in CM -> Oozie -> Configuration -> search for "Java Heap Size of Oozie Server in Bytes" and attempt to start the service again. Please let us know if it resolved your issue by clicking "Accept as Solution". Should you need further troubleshooting tips, please do not hesitate to let us know. Thank you: Ferenc
04-06-2020
09:42 AM
1 Kudo
Hello @BVNP , thank you for raising the question about how to check if CM is running on a host. Do I understand correctly that you can ssh into the host where CM is deployed? You can use, for example, the following commands to verify that CM is up and running: - "cat /var/run/cloudera-scm-server.pid" and then use the ps command to see if the process with that PID is running - CURL the CM Web UI with "curl -k https://<cm-host>:7183/cmf/login". If TLS/SSL is not enabled for CM, you can use "curl -k http://<cm-host>:7180/cmf/login". This should return the HTML for the CM login page. - tail the CM Server log with "tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log" Please let us know if you need further troubleshooting tips. Thank you: Ferenc
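The PID-file check can be sketched as follows. To keep it runnable anywhere, the sketch writes this process's own PID to a temporary file; on a real CM host you would read /var/run/cloudera-scm-server.pid instead:

```python
import os
import tempfile

# Stand-in for /var/run/cloudera-scm-server.pid, populated with our own PID.
with tempfile.NamedTemporaryFile("w", suffix=".pid", delete=False) as f:
    f.write(str(os.getpid()))
    pidfile = f.name

with open(pidfile) as f:
    pid = int(f.read().strip())

try:
    os.kill(pid, 0)  # signal 0 checks process existence without sending anything
    status = "running"
except ProcessLookupError:
    status = "not running"

print(status)
```

This is the programmatic equivalent of the `cat` + `ps` pair from the post: read the PID, then ask the kernel whether that process still exists.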
04-03-2020
02:15 AM
1 Kudo
Based on the Cloudera documentation [1], you need parcels for rolling upgrades. A rolling upgrade means less downtime and disruption in your cluster during the upgrade process. You can migrate a package installation to a parcel-based installation by following the instructions under [2]. For further information about upgrading CDH, please follow the instructions under [3]. [1] https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_ig_managing_software.html [2] https://docs.cloudera.com/documentation/enterprise/latest/topics/cm_ig_migrating_packages_to_parcels.html [3] https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_cdh_upgrade.html