Member since 08-16-2019
41 Posts
1 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 711 | 06-03-2020 07:21 PM
11-06-2020
08:55 AM
Hi @TimothySpann, could you please help here?
11-06-2020
07:00 AM
Hi all, I am running Spark (Livy) from NiFi, currently on HDP 3.0/HDF 3.1, the CDA version. The NiFi flow completes successfully, but several Livy sessions keep executing in YARN. I have tried to kill some of them, but they reappear in YARN. This is causing an issue: the Livy sessions consume CPU even though I am not running any Spark application through Livy/NiFi. I am not sure how to resolve this and need your help here.
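For reference, a minimal sketch of how I have been inspecting and trying to close the lingering sessions; the Livy host is a placeholder, and 8999 is Livy's default port (an assumption for this cluster):

```
# list running YARN applications and look for livy sessions
yarn application -list -appStates RUNNING | grep -i livy

# killing the YARN application alone did not help (the sessions reappear);
# closing the session through Livy's REST API should also stop its YARN application
curl -X DELETE http://<livy-host>:8999/sessions/<session-id>
```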
Labels:
- Apache NiFi
- Apache Spark
- Apache YARN
06-03-2020
07:21 PM
There was an access issue on the files event-processor.log and das-webapp.log. Granting access to those files resolved the DAS WebUI issue.
06-03-2020
06:40 PM
Hi, there was an issue with replication of Hive metadata to DAS. I tried to resolve it by resetting the tables, following "https://docs.cloudera.com/HDPDocuments/DAS/DAS-1.2.1/troubleshooting/content/das_replication_failure_in_event_processor.html". After executing the SQL commands successfully, I tried to start DAS: "Data Analytics Studio Event Processor" started successfully and is running, but "Data Analytics Studio Webapp" started successfully and is not running.

Configuration details: HDP 3.0.1, DAS 1.0.2.0.0.

Steps I used:
1) Stopped DAS from Ambari.
2) Took a backup of the /var/log/das log files and created empty event-processor.log and das-webapp.log files.
3) Executed the reset-tables SQL commands against the das database in Postgres.
4) Started DAS from Ambari.

Other observations:
1) After starting DAS, nothing is written to event-processor.log or das-webapp.log.
2) But I can see das.db_replication_info being refreshed, with id = 2, database_name = *, and valid values for last_replication_id, last_replication_start_time, last_replication_end_time, and next_replication_start_time.

Could you please help me fix the replication issue in DAS? Are there separate instructions for DAS 1.0.2.0.0, since I referred to the instructions for fixing the replication issue in DAS 1.2.1? Let me know if any other information is needed. Thanks!
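For anyone checking the same thing, a minimal sketch of how the replication state mentioned above can be inspected in Postgres; the connection user and exact schema qualification are assumptions, the table name is the one from my observations:

```
# hedged: inspect the DAS replication bookkeeping table in Postgres
psql -U das -d das -c "SELECT * FROM das.db_replication_info;"
```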
05-31-2020
09:18 AM
1 Kudo
Hi @Shelton, did you get a chance to look into this issue? I need help on this. Thanks
03-19-2020
08:35 PM
Hi @Shelton, the reason for this issue is described at https://docs.cloudera.com/HDPDocuments/DAS/DAS-1.4.4/troubleshooting/content/das_replication_failure_in_event_processor.html. I validated it at my end: /var/log/das/event-processor.log shows "Notification events are missing in the meta store" and also shows that replication is unsuccessful. I tried to follow the instructions in that link and submit the command curl -H 'X-Requested-By: das' -H 'Cookie: JSESSIONID=<session id cookie>' 'http(s)://<hostname>:<port>/api/replicationDump/reset' (please refer to the attached screen print). The command did not work; it returned {"code":404,"message":"HTTP 404 Not Found"}. Could you please help here?
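The same call, reformatted for readability; all placeholders are left exactly as in the DAS documentation, nothing is filled in:

```
curl -H 'X-Requested-By: das' \
     -H 'Cookie: JSESSIONID=<session id cookie>' \
     'http(s)://<hostname>:<port>/api/replicationDump/reset'
```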
12-27-2019
06:12 AM
Hi @TimothySpann, could you please let me know if you were able to resolve this issue? I am facing a similar one:

```
Traceback (most recent call last):
  File "testhive.py", line 1, in <module>
    from pyhive import hive
  File "/usr/local/lib/python2.7/site-packages/pyhive/hive.py", line 10, in <module>
    from TCLIService import TCLIService
  File "/usr/local/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 9, in <module>
    from thrift.Thrift import TType, TMessageType, TFrozenDict, TException, TApplicationException
ImportError: No module named thrift.Thrift
```

Any ideas?
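In case it helps others, the usual cause of this ImportError is that the thrift package is not installed in the Python environment; a hedged sketch of the standard fix:

```
# assumption: the target is the system Python 2.7 shown in the traceback
pip install thrift
# PyHive's Hive support typically also needs these two packages
pip install sasl thrift-sasl
```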
11-25-2019
10:44 AM
@KWiseman Agreed; I had no option other than to re-install HDP.
11-09-2019
10:48 PM
Hi Cloudera team, following up on this issue.
10-07-2019
02:01 PM
I got the solution: I should send messages to sandbox-hdf.hortonworks.com and view those messages in HDF rather than HDP.
10-07-2019
01:02 PM
Hi @ManuelCalvo, yes, it is connected.
10-07-2019
11:55 AM
Hi Cloudera team, I am getting an issue while sending messages to a Kafka topic from outside the sandbox, but the same application inside the sandbox is able to send messages to the topic. Below are the screen prints of both cases. In the first case I try to send a test message to my Kafka topic; the application runs without any error, but no messages show up on the topic at the cluster. In the second case, when I ran the same application inside the sandbox, the messages were sent and displayed in the Kafka topic. Could you please advise what I am doing wrong (or what extra I have to do) to make the first case succeed?
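For reference, a minimal way to test producing from outside the sandbox with the console producer; the broker host and port are assumptions (6667 is the default Kafka port on HDP/HDF sandboxes), and the topic name is a placeholder:

```
# run from the external machine; if no messages arrive, the broker's advertised
# listener may not be resolvable/reachable from outside the sandbox
kafka-console-producer.sh --broker-list sandbox-hdf.hortonworks.com:6667 --topic <topic>
```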
Labels:
- Apache Kafka
10-06-2019
09:30 AM
I was able to use my sandbox/shell-in-a-box from another PC by configuring the sandbox in bridged adapter mode.
10-03-2019
06:15 AM
Thanks @kwabstian53 for the reply. In my case the target database will be Hive; could you please also tell me the driver name and point me to documentation for setting up that driver in HDP? I've referred to "http://www.sql-workbench.net/manual/jdbc-setup.html" but am not sure which driver to use.
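In case it helps, a sketch of the usual Hive JDBC setup for SQL Workbench; the host and port are assumptions (10000 is the HiveServer2 default), and the jar version is a placeholder:

```
Driver class: org.apache.hive.jdbc.HiveDriver
Driver jar:   hive-jdbc-<version>-standalone.jar (on HDP nodes, typically under /usr/hdp/current/hive-client/jdbc/)
JDBC URL:     jdbc:hive2://<host>:10000/default
```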
10-03-2019
05:05 AM
Hi,
I am working on a project where we need to put data into Hive using Sqoop in HDP 3.0. The data is currently in MySQL (accessed via MySQL Workbench), which is running outside of HDP 3.0.
Could you please point me to the steps/tutorial/example so that I can achieve this? A sketch of what I have in mind is below.
Please note: I have not done any setup of Sqoop in HDP 3.0 and am not sure whether I need to install Sqoop in HDP 3.0.
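A hypothetical sketch of the kind of Sqoop import I mean; every host, database, table, and user name below is a placeholder, and the MySQL JDBC driver jar would need to be available to Sqoop:

```
sqoop import \
  --connect jdbc:mysql://<mysql-host>:3306/<db> \
  --username <user> -P \
  --table <table> \
  --hive-import --create-hive-table \
  --hive-table <hive_db>.<table>
```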
Labels:
- Apache Sqoop
09-13-2019
06:45 AM
Hi @dstreev, I want to use an HttpFS node in HDP 3.0 to access HDFS. Could you please let me know whether there is an existing URL, or whether I need to install HttpFS? If I need to install it, are the above-mentioned steps applicable to HDP 3.0? Best regards,
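For reference, a minimal sketch of how HDFS can be reached once an HttpFS node is running; the host and user are placeholders, and 14000 is HttpFS's default port (an assumption for this cluster):

```
# list /tmp through the WebHDFS-compatible HttpFS REST API
curl "http://<httpfs-host>:14000/webhdfs/v1/tmp?op=LISTSTATUS&user.name=<user>"
```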
09-13-2019
04:48 AM
Hi team, did anyone get a chance to look into this? Thanks and regards.
09-11-2019
09:23 AM
Hi, could you please help me with the installation of "Livy for Spark2 Server" in HDP 3.0? I've tried to follow https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.0/installing-spark/content/installing_spark_using_ambari.html, but it does not give me enough of a clue.
More detail: I want to integrate HDP 3.0 with the KNIME (4.0.1) Analytics Platform. While using Spark I get the error below.
Below is the log file:
```
2019-09-11 20:59:48,543 : ERROR : KNIME-Worker-3 : : GetContextsRequest : Create Spark Context (Jobserver) : 4:204 : HTTP Status code: 302 | Response Body: <html> <head> <title> Moved </title> </head> <body> <h1> Moved </h1> <div> Content has moved <a href="http://sandbox-hdp.hortonworks.com:8088/proxy/redirect/application_1567855906129_0004/contexts">here</a> </div> </body></html>
2019-09-11 20:59:48,545 : ERROR : KNIME-Worker-3 : : Node : Create Spark Context (Jobserver) : 4:204 : Execute failed: Spark Jobserver gave unexpected response For details see View > Open KNIME log. Possible reason: Incompatible Jobserver version, malconfigured Spark Jobserver
org.knime.bigdata.spark.core.exception.KNIMESparkException: Spark Jobserver gave unexpected response For details see View > Open KNIME log. Possible reason: Incompatible Jobserver version, malconfigured Spark Jobserver
    at org.knime.bigdata.spark.core.sparkjobserver.request.AbstractJobserverRequest.createUnexpectedResponseException(AbstractJobserverRequest.java:164)
    at org.knime.bigdata.spark.core.sparkjobserver.request.AbstractJobserverRequest.handleGeneralFailures(AbstractJobserverRequest.java:133)
    at org.knime.bigdata.spark.core.sparkjobserver.request.GetContextsRequest.sendInternal(GetContextsRequest.java:62)
    at org.knime.bigdata.spark.core.sparkjobserver.request.GetContextsRequest.sendInternal(GetContextsRequest.java:1)
    at org.knime.bigdata.spark.core.sparkjobserver.request.AbstractJobserverRequest.send(AbstractJobserverRequest.java:72)
    at org.knime.bigdata.spark.core.sparkjobserver.context.JobserverSparkContext.remoteSparkContextExists(JobserverSparkContext.java:279)
    at org.knime.bigdata.spark.core.sparkjobserver.context.JobserverSparkContext.open(JobserverSparkContext.java:197)
    at org.knime.bigdata.spark.core.context.SparkContext.ensureOpened(SparkContext.java:144)
    at org.knime.bigdata.spark.node.util.context.create.SparkContextCreatorNodeModel.executeInternal(SparkContextCreatorNodeModel.java:115)
    at org.knime.bigdata.spark.core.node.SparkNodeModel.execute(SparkNodeModel.java:240)
    at org.knime.core.node.NodeModel.executeModel(NodeModel.java:567)
    at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1192)
    at org.knime.core.node.Node.execute(Node.java:979)
    at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:559)
    at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
    at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:179)
    at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:110)
    at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:328)
    at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:204)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
    at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
```

(The same 302 response and identical stack trace repeat at 20:59:57 and 21:02:48.)
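As a side note on the Livy part of the question: once "Livy for Spark2 Server" is installed, a quick sanity check is to query its REST API; the host is a placeholder and 8999 is Livy's default port (an assumption here):

```
# should return a JSON list of sessions if the Livy server is up
curl http://<livy-host>:8999/sessions
```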
Labels:
- Hortonworks Data Platform (HDP)
09-07-2019
12:14 PM
Hi, I just wanted to know whether we can set up and use KNIME (a visual data analytics tool). I've tried to install it in HDP 3.0 with the help of "https://docs.knime.com/2018-12/server_installation_guide/index.html". The installation was successful, but when I tried to use sandbox-hdf.hortonworks.com:8090, it would not connect. In the error log file I could see that Tomcat was not able to connect to localhost:8005; I am not sure if this was the reason. Please let me know if anyone has used KNIME in the HDP sandbox, or any other visual data analytics tool. Thanks.
Labels:
- Ambari Blueprints
09-03-2019
09:58 AM
Hi, I am not seeing any error in /var/log/das/das-webapp.log.
08-24-2019
08:21 PM
Hi @cfarnes, thanks for the reply. I gave 28 GB of RAM to VirtualBox to run CDA.
08-23-2019
06:48 PM
Hi,
I am facing an issue:
A Hive table created in Data Analytics Studio is not listed under tables (refer to the screen print below; tables are listed under the "Search Tables" text box).
But when running a select, that table is shown in the results (refer to the screen print below, under results).
Could you please let me know what the issue is here and how it can be corrected?
More information: I am using the HDP 3.0 sandbox with CDA enabled.
Labels:
- Apache Hive
- Data Analytics Studio
08-23-2019
11:57 AM
Hi development team, could you please let me know if anyone has looked at this issue?
08-21-2019
11:05 AM
Hi Hortonworks team,
We are getting the error "DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020" during pip install. I can see that HDP 3.0/HDF 3.2 ship with Python 2.7. My question: after January 1st, 2020, will HDP 3.0/HDF 3.2 stop working, will we no longer be able to install Python libraries through pip, or will there be a new release of the HDP/HDF sandboxes with a Python version newer than 2.7?
08-19-2019
06:08 PM
Hi @cstefan, could you please check and let us know the value of "HIVE_WAREHOUSE_CONNECTOR" and the Hive parameter settings in Spark2?
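For context, a hedged sketch of the Spark2 properties that usually carry the Hive Warehouse Connector settings on HDP 3.x; the hosts and jar version are placeholders, and the exact property set may differ per cluster:

```
spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://<hs2-host>:10000/
spark.datasource.hive.warehouse.metastoreUri=thrift://<metastore-host>:9083
spark.jars=/usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar
```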
08-13-2019
05:27 PM
Hi @dbompart, thanks for your reply. I've tried the following: 1) changed the Zeppelin setting as shown below, 2) restarted the notebook, 3) tried the code below in the notebook and am getting the import error shown below. Requesting your assistance here. Thanks and regards.
08-13-2019
04:00 AM
Hi @dbompart, thanks for the answer. I am using HDP 3.1 and have tried to change the settings per "https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/integrating-hive/content/hive_configure_a_spark_hive_connection.html": 1) Spark settings below; 2) trying to get Hive databases in Spark, with no success; 3) I can see the Hive databases in Hive. Could you please assist me with what else needs to be done?
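For completeness, a hedged sketch of how pyspark can be launched with the Hive Warehouse Connector per the doc above; the jar/zip versions are placeholders, and the paths assume a standard HDP 3.x layout:

```
pyspark --jars /usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-<version>.jar \
        --py-files /usr/hdp/current/hive_warehouse_connector/pyspark_hwc-<version>.zip
```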
08-13-2019
12:18 AM
I am using HDP 3.1.0
07-20-2019
05:56 PM
Please note I am using HDP version 3.0.