Member since: 08-07-2017
Posts: 144
Kudos Received: 3
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2222 | 03-05-2019 12:48 AM |
| | 9310 | 11-06-2017 07:28 PM |
05-17-2021
11:15 PM
Did you find the solution for this?
10-04-2019
10:51 AM
This happens with me also. Amazingly, it gives the exact same timeout: [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. Note that 9223372036854775807 is Long.MAX_VALUE, which is what KafkaProducer.close() uses when called with no timeout, so the producer is simply waiting indefinitely for in-flight requests to complete.
08-12-2019
02:09 PM
Hi All, I am facing the same error while trying to connect through the ODBC driver. Do you have any idea how you resolved this issue? The error I get in ODBC is: "Failed to initialize security context: No authority could be contacted for authentication." And in the hiveserver2 log: "Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream". Thanks & Regards, Siva
06-10-2019
07:35 AM
Hi, did you try changing the value of the property spark.rpc.askTimeout to a higher value and submitting the job again?
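For reference, a minimal sketch of how that property can be passed at submit time (the jar name and the 600s value are illustrative placeholders, not recommendations):

```shell
# Raise the RPC ask timeout for this submission only.
# spark.rpc.askTimeout falls back to spark.network.timeout (default 120s),
# so raising both keeps the two settings consistent.
spark-submit \
  --master yarn \
  --conf spark.rpc.askTimeout=600s \
  --conf spark.network.timeout=600s \
  your-app.jar
```

The same values can instead be set cluster-wide in spark-defaults.conf if the timeout is hit by many jobs.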
03-19-2019
07:39 AM
Hi Priya, we understand that you have set the swappiness value to 10. Cloudera recommends setting vm.swappiness to a value between 1 and 10, preferably 1, for minimum swapping. The higher the value, the more aggressively the kernel swaps inactive process memory out of physical memory; the lower the value, the less it swaps, preferring to reclaim filesystem (page cache) buffers instead. Thanks, AK
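A sketch of checking and applying the setting on a typical Linux host (paths assume sysctl configuration lives in /etc/sysctl.conf; some distributions use /etc/sysctl.d/ instead):

```shell
# Check the current value
cat /proc/sys/vm/swappiness

# Apply immediately (does not survive a reboot)
sudo sysctl -w vm.swappiness=1

# Persist across reboots
echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

This needs to be done on every host in the cluster, not just the Cloudera Manager node.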
03-05-2019
05:35 AM
Thanks for sharing! Feel free to mark your last reply as the solution by clicking the "Accept as Solution" button.
01-31-2019
10:31 PM
Hi Team, can you please help with the below "connection reset by peer" error?

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
19/01/31 18:43:18 INFO spark.SparkContext: Running Spark version 2.2.0.cloudera1
19/01/31 18:43:19 WARN spark.SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
19/01/31 18:43:19 INFO spark.SparkContext: Submitted application: Prime_CEP_BGFR_1309_Process Rates/Other Errors
19/01/31 18:43:19 INFO spark.SecurityManager: Changing view acls to: ggbmgphdpngrp
19/01/31 18:43:19 INFO spark.SecurityManager: Changing modify acls to: ggbmgphdpngrp
19/01/31 18:43:19 INFO spark.SecurityManager: Changing view acls groups to:
19/01/31 18:43:19 INFO spark.SecurityManager: Changing modify acls groups to:
19/01/31 18:43:19 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ggbmgphdpngrp); groups with view permissions: Set(); users with modify permissions: Set(ggbmgphdpngrp); groups with modify permissions: Set()
19/01/31 18:43:19 INFO util.Utils: Successfully started service 'sparkDriver' on port 50000.
19/01/31 18:43:19 INFO spark.SparkEnv: Registering MapOutputTracker
19/01/31 18:43:19 INFO spark.SparkEnv: Registering BlockManagerMaster
19/01/31 18:43:19 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/01/31 18:43:19 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/01/31 18:43:19 INFO storage.DiskBlockManager: Created local directory at /PBMG/users/ggbmgphdpngrp/prime/cep/tmp/blockmgr-88cc1ce5-d255-4009-9864-25e5f567879e
19/01/31 18:43:19 INFO memory.MemoryStore: MemoryStore started with capacity 6.2 GB
19/01/31 18:43:20 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/01/31 18:43:20 INFO util.log: Logging initialized @2402ms
19/01/31 18:43:20 INFO server.Server: jetty-9.3.z-SNAPSHOT
19/01/31 18:43:20 INFO server.Server: Started @2475ms
19/01/31 18:43:20 INFO server.AbstractConnector: Started ServerConnector@3ad394e6{HTTP/1.1,[http/1.1]}{0.0.0.0:52000}
19/01/31 18:43:20 INFO util.Utils: Successfully started service 'SparkUI' on port 52000.
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26f143ed{/jobs,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61a5b4ae{/jobs/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5b69fd74{/jobs/job,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77b325b3{/jobs/job/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e8e8651{/stages,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@271f18d3{/stages/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61e3a1fd{/stages/stage,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@315df4bb{/stages/stage/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5cad8b7d{/stages/pool,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@25243bc1{/stages/pool/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2e6ee0bc{/storage,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@467f77a5{/storage/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@420bc288{/storage/rdd,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@308a6984{/storage/rdd/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a34b7b8{/environment,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3be8821f{/environment/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b65e559{/executors,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74a9c4b0{/executors/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1c05a54d{/executors/threadDump,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5fd9b663{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@10567255{/static,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60b85ba1{/,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@117632cf{/api,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@159e366{/jobs/job/kill,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24528a25{/stages/stage/kill,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.6.209.22:52000
19/01/31 18:43:20 INFO spark.SparkContext: Added JAR file:/PBMG/users/ggbmgphdpngrp/prime/cep/prime-cep.jar at spark://10.6.209.22:50000/jars/prime-cep.jar with timestamp 1548956600301
19/01/31 18:43:20 INFO util.Utils: Using initial executors = 15, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/01/31 18:43:24 INFO yarn.Client: Requesting a new application from cluster with 8 NodeManagers
19/01/31 18:43:25 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (61440 MB per container)
19/01/31 18:43:25 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
19/01/31 18:43:25 INFO yarn.Client: Setting up container launch context for our AM
19/01/31 18:43:25 INFO yarn.Client: Setting up the launch environment for our AM container
19/01/31 18:43:25 INFO yarn.Client: Preparing resources for our AM container
19/01/31 18:43:25 INFO security.HadoopFSCredentialProvider: getting token for: hdfs://nameservice-np/user/ggbmgphdpngrp
19/01/31 18:43:25 INFO hdfs.DFSClient: Created token for ggbmgphdpngrp: HDFS_DELEGATION_TOKEN owner=ggbmgphdpngrp@BMEDIA.BAGINT.COM, renewer=yarn, realUser=, issueDate=1548956605075, maxDate=1549561405075, sequenceNumber=1281621, masterKeyId=1013 on ha-hdfs:nameservice-np
19/01/31 18:43:26 INFO hive.metastore: Trying to connect to metastore with URI thrift://gtunxlnu00853.server.arvato-systems.de:9083
19/01/31 18:43:26 INFO hive.metastore: Opened a connection to metastore, current connections: 1
19/01/31 18:43:26 INFO hive.metastore: Connected to metastore.
19/01/31 18:43:27 INFO metadata.Hive: Registering function dateconversion com.infosys.bmg.analytics.Date_Convert
19/01/31 18:43:27 INFO metadata.Hive: Registering function calc_week com.bmg.main.CalcWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function prev_week com.infosys.bmg.analytics.HiveUdfPrevWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function prev_week com.infosys.bmg.analytics.HiveUdfPrevWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function dateconversion com.infosys.bmg.analytics.Date_Convert
19/01/31 18:43:27 INFO metadata.Hive: Registering function date_convert com.infosys.bmg.analytics.Date_Convert
19/01/31 18:43:27 INFO metadata.Hive: Registering function calc_week com.infosys.bmg.analytics.HiveUdfCalcWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function day_of_week com.infosys.bmg.analytics.HiveUdfDayOfWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function beginning_of_fin_week_func com.infosys.bmg.date.Begining_Of_Financial_Week
19/01/31 18:43:27 INFO metadata.Hive: Registering function end_of_fin_week_func com.infosys.bmg.date.End_Of_Financial_Week
19/01/31 18:43:27 INFO metadata.Hive: Registering function dateconversion com.infosys.bmg.analytics.DateConvertFlash
19/01/31 18:43:27 INFO metadata.Hive: Registering function beginning_of_fin_week_func_ada com.infosys.bmg.date.BeginingOfFinancialWeekADA
19/01/31 18:43:27 INFO metadata.Hive: Registering function end_of_fin_week_func_ada com.infosys.bmg.date.EndOfFinancialWeekADA
19/01/31 18:43:27 INFO metadata.Hive: Registering function first_financial_day_func_ada com.infosys.bmg.date.FirstFinancialDayOfYearADA
19/01/31 18:43:27 INFO metadata.Hive: Registering function titleconversionudf com.infosys.bmg.Pr
09-12-2018
06:17 AM
1 Kudo
For example, for 4 GB map and reduce memory, set this via Hive:
set mapreduce.map.memory.mb=4096;
set mapreduce.reduce.memory.mb=4096;
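A sketch of a full session with the matching JVM heap settings; the -Xmx values follow the common guideline of roughly 80% of the container size, which is a heuristic rather than a fixed rule:

```sql
-- Per-session override in Hive (beeline or the Hive CLI)
set mapreduce.map.memory.mb=4096;     -- YARN container size for mappers
set mapreduce.reduce.memory.mb=4096;  -- YARN container size for reducers
set mapreduce.map.java.opts=-Xmx3276m;     -- heap ~80% of the 4096 MB container
set mapreduce.reduce.java.opts=-Xmx3276m;
```

If java.opts is left at a value larger than the container, YARN will kill the task for exceeding its memory limit, so the two settings should be changed together.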
08-31-2018
07:36 PM
You mentioned that Kerberos authentication and Sentry are NOT enabled; in that case you do not have any security at all. The username and password you entered are not checked by Hive. You should enable Kerberos authentication and Sentry authorization if you want any security in your cluster.
07-20-2018
09:24 AM
Hello - I am unable to see any other details. It does pull the file in the first 1-2 hours, but then stops pulling logs from RabbitMQ into the HDFS path. After running for 1-2 hours, it throws an error and no file is created. So every time, we need to stop the agent and start it again.