Member since: 01-11-2017
Posts: 18
Kudos Received: 5
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 2857 | 01-11-2017 02:19 AM |
02-23-2021
04:41 AM
Yes, you can download the HDFS client configuration from Cloudera Manager, but that is not always possible, for example when you work in a different department or run into bureaucratic hurdles... And whenever the HDFS configuration changes, you have to download it again, so it is not a scalable approach in large environments. The best option is to work on the cluster itself (on a gateway host if possible), but for external Flume agents I don't think a proper, scalable solution exists.
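If the external Flume hosts can at least reach Cloudera Manager, one option is to script the download instead of doing it by hand. A minimal sketch, assuming the CM API is reachable; the host, credentials, API version, cluster name, service name and target directory below are placeholders, not values from this thread:

# Fetch the current HDFS client configuration zip from the Cloudera Manager API
# and unpack it where the external Flume agent can pick it up.
$ curl -s -u admin:admin \
    "http://cm_host:7180/api/v19/clusters/Cluster1/services/hdfs/clientConfig" \
    -o hdfs-clientconfig.zip
# The zip usually unpacks into a hadoop-conf/ directory; adjust the path if yours differs.
$ unzip -o hdfs-clientconfig.zip -d /opt/flume/conf
$ export HADOOP_CONF_DIR=/opt/flume/conf/hadoop-conf

Re-running this (for example from cron) after a configuration change at least removes the manual step, even if it does not solve the organisational problem.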
01-31-2019
10:31 PM
Hi Team, can you please help with the error below? We are getting a "connection reset by peer" error.

Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
19/01/31 18:43:18 INFO spark.SparkContext: Running Spark version 2.2.0.cloudera1
19/01/31 18:43:19 WARN spark.SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
19/01/31 18:43:19 INFO spark.SparkContext: Submitted application: Prime_CEP_BGFR_1309_Process Rates/Other Errors
19/01/31 18:43:19 INFO spark.SecurityManager: Changing view acls to: ggbmgphdpngrp
19/01/31 18:43:19 INFO spark.SecurityManager: Changing modify acls to: ggbmgphdpngrp
19/01/31 18:43:19 INFO spark.SecurityManager: Changing view acls groups to:
19/01/31 18:43:19 INFO spark.SecurityManager: Changing modify acls groups to:
19/01/31 18:43:19 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ggbmgphdpngrp); groups with view permissions: Set(); users with modify permissions: Set(ggbmgphdpngrp); groups with modify permissions: Set()
19/01/31 18:43:19 INFO util.Utils: Successfully started service 'sparkDriver' on port 50000.
19/01/31 18:43:19 INFO spark.SparkEnv: Registering MapOutputTracker
19/01/31 18:43:19 INFO spark.SparkEnv: Registering BlockManagerMaster
19/01/31 18:43:19 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/01/31 18:43:19 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/01/31 18:43:19 INFO storage.DiskBlockManager: Created local directory at /PBMG/users/ggbmgphdpngrp/prime/cep/tmp/blockmgr-88cc1ce5-d255-4009-9864-25e5f567879e
19/01/31 18:43:19 INFO memory.MemoryStore: MemoryStore started with capacity 6.2 GB
19/01/31 18:43:20 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/01/31 18:43:20 INFO util.log: Logging initialized @2402ms
19/01/31 18:43:20 INFO server.Server: jetty-9.3.z-SNAPSHOT
19/01/31 18:43:20 INFO server.Server: Started @2475ms
19/01/31 18:43:20 INFO server.AbstractConnector: Started ServerConnector@3ad394e6{HTTP/1.1,[http/1.1]}{0.0.0.0:52000}
19/01/31 18:43:20 INFO util.Utils: Successfully started service 'SparkUI' on port 52000.
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@26f143ed{/jobs,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61a5b4ae{/jobs/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5b69fd74{/jobs/job,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77b325b3{/jobs/job/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e8e8651{/stages,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@271f18d3{/stages/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61e3a1fd{/stages/stage,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@315df4bb{/stages/stage/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5cad8b7d{/stages/pool,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@25243bc1{/stages/pool/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2e6ee0bc{/storage,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@467f77a5{/storage/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@420bc288{/storage/rdd,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@308a6984{/storage/rdd/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a34b7b8{/environment,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3be8821f{/environment/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b65e559{/executors,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@74a9c4b0{/executors/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1c05a54d{/executors/threadDump,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5fd9b663{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@10567255{/static,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@60b85ba1{/,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@117632cf{/api,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@159e366{/jobs/job/kill,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24528a25{/stages/stage/kill,null,AVAILABLE,@Spark}
19/01/31 18:43:20 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.6.209.22:52000
19/01/31 18:43:20 INFO spark.SparkContext: Added JAR file:/PBMG/users/ggbmgphdpngrp/prime/cep/prime-cep.jar at spark://10.6.209.22:50000/jars/prime-cep.jar with timestamp 1548956600301
19/01/31 18:43:20 INFO util.Utils: Using initial executors = 15, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
19/01/31 18:43:24 INFO yarn.Client: Requesting a new application from cluster with 8 NodeManagers
19/01/31 18:43:25 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (61440 MB per container)
19/01/31 18:43:25 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
19/01/31 18:43:25 INFO yarn.Client: Setting up container launch context for our AM
19/01/31 18:43:25 INFO yarn.Client: Setting up the launch environment for our AM container
19/01/31 18:43:25 INFO yarn.Client: Preparing resources for our AM container
19/01/31 18:43:25 INFO security.HadoopFSCredentialProvider: getting token for: hdfs://nameservice-np/user/ggbmgphdpngrp
19/01/31 18:43:25 INFO hdfs.DFSClient: Created token for ggbmgphdpngrp: HDFS_DELEGATION_TOKEN owner=ggbmgphdpngrp@BMEDIA.BAGINT.COM, renewer=yarn, realUser=, issueDate=1548956605075, maxDate=1549561405075, sequenceNumber=1281621, masterKeyId=1013 on ha-hdfs:nameservice-np
19/01/31 18:43:26 INFO hive.metastore: Trying to connect to metastore with URI thrift://gtunxlnu00853.server.arvato-systems.de:9083
19/01/31 18:43:26 INFO hive.metastore: Opened a connection to metastore, current connections: 1
19/01/31 18:43:26 INFO hive.metastore: Connected to metastore.
19/01/31 18:43:27 INFO metadata.Hive: Registering function dateconversion com.infosys.bmg.analytics.Date_Convert
19/01/31 18:43:27 INFO metadata.Hive: Registering function calc_week com.bmg.main.CalcWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function prev_week com.infosys.bmg.analytics.HiveUdfPrevWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function prev_week com.infosys.bmg.analytics.HiveUdfPrevWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function dateconversion com.infosys.bmg.analytics.Date_Convert
19/01/31 18:43:27 INFO metadata.Hive: Registering function date_convert com.infosys.bmg.analytics.Date_Convert
19/01/31 18:43:27 INFO metadata.Hive: Registering function calc_week com.infosys.bmg.analytics.HiveUdfCalcWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function day_of_week com.infosys.bmg.analytics.HiveUdfDayOfWeek
19/01/31 18:43:27 INFO metadata.Hive: Registering function beginning_of_fin_week_func com.infosys.bmg.date.Begining_Of_Financial_Week
19/01/31 18:43:27 INFO metadata.Hive: Registering function end_of_fin_week_func com.infosys.bmg.date.End_Of_Financial_Week
19/01/31 18:43:27 INFO metadata.Hive: Registering function dateconversion com.infosys.bmg.analytics.DateConvertFlash
19/01/31 18:43:27 INFO metadata.Hive: Registering function beginning_of_fin_week_func_ada com.infosys.bmg.date.BeginingOfFinancialWeekADA
19/01/31 18:43:27 INFO metadata.Hive: Registering function end_of_fin_week_func_ada com.infosys.bmg.date.EndOfFinancialWeekADA
19/01/31 18:43:27 INFO metadata.Hive: Registering function first_financial_day_func_ada com.infosys.bmg.date.FirstFinancialDayOfYearADA
19/01/31 18:43:27 INFO metadata.Hive: Registering function titleconversionudf com.infosys.bmg.Pr
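Not the cause of the connection reset, but the first warning in the log is easy to clear up: the yarn-client master string is deprecated in Spark 2.x. A minimal sketch of the equivalent invocation (the class name and jar below are placeholders, not the actual command from this job):

# Old style, deprecated since Spark 2.0:
$ spark-submit --master yarn-client --class com.example.Main app.jar
# Replacement: give the resource manager and the deploy mode separately.
$ spark-submit --master yarn --deploy-mode client --class com.example.Main app.jar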
10-04-2017
07:48 AM
As suspected, there were no available DataNodes to place replicas on, since I had the default replication factor of 3 and only 3 DataNodes in total. The balancer started working fine after adding a fourth DataNode to the cluster.
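For anyone hitting the same symptom, a quick sketch of how to confirm that the live DataNode count is not smaller than the replication factor before blaming the balancer (the threshold value is just an example):

# Default replication factor configured for the cluster
$ hdfs getconf -confKey dfs.replication
# Number of live DataNodes reported by the NameNode
$ hdfs dfsadmin -report | grep "Live datanodes"
# Once there are more DataNodes than replicas per block, the balancer has room to move data
$ hdfs balancer -threshold 10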
09-05-2017
10:36 PM
It's possible, but if you cannot upgrade to the latest version, you can try my steps to recreate it manually. Regards, Marc.
08-22-2017
05:50 AM
Hi, thanks for the response. I am not running Enterprise or BDR, and I have no snapshots. I'm really confused: Cloudera Manager is now reporting 600 corrupt blocks, as does hdfs dfsadmin -report. However, if I run hdfs fsck /, it shows no corrupt blocks. Any idea what would cause the difference between "hdfs dfsadmin -report" and "hdfs fsck /"?
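A side-by-side check of the two views can help narrow this down; a minimal sketch, run as the HDFS superuser:

# Block-level view from the NameNode's live state
$ hdfs dfsadmin -report | grep -i corrupt
# File-level view: list the files that currently have corrupt blocks
$ hdfs fsck / -list-corruptfileblocks
# Include files open for write, which fsck skips by default
$ hdfs fsck / -openforwrite | grep -i corrupt

As far as I understand it, dfsadmin's "Blocks with corrupt replicas" also counts blocks that still have a healthy replica somewhere, while fsck only flags a block once no healthy replica is left, which can explain exactly this kind of mismatch.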
06-02-2017
07:23 AM
Thanks a lot. I encountered the same problem while upgrading from CDH 5.10.0 to CDH 5.11.0: the management services (including Navigator) were not able to start. I followed your instructions, and after restarting the Cloudera agent the management services started again.
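For anyone finding this later, the agent restart mentioned above is, depending on your OS and init system (an assumption on my part), something like:

# SysV-style init (older RHEL/CentOS)
$ sudo service cloudera-scm-agent restart
# systemd-based systems
$ sudo systemctl restart cloudera-scm-agent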
01-11-2017
02:19 AM
Hi cpluplus1,

To log into HiveServer2 from the command line you need this:

$ beeline -u "jdbc:hive2://hive_node:10000/;principal=hive/_HOST@ad_domain"

To reach the HiveServer2 web UI: http://hive_node:10002/

To run queries against Hive from Hue: https://hue_node:8888/notebook/editor?type=hive

Which user are you logging into Hue with? Maybe you don't have enough privileges to access the Hive query editor. Can you try with an administrator user and validate it?

Marc.
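As a usage example on a Kerberized cluster, a minimal sketch where the user, hostname, realm and test query are placeholders, not values from this thread:

# Get a Kerberos ticket first, then connect and run a quick test query.
$ kinit myuser@AD_DOMAIN.COM
$ beeline -u "jdbc:hive2://hive_node:10000/default;principal=hive/_HOST@AD_DOMAIN.COM" \
    -e "SHOW DATABASES;"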