Member since: 05-09-2017
Posts: 107
Kudos Received: 7
Solutions: 6
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 3157 | 03-19-2020 01:30 PM |
| 16247 | 11-27-2019 08:22 AM |
| 8781 | 07-05-2019 08:21 AM |
| 15382 | 09-25-2018 12:09 PM |
| 5821 | 08-10-2018 07:46 AM |
05-02-2018
06:24 AM
In our case the reducers were failing with an OOM issue, so we first increased the reducer memory (mapreduce.reduce.memory.mb) and mapreduce.reduce.java.opts. After a few days the job failed again, so we kept the existing memory and instead increased the number of reducers from 40 to 60. This resolved our issue and we haven't seen a failure since. We cannot keep increasing reducer memory, which could cause other issues. A lower number of reducers creates fewer, but larger, output files. A good rule of thumb is to tune the number of reducers so that the output files are at least half a block size. If the reducers complete quickly and generate small files, there are too many reducers, which was not the case for us.
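For reference, a minimal sketch of the settings discussed above (the jar name, driver class, paths, and values are illustrative assumptions, not the exact ones we ran; it also assumes the driver parses generic options via ToolRunner):

# Raise reducer container memory and heap, and spread the work across more reducers.
hadoop jar my-job.jar com.example.MyDriver \
  -D mapreduce.reduce.memory.mb=4096 \
  -D mapreduce.reduce.java.opts=-Xmx3276m \
  -D mapreduce.job.reduces=60 \
  /input /output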
03-19-2018
07:11 PM
@saranvisa After increasing the reducer heap and opts, the job worked for a few days, but now we are seeing this issue again: not a single reducer completes, and the job fails after 4 hrs with ALL reducers failed. Failed reducer log:

dfs.DFSClient: Slow waitForAckedSeqno took 38249ms (threshold=30000ms). File being written: /user/hadoop/normalization/6befd9a02400013179aba889/16cb62ff-463a-448b-b1d3-1cf5bb254466/_temporary/1/_temporary/attempt_1517244318452_37939_r_000028_0/custom_attribute_dir/part-00028.gz, block: BP-71764089-10.239.121.82-1481226593627:blk_1103397861_29724995, Write pipeline datanodes: [DatanodeInfoWithStorage[10.239.121.39:50010,DS-15b1c936-e838-41a2-ab40-7889aab95982,DISK], DatanodeInfoWithStorage[10.239.121.21:50010,DS-d5d914b6-6886-443b-9e39-8347c24cc9b7,DISK], DatanodeInfoWithStorage[10.239.121.56:50010,DS-63498815-70ea-48e2-b701-f0c439e38711,DISK]]
2018-03-19 23:54:17,315 WARN [main] org.apache.hadoop.hdfs.DFSClient: Slow waitForAckedSeqno took 35411ms (threshold=30000ms). File being written: /user/hadoop/normalization/6befd9a02400013179aba889/16cb62ff-463a-448b-b1d3-1cf5bb254466/_temporary/1/_temporary/attempt_1517244318452_37939_r_000028_0/documents_dir/part-00028.gz, block: BP-71764089-10.239.121.82-1481226593627:blk_1103400051_29727493, Write pipeline datanodes: [DatanodeInfoWithStorage[10.239.121.39:50010,DS-15b1c936-e838-41a2-ab40-7889aab95982,DISK], DatanodeInfoWithStorage[10.239.121.176:50010,DS-ae2d35e1-7a7e-44dc-9016-1d11881d49cc,DISK], DatanodeInfoWithStorage[10.239.121.115:50010,DS-86b207ef-b8ce-4a9f-9f6f-ddc182695296,DISK]]
2018-03-19 23:54:51,983 WARN [main] org.apache.hadoop.hdfs.DFSClient: Slow waitForAckedSeqno took 34579ms (threshold=30000ms). File being written: /user/hadoop/normalization/6befd9a02400013179aba889/16cb62ff-463a-448b-b1d3-1cf5bb254466/_temporary/1/_temporary/attempt_1517244318452_37939_r_000028_0/form_path_dir/part-00028.gz, block: BP-71764089-10.239.121.82-1481226593627:blk_1103400111_29727564, Write pipeline datanodes: [DatanodeInfoWithStorage[10.239.121.39:50010,DS-15b1c936-e838-41a2-ab40-7889aab95982,DISK], DatanodeInfoWithStorage[10.239.121.176:50010,DS-ae2d35e1-7a7e-44dc-9016-1d11881d49cc,DISK], DatanodeInfoWithStorage[10.239.121.21:50010,DS-d5d914b6-6886-443b-9e39-8347c24cc9b7,DISK]]
2018-03-19 23:55:47,506 WARN [main] org.apache.hadoop.hdfs.DFSClient: Slow waitForAckedSeqno took 55388ms (threshold=30000ms). File being written: /user/hadoop/normalization/6befd9a02400013179aba889/16cb62ff-463a-448b-b1d3-1cf5bb254466/_temporary/1/_temporary/attempt_1517244318452_37939_r_000028_0/media_hr_dir/part-00028.gz, block: BP-71764089-10.239.121.82-1481226593627:blk_1103400160_29727615, Write pipeline datanodes: [DatanodeInfoWithStorage[10.239.121.39:50010,DS-15b1c936-e838-41a2-ab40-7889aab95982,DISK], DatanodeInfoWithStorage[10.239.121.176:50010,DS-ae2d35e1-7a7e-44dc-9016-1d11881d49cc,DISK], DatanodeInfoWithStorage[10.239.121.56:50010,DS-63498815-70ea-48e2-b701-f0c439e38711,DISK]]
2018-03-19 23:55:47,661 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.regex.Pattern.matcher(Pattern.java:1093)
    at java.lang.String.replaceAll(String.java:2223)
    at com.xxx.ci.acs.extract.CXAService$myReduce.parseEvent(CXAService.java:1589)
    at com.xxx.ci.acs.extract.CXAService$myReduce.reduce(CXAService.java:915)
    at com.xxx.ci.acs.extract.CXAService$myReduce.reduce(CXAService.java:233)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2018-03-19 23:55:47,763 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping ReduceTask metrics system...
2018-03-19 23:55:47,763 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system stopped.
2018-03-19 23:55:47,763 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ReduceTask metrics system shutdown complete.

What are the other tuning parameters we can try?
03-02-2018
08:11 AM
Thank you. Yes, this worked after the certs were added to jssecacerts.
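For anyone hitting the same thing, a minimal sketch of importing a CA certificate into jssecacerts (the alias, certificate path, and keystore location are assumptions; adjust for your JDK, and note the default store password is "changeit"):

# Import the CA cert into the JVM-wide jssecacerts truststore.
keytool -importcert -alias my-ca \
  -file /path/to/ca_cert.pem \
  -keystore $JAVA_HOME/jre/lib/security/jssecacerts \
  -storepass changeit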
02-02-2018
07:04 AM
1 Kudo
I am able to successfully put a file into a non-encrypted zone.
When I try to put a file into an encrypted zone I see the error below. The file, however, is copied to the encrypted zone.
desind@xxxx:~#> hdfs dfs -put users.txt /ccnd/test
18/02/01 06:54:19 WARN kms.LoadBalancingKMSClientProvider: KMS provider at [https://xxxx.com:16000/kms/v1/] threw an IOException [Key retrieval failed.]!!
Caused by: java.lang.NullPointerException: No KeyVersion exists for key 'testTLS1'
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:231)
    at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:294)
    at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:511)
    at org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension$EncryptedQueueRefiller.fillQueueForKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:76)
    at org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:246)
    at org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:240)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
    ... 54 more
2018-02-02 09:52:50,353 WARN org.apache.hadoop.crypto.key.kms.server.KMS: User hdfs/xxxx.com@VSP.SAS.COM (auth:KERBEROS) request GET https://xxxx.com:16000/kms/v1/key/testTLS1/_eek?num_keys=150&eek_op=generate caused exception.
Can someone advise where to check?
We have Kerberos and SSL enabled in the cluster.
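One check that might help narrow this down is whether the key is actually visible through the KMS provider; a sketch (the provider URI is assumed from the error above):

# List keys and their metadata through the configured KMS provider.
hadoop key list -metadata
# Or point at the KMS explicitly.
hadoop key list -provider kms://https@xxxx.com:16000/kms -metadata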
01-26-2018
12:18 PM
This setting is a global one. Are there options at a pipeline level?
12-22-2017
06:57 AM
After enabling SSL I am not able to log in to Navigator with my own username/password. If I try as admin it works. The error in the Navigator Metadata Server logs shows this:

Caused by: javax.naming.CommunicationException: simple bind failed: ldap.vsp.sas.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]
    at com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:219)
    at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2788)
    at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:319)
    at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:192)
    at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:210)
    at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:153)
    at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:83)
    at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
    at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
    at javax.naming.InitialContext.init(InitialContext.java:244)
    at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:154)
    at org.springframework.security.ldap.authentication.ad.ActiveDirectoryLdapAuthenticationProvider$ContextFactory.createContext(ActiveDirectoryLdapAuthenticationProvider.java:347)
    at org.springframework.security.ldap.authentication.ad.ActiveDirectoryLdapAuthenticationProvider.bindAsUser(ActiveDirectoryLdapAuthenticationProvider.java:181)
    ... 57 more
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
    at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
    at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
    at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
    at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1514)
    at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
    at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1026)
    at sun.security.ssl.Handshaker.process_record(Handshaker.java:961)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
    at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:747)
    at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at com.sun.jndi.ldap.Connection.writeRequest(Connection.java:426)
    at com.sun.jndi.ldap.Connection.writeRequest(Connection.java:399)
    at com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:359)
    at com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:214)
    ... 69 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
    at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
    at sun.security.validator.Validator.validate(Validator.java:260)
    at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
    at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
    at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
    at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1496)
    ... 82 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
    at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
    at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
    at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
    ... 88 more
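For what it's worth, one way to see which certificate chain the LDAP server actually presents, so it can be compared against the truststore the Navigator Metadata Server JVM uses, is a quick openssl check (the hostname below is taken from the error above):

# Show the certificate chain presented by the LDAPS endpoint.
openssl s_client -connect ldap.vsp.sas.com:636 -showcerts </dev/null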
Labels:
- Cloudera Navigator
12-14-2017
06:14 AM
This issue was resolved by changing HIVE.DBS to point to nameservice1. This is the equivalent update in an Oracle-backed metastore:

SQL> update HIVE.DBS set DB_LOCATION_URI = 'hdfs://nameservice1/user/hive/warehouse' where NAME='default';
1 row updated.

SQL> select DB_LOCATION_URI from HIVE.DBS where NAME = 'default';
DB_LOCATION_URI
----------------------------------------------------------------------------------------------------
hdfs://nameservice1/user/hive/warehouse

SQL> commit;
Commit complete.

Making this update through Cloudera Manager did not work in 5.13.0.
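As an alternative to editing the metastore tables by hand, the Hive metatool can rewrite the stored locations; a sketch (the old URI below is taken from the original error and may differ in your environment):

# List the filesystem root currently recorded in the metastore.
hive --service metatool -listFSRoot
# Rewrite locations from the old NameNode URI to the HA nameservice.
hive --service metatool -updateLocation hdfs://nameservice1 hdfs://abc23.xxx.com:8020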
11-14-2017
06:43 AM
I am not sure if this was resolved, but after I upgraded from 5.11.1 to 5.13.0 I am seeing this error in spark2-shell:

scala> spark.sqlContext.sql("CREATE TABLE IF NOT EXISTS default.employee_test123(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'")
java.lang.IllegalArgumentException: Wrong FS: hdfs://abc23.xxx.com:8020/user/hive/warehouse/employee_test123, expected: hdfs://nameservice1
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:662)
    at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:482)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:231)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:200)

I have followed the instructions in https://www.cloudera.com/documentation/enterprise/5-11-x/topics/cdh_hag_hdfs_ha_cdh_components_config.html#topic_2_6_3 but am still seeing the same issue.
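One sanity check (just a sketch) is to confirm the client configuration the shell picks up actually resolves to the nameservice:

# Should print hdfs://nameservice1 if the HA client configs are being applied.
hdfs getconf -confKey fs.defaultFS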
11-09-2017
12:32 PM
1 Kudo
Hi, I am seeing an HDFS error while creating a table from the spark2 shell:

desind@fsad145:~#> spark2-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/11/09 14:22:43 WARN spark.SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
Spark context Web UI available at http://******:4040
Spark context available as 'sc' (master = yarn, app id = application_1510255229586_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0.cloudera2
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sql("CREATE TABLE IF NOT EXISTS default.employee_test(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'")
java.lang.IllegalArgumentException: Wrong FS: hdfs://******:8020/user/hive/warehouse/employee_test, expected: hdfs://nameservice1
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:662)
    at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:482)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply$mcV$sp(HiveExternalCatalog.scala:231)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:200)
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createTable$1.apply(HiveExternalCatalog.scala:200)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:98)
    at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:200)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:248)
    at org.apache.spark.sql.execution.command.CreateTableCommand.run(tables.scala:116)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:600)
    ... 48 elided
Labels:
- Apache Spark
- HDFS
11-06-2017
01:43 PM
Hi, I have just installed spark2 in CDH 5.13.0. After enabling spark.authenticate I get an error when running a Spark job:

PYSPARK_PYTHON=/sso/sfw/cloudera/parcels/Anaconda/bin/python spark2-submit pyspark_script.py
17/11/06 16:35:53 INFO spark.SparkContext: Running Spark version 2.1.0.cloudera2
17/11/06 16:35:53 WARN spark.SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/11/06 16:35:54 INFO spark.SecurityManager: Changing view acls to: desind
17/11/06 16:35:54 INFO spark.SecurityManager: Changing modify acls to: desind
17/11/06 16:35:54 INFO spark.SecurityManager: Changing view acls groups to:
17/11/06 16:35:54 INFO spark.SecurityManager: Changing modify acls groups to:
17/11/06 16:35:54 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Error: a secret key must be specified via the spark.authenticate.secret config
    at org.apache.spark.SecurityManager.generateSecretKey(SecurityManager.scala:457)
    at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:229)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:236)
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:174)
    at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:258)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:435)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:236)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
17/11/06 16:35:54 INFO spark.SparkContext: Successfully stopped SparkContext
Traceback (most recent call last):
  File "/home/desind/pyspark_script.py", line 3, in <module>
    sc = SparkContext(master='local[*]', appName='MyPySparkScript')
  File "/sso/sfw/cloudera/parcels/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658/lib/spark2/python/lib/pyspark.zip/pyspark/context.py", line 118, in __init__
  File "/sso/sfw/cloudera/parcels/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658/lib/spark2/python/lib/pyspark.zip/pyspark/context.py", line 182, in _do_init
  File "/sso/sfw/cloudera/parcels/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658/lib/spark2/python/lib/pyspark.zip/pyspark/context.py", line 249, in _initialize_context
  File "/sso/sfw/cloudera/parcels/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658/lib/spark2/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1401, in __call__
  File "/sso/sfw/cloudera/parcels/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658/lib/spark2/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalArgumentException: Error: a secret key must be specified via the spark.authenticate.secret config
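For context, on YARN the spark.authenticate secret is generated automatically, but the traceback shows the script creating its context with master='local[*]', where a secret has to be supplied explicitly. A sketch of one way to do that (the secret value is a placeholder):

# Supply an explicit shared secret when not running on YARN.
PYSPARK_PYTHON=/sso/sfw/cloudera/parcels/Anaconda/bin/python spark2-submit \
  --conf spark.authenticate=true \
  --conf spark.authenticate.secret=REPLACE_WITH_A_SECRET \
  pyspark_script.py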
Labels:
- Apache Spark