Member since: 07-01-2015
Posts: 460
Kudos Received: 78
Solutions: 43
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1344 | 11-26-2019 11:47 PM
 | 1301 | 11-25-2019 11:44 AM
 | 9470 | 08-07-2019 12:48 AM
 | 2171 | 04-17-2019 03:09 AM
 | 3483 | 02-18-2019 12:23 AM
04-09-2019
10:06 AM
You can remove them if you don't need them. In fact, this should be part of your regular maintenance: as far as I know, there is no auto-rotation or automatic deletion of lineage logs.
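Since there is no built-in rotation, a small cron-able script can do the cleanup. A minimal sketch, assuming a 30-day retention window; the `LINEAGE_DIR` default below is a placeholder, so point it at the lineage directory actually configured for your service:

```shell
#!/bin/sh
# Sketch only: the default path is an assumption -- use the lineage
# directory configured for your cluster.
LINEAGE_DIR="${LINEAGE_DIR:-/var/log/hadoop/lineage}"
if [ -d "$LINEAGE_DIR" ]; then
  # Delete lineage log files not modified in the last 30 days.
  find "$LINEAGE_DIR" -type f -mtime +30 -delete
fi
```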
03-29-2019
05:59 AM
I would recommend deploying it via Cloudera Altus Director; you can find some templates here: https://github.com/cloudera/director-scripts You can also speed up the bootstrap with pre-baked images.
03-29-2019
01:07 AM
Just curious, is it on-prem or in the cloud?
03-29-2019
01:02 AM
1 Kudo
Delete the old, unused log directories. The directories are named in the form application_<timestamp>_sequence, so you can easily write a script to remove everything older than X days.
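As a starting point, here is a sketch of such a script. The `LOG_ROOT` default is an assumption, so point it at the parent directory that actually holds the application_* folders on your nodes:

```shell
#!/bin/sh
# Sketch only: LOG_ROOT is a placeholder path.
LOG_ROOT="${LOG_ROOT:-/var/log/hadoop-yarn/apps}"
DAYS="${DAYS:-7}"
if [ -d "$LOG_ROOT" ]; then
  # Remove application_* directories older than $DAYS days.
  find "$LOG_ROOT" -maxdepth 1 -type d -name 'application_*' \
    -mtime +"$DAYS" -exec rm -rf {} +
fi
```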
03-29-2019
01:00 AM
Based on my observation there is no limit on the maximum number of files, but Kudu compresses old logs, so you can safely remove all the *.gz logs.
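For example, a one-off cleanup of the compressed logs could look like this. The default path is an assumption; use the log directory configured for your Kudu daemons:

```shell
#!/bin/sh
# Sketch only: the default path is a placeholder for your Kudu log dir.
KUDU_LOG_DIR="${KUDU_LOG_DIR:-/var/log/kudu}"
if [ -d "$KUDU_LOG_DIR" ]; then
  # Delete only the gzip-compressed (already rotated) log files.
  find "$KUDU_LOG_DIR" -type f -name '*.gz' -delete
fi
```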
03-14-2019
03:57 AM
Hi,
in a two-cluster environment where each cluster has its own KDC and a trust is configured between the KDCs, I cannot read data via Spark. Am I missing some property for spark-shell or spark-submit?
Local HDFS: devhanameservice
Remote HDFS: hanameservice
Running an hdfs dfs -ls from dev against prod works fine:
[centos@<dev-gateway> ~]$ hdfs dfs -ls hdfs://hanameservice/tmp
Found 6 items
d--------- - hdfs supergroup 0 2019-03-14 11:47 hdfs://hanameservice/tmp/.cloudera_health_monitoring_canary_files
...
But trying to access a file on the remote HDFS from spark-shell returns this:
[centos@<dev-gateway> ~]$ spark2-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://<dev-gateway>.eu-west-1.compute.internal:4040
Spark context available as 'sc' (master = yarn, app id = application_1552545238536_0261).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.3.0.cloudera4
/_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val t = sc.textFile("hdfs://hanameservice/tmp/external/test/file.csv")
t: org.apache.spark.rdd.RDD[String] = hdfs://hanameservice/tmp/external/test/file.csv MapPartitionsRDD[1] at textFile at <console>:24
scala> t.count()
[Stage 0:> (0 + 1) / 28]19/03/14 11:45:04 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, <worker-node>, executor 28): java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "<worker-node>/10.85.150.22"; destination host is: "<remote-name-node>":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1508)
at org.apache.hadoop.ipc.Client.call(Client.java:1441)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy18.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:268)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy19.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1324)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1311)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1299)
at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:315)
at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:267)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1630)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:339)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:335)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:335)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:784)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:214)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:381)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
I am able to run MapReduce jobs with this property:
mapreduce.job.hdfs-servers.token-renewal.exclude=hanameservice
Is this something I should put in the Spark settings? And if yes, how?
Thanks
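For reference, Hadoop client properties can generally be forwarded to Spark with the spark.hadoop.* prefix, and Spark on YARN has spark.yarn.access.hadoopFileSystems for requesting tokens for extra filesystems. A hedged, untested sketch of combining the two (whether this exact pair resolves the error depends on your CDH/Spark version):

```
# Sketch only -- untested against a real cluster.
# spark.hadoop.<key> forwards <key> into Spark's Hadoop configuration;
# spark.yarn.access.hadoopFileSystems lists extra filesystems to get tokens for.
spark2-shell \
  --conf spark.yarn.access.hadoopFileSystems=hdfs://hanameservice \
  --conf spark.hadoop.mapreduce.job.hdfs-servers.token-renewal.exclude=hanameservice
```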
Labels:
- Apache Spark
- HDFS
03-14-2019
03:35 AM
How are you doing the user-to-group resolution? Have you added your user name on all the nodes, or are you using LDAP/AD integration? It may be that the HBase node does not know you are a member of the hbase superuser group.
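A quick way to compare the two resolution paths on a given node; the username below is a placeholder, and the hdfs groups command needs an HDFS client on that node:

```shell
#!/bin/sh
# Check OS-level group resolution on the node you are connecting from.
id -Gn "$(whoami)"
# Check what the cluster's group mapping returns for you
# (requires an HDFS client on the node; username is a placeholder):
# hdfs groups <username>
```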
02-20-2019
05:16 PM
Hi @dbompart I tried: spark2-shell --conf spark.yarn.access.hadoopFileSystems=hdfs://ip-10-85-54-144.eu-west-1.compute.internal:8020 but it fails to launch with this error: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, Service: 10.85.54.144:8020, Ident: (token for dssetl: HDFS_DELEGATION_TOKEN ..... I had this issue before with distcp, where it was resolved by setting mapreduce.job.hdfs-servers.token-renewal.exclude=ip-10-85-54-144.eu-west-1.compute.internal How can I set this in spark2-shell too? And how can I point spark2-shell to custom conf files? Thanks