Member since: 09-01-2016
Posts: 52
Kudos Received: 13
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 14984 | 02-26-2018 02:37 PM
| 2532 | 01-25-2017 07:41 PM
06-02-2023
04:22 PM
1 Kudo
This response is NOT about fixing "files with corrupt replicas" but about finding and fixing files that are completely corrupt, that is, files with no good replicas left to recover from. The "files with corrupt replicas" warning means a file has at least one corrupt replica but can still be recovered from the remaining ones. In that case hdfs fsck /path ... will not show these files, because it considers them healthy. These files and their corrupted replicas are only reported by the command hdfs dfsadmin -report and, as far as I know, there is no direct command to fix this. The only way I have found is to wait for the Hadoop cluster to heal itself by re-replicating the bad replicas from the good ones.
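The two situations can be told apart from the command line; a minimal sketch (the path is hypothetical, and the commands require a running HDFS cluster):

```shell
# Files that are beyond recovery (no healthy replica left) are listed by fsck:
hdfs fsck / -list-corruptfileblocks

# A file that merely has *some* corrupt replicas still reports as HEALTHY here:
hdfs fsck /path/to/data -files -blocks -locations

# The cluster-wide "Blocks with corrupt replicas" count appears only in:
hdfs dfsadmin -report
```

The gap between the two reports is exactly the set of files that HDFS can still heal on its own.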
01-11-2023
09:18 AM
Hello everyone. Since CentOS 8 was discontinued more than a year ago, and Rocky Linux / AlmaLinux have taken over its role as free RHEL-compatible rebuild distributions, I would like to know whether Cloudera already has a date scheduled in the near future to start supporting either of these distributions as a base operating system for CDP Base and related products. Thanks in advance.
Labels:
- Cloudera Data Platform (CDP)
02-07-2019
08:18 PM
We need to configure Superset, running within HDP 3.1, to use an existing LDAP server. We could not find any proper documentation on how to do this. Are there any defined steps? Thanks in advance.
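In stock Superset, LDAP authentication is wired up through Flask-AppBuilder settings in superset_config.py; a minimal sketch, where every hostname, DN, and credential is a placeholder assumption (HDP's packaging may manage this file for you, so check before editing by hand):

```python
# superset_config.py -- illustrative values only
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP
AUTH_LDAP_SERVER = "ldap://ldap.example.com:389"    # hypothetical LDAP host
AUTH_LDAP_SEARCH = "ou=people,dc=example,dc=com"    # subtree where users live
AUTH_LDAP_BIND_USER = "cn=admin,dc=example,dc=com"  # service bind DN
AUTH_LDAP_BIND_PASSWORD = "changeme"
AUTH_LDAP_UID_FIELD = "uid"                         # attribute matched to the login name

# Auto-create Superset users on first successful LDAP login:
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Gamma"
```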
Labels:
- Hortonworks Data Platform (HDP)
01-14-2019
12:35 PM
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=10
yarn.scheduler.capacity.root.default.maximum-capacity=30
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=2
yarn.scheduler.capacity.root.queues=Hive,Zeppelin,default
yarn.scheduler.capacity.queue-mappings=u:zeppelin:Zeppelin,u:hdfs:Hive,g:dl-analytics-group:Zeppelin
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.Hive.acl_administer_queue=*
yarn.scheduler.capacity.root.Hive.acl_submit_applications=*
yarn.scheduler.capacity.root.Hive.capacity=50
yarn.scheduler.capacity.root.Hive.maximum-capacity=90
yarn.scheduler.capacity.root.Hive.minimum-user-limit-percent=25
yarn.scheduler.capacity.root.Hive.ordering-policy=fair
yarn.scheduler.capacity.root.Hive.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.Hive.priority=10
yarn.scheduler.capacity.root.Hive.state=RUNNING
yarn.scheduler.capacity.root.Hive.user-limit-factor=2
yarn.scheduler.capacity.root.Zeppelin.acl_administer_queue=*
yarn.scheduler.capacity.root.Zeppelin.acl_submit_applications=*
yarn.scheduler.capacity.root.Zeppelin.capacity=40
yarn.scheduler.capacity.root.Zeppelin.maximum-capacity=80
yarn.scheduler.capacity.root.Zeppelin.minimum-user-limit-percent=20
yarn.scheduler.capacity.root.Zeppelin.ordering-policy=fair
yarn.scheduler.capacity.root.Zeppelin.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.Zeppelin.priority=5
yarn.scheduler.capacity.root.Zeppelin.state=RUNNING
yarn.scheduler.capacity.root.Zeppelin.user-limit-factor=3
yarn.scheduler.capacity.root.default.minimum-user-limit-percent=25
yarn.scheduler.capacity.root.default.ordering-policy=fair
yarn.scheduler.capacity.root.default.ordering-policy.fair.enable-size-based-weight=false
yarn.scheduler.capacity.root.default.priority=0
yarn.scheduler.capacity.root.maximum-capacity=100
yarn.scheduler.capacity.root.ordering-policy=priority-utilization
yarn.scheduler.capacity.root.priority=0
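After changing properties like the ones above (in capacity-scheduler.xml directly, or via Ambari on HDP), the scheduler can usually be reloaded without restarting the ResourceManager; a sketch of the standard commands, assuming a working YARN client:

```shell
# Re-read capacity-scheduler.xml into the running ResourceManager:
yarn rmadmin -refreshQueues

# Spot-check that a queue picked up the new capacities:
yarn queue -status Hive
```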
01-11-2019
08:37 PM
We have defined several YARN queues. Say queue Q1 is used by users A and B to run Spark jobs. If A submits a job that demands all of the queue's resources, YARN allocates them. When B then submits a job, it is starved of resources. We need to prevent this situation by sharing resources more evenly between A and B (and any other incoming users) within Q1. We have already set the ordering policy to Fair. Can this greedy resource allocation behaviour be prevented?
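In the Capacity Scheduler, the per-user share inside a single queue is governed by minimum-user-limit-percent and user-limit-factor rather than by the ordering policy alone; a hedged sketch for the hypothetical queue Q1 (property values are assumptions to illustrate the mechanism, not a recommendation):

```
# Once two users are active, guarantee each at least half the queue:
yarn.scheduler.capacity.root.Q1.minimum-user-limit-percent=50
# Cap any single user at 1x the queue's configured capacity,
# so one job cannot absorb the entire queue:
yarn.scheduler.capacity.root.Q1.user-limit-factor=1
yarn.scheduler.capacity.root.Q1.ordering-policy=fair
```

Note that these limits apply to new allocations: containers A already holds are not preempted, so with Spark it also helps to enable dynamic allocation so idle executors are released back to the queue.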
Labels:
- Apache YARN
03-07-2018
12:40 PM
For me, proxy settings did not work, no matter whether they were set in IntelliJ, sbtconfig.txt, or environment variables. Two things solved this issue (for me, at least): use SBT 0.13.16 (not newer than that) and set Use Auto Import. After that, no "FAILED DOWNLOADS" messages appear.
02-26-2018
02:37 PM
A couple of settings solved this issue: use SBT 0.13.16 (not newer than that) and set Use Auto Import. With those in place, no "FAILED DOWNLOADS" messages appear.
02-23-2018
07:27 PM
I am trying to use Scala/Spark from IntelliJ on Windows 7, but IntelliJ (and the SBT command line) fails to download files. I am behind a proxy server. Similar problems have already been reported here. Already tried:
- SBT versions 0.13.16, 1.0.3 and 1.1.1
- setting the proxy properties (in JAVA_OPTS, SBT_OPTS, sbtconfig.txt): -Dhttp.proxyHost=*** -Dhttp.proxyPort=*** -Dhttp.proxyUser=*** -Dhttp.proxyPassword=*** -Dhttps.proxyHost=*** -Dhttps.proxyPort=*** -Dhttps.proxyUser=*** -Dhttps.proxyPassword=***, without success
- verified the issue in SBT:

d:\Users\user1>sbt.bat
[info] Loading project definition from D:\Users\user1\project
[info] Updating {file:/D:/Users/user1/project/}user1-build...
[warn] [FAILED ] org.apache.logging.log4j#log4j-core;2.8.1!log4j-core.jar(test-jar): typesafe-ivy-releases: unable to get resource for org.apache.logging.log4j#log4j-core;2.8.1: res=https://repo.typesafe.com/typesafe/ivy-releases/org.apache.logging.log4j/log4j-core/2.8.1/test-jars/log4j-core-tests.jar: java.io.IOException: Failed to authenticate with proxy (353ms)
[warn] [FAILED ] org.apache.logging.log4j#log4j-core;2.8.1!log4j-core.jar(test-jar): sbt-plugin-releases: unable to get resource for org.apache.logging.log4j#log4j-core;2.8.1: res=https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/org.apache.logging.log4j/log4j-core/2.8.1/test-jars/log4j-core-tests.jar: java.io.IOException: Failed to authenticate with proxy (8ms)
[warn] [FAILED ] org.apache.logging.log4j#log4j-core;2.8.1!log4j-core.jar(test-jar): public: unable to get resource for org/apache/logging/log4j#log4j-core;2.8.1: res=https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-core/2.8.1/log4j-core-2.8.1-tests.jar: java.io.IOException: Failed to authenticate with proxy (8ms)
[warn] Detected merged artifact: [FAILED ] org.apache.logging.log4j#log4j-core;2.8.1!log4j-core.jar(test-jar): (0ms).
[warn] ==== typesafe-ivy-releases: tried
[warn] ==== sbt-plugin-releases: tried
[warn]   https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/org.apache.logging.log4j/log4j-core/2.8.1/test-jars/log4j-core-tests.jar
[warn] ==== local: tried
[warn]   d:\Users\user1\.ivy2\local\org.apache.logging.log4j\log4j-core\2.8.1\test-jars\log4j-core-tests.jar
[warn] ==== public: tried
[warn]   https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-core/2.8.1/log4j-core-2.8.1-tests.jar
[warn] ==== local-preloaded-ivy: tried
[warn]   d:\Users\user1\.sbt\preloaded\org.apache.logging.log4j\log4j-core\2.8.1\test-jars\log4j-core-tests.jar
[warn] ==== local-preloaded: tried
[warn]   file:/d:/Users/user1/.sbt/preloaded/org/apache/logging/log4j/log4j-core/2.8.1/log4j-core-2.8.1-tests.jar
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] ::          FAILED DOWNLOADS            ::
[warn] :: ^ see resolution messages for details  ^ ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: org.apache.logging.log4j#log4j-core;2.8.1!log4j-core.jar(test-jar)
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[error] sbt.librarymanagement.ResolveException: download failed: org.apache.logging.log4j#log4j-core;2.8.1!log4j-core.jar(test-jar)
[error] at sbt.internal.librarymanagement.IvyActions$.resolveAndRetrieve(IvyActions.scala:331)
[error] at sbt.internal.librarymanagement.IvyActions$.$anonfun$updateEither$1(IvyActions.scala:205)
[error] at sbt.internal.librarymanagement.IvySbt$Module.$anonfun$withModule$1(Ivy.scala:229)
[error] at sbt.internal.librarymanagement.IvySbt.$anonfun$withIvy$1(Ivy.scala:190)
[error] at sbt.internal.librarymanagement.IvySbt.sbt$internal$librarymanagement$IvySbt$$action$1(Ivy.scala:70)
[error] at sbt.internal.librarymanagement.IvySbt$$anon$3.call(Ivy.scala:77)
[error] at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:95)
[error] at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:80)
[error] at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:99)
[error] at xsbt.boot.Using$.withResource(Using.scala:10)
[error] at xsbt.boot.Using$.apply(Using.scala:9)
[error] at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:60)
[error] at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:50)
[error] at xsbt.boot.Locks$.apply0(Locks.scala:31)
[error] at xsbt.boot.Locks$.apply(Locks.scala:28)
[error] at sbt.internal.librarymanagement.IvySbt.withDefaultLogger(Ivy.scala:77)
[error] at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:185)
[error] at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:182)
[error] at sbt.internal.librarymanagement.IvySbt$Module.withModule(Ivy.scala:228)
[error] at sbt.internal.librarymanagement.IvyActions$.updateEither(IvyActions.scala:190)
[error] at sbt.librarymanagement.ivy.IvyDependencyResolution.update(IvyDependencyResolution.scala:20)
[error] at sbt.librarymanagement.DependencyResolution.update(DependencyResolution.scala:56)
[error] at sbt.internal.LibraryManagement$.resolve$1(LibraryManagement.scala:38)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$12(LibraryManagement.scala:91)
[error] at sbt.util.Tracked$.$anonfun$lastOutput$1(Tracked.scala:68)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$19(LibraryManagement.scala:104)
[error] at scala.util.control.Exception$Catch.apply(Exception.scala:224)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11(LibraryManagement.scala:104)
[error] at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11$adapted(LibraryManagement.scala:87)
[error] at sbt.util.Tracked$.$anonfun$inputChanged$1(Tracked.scala:149)
[error] at sbt.internal.LibraryManagement$.cachedUpdate(LibraryManagement.scala:118)
[error] at sbt.Classpaths$.$anonfun$updateTask$5(Defaults.scala:2353)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:42)
[error] at sbt.std.Transform$$anon$4.work(System.scala:64)
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:257)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] at sbt.Execute.work(Execute.scala:266)
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:257)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:167)
[error] at sbt.CompletionService$$anon$2.call(CompletionService.scala:32)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] at java.lang.Thread.run(Thread.java:748)
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
[error] (*:update) sbt.librarymanagement.ResolveException: download failed: org.apache.logging.log4j#log4j-core;2.8.1!log4j-core.jar(test-jar)
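For reference, when the proxy properties are set through environment variables they must be passed as separate, space-delimited JVM flags; a hypothetical Windows sketch (host, port, and credentials are placeholders, not the real values):

```
rem Windows cmd; all values below are placeholders
set SBT_OPTS=-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=user1 -Dhttp.proxyPassword=secret -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080 -Dhttps.proxyUser=user1 -Dhttps.proxyPassword=secret
sbt.bat
```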
Labels:
- Apache Spark
12-11-2017
02:48 PM
When you use Spark from a Zeppelin notebook and at the end issue a context stop, sc.stop(), it affects the context of other running Zeppelin notebooks, making them fail because there is no active Spark context. They seem to be sharing the same Spark context. How can this be avoided?
Labels:
- Apache Spark
- Apache Zeppelin
10-24-2017
03:54 PM
I am trying to create a new policy in Ranger. When I click Add, 'Error creating policy' is shown (see attached captura.jpg). This is what /var/log/ranger/admin/xa_portal.log shows:

2017-10-24 15:51:28,710 [http-bio-6080-exec-12] INFO org.apache.ranger.common.RESTErrorUtil (RESTErrorUtil.java:345) - Request failed. loginId=holger_gov, logMessage=User 'holger_gov' does not have delegated-admin privilege on given resources javax.ws.rs.WebApplicationException at org.apache.ranger.common.RESTErrorUtil.createRESTException(RESTErrorUtil.java:337)

A connection test to the Ranger Admin DB runs OK (jdbc:postgresql://localhost:5432/ranger). This is a 2.6.1 sandbox-based environment. What can be causing this issue?
Labels:
- Apache Ranger