Member since
04-04-2022
64
Posts
4
Kudos Received
6
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 244 | 12-08-2022 08:57 PM |
| 198 | 12-07-2022 02:34 AM |
| 258 | 12-06-2022 05:34 AM |
| 314 | 12-01-2022 02:17 AM |
| 294 | 11-25-2022 06:09 AM |
03-08-2023
06:28 AM
To resolve the problem, you need to create the folder:
sudo mkdir /var/lib/hadoop-yarn/
sudo chmod +077 /var/lib/hadoop-yarn/
sudo chown yarn:hadoop /var/lib/hadoop-yarn/
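Not from the thread itself: a minimal, self-contained sketch of the same fix pattern, run against a scratch path so it is safe to try anywhere. The real path (/var/lib/hadoop-yarn/) and owner (yarn:hadoop) come from the post and need root; 0755 is used here as a common permission choice, since the post's `chmod +077` form is unusual.

```shell
# Sketch of the fix pattern on a scratch directory. On a real node the
# path is /var/lib/hadoop-yarn/ and the owner is yarn:hadoop (needs sudo).
# 0755 is a common choice here; the post itself writes "chmod +077".
DIR=${DIR:-/tmp/hadoop-yarn-demo}
mkdir -p "$DIR"
chmod 0755 "$DIR"
ls -ld "$DIR"
```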
02-27-2023
11:34 PM
@Ganeshk Yes, beeline does not have any flag equivalent to -v. However, if you are looking for the command itself, you can find it in the lines containing "Compiling command" or "Executing command".
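For example, a quick way to pull the submitted statements out of a HiveServer2 log (the log path below is an assumption; adjust it to your deployment):

```shell
# Grep the HiveServer2 log for the lines that carry the query text.
# LOG path is an assumption; point it at your actual HiveServer2 log.
LOG=${LOG:-/var/log/hive/hiveserver2.log}
grep -hE 'Compiling command|Executing command' "$LOG" 2>/dev/null | tail -n 20
```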
01-10-2023
08:48 AM
@hanumanth Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information @AsimShaikh has requested? Thanks
01-03-2023
09:35 AM
I am getting errors while running a simple MR job:
2023-01-02 23:29:42,462 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Downloading public resource: { hdfs://hp8300one:8020/user/yarn/mapreduce/mr-framework/3.0.0-cdh6.3.4-mr-framework.tar.gz, 1672446065301, ARCHIVE, null }
2023-01-02 23:29:42,462 ERROR org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Local path for public localization is not found. May be disks failed.
org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:400)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:152)
at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:589)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$PublicLocalizer.addResource(ResourceLocalizationService.java:883)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:781)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.handle(ResourceLocalizationService.java:723)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
at java.lang.Thread.run(Thread.java:750)
2023-01-02 23:29:42,462 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1672729727095_0002_01_000001
2023-01-02 23:29:42,463 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed for container_1672729727095_0002_01_000001
org.apache.hadoop.util.DiskChecker$DiskErrorException: No space available in any of the local directories.
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:400)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:152)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:133)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:117)
at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:584)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1205)
2023-01-02 23:29:42,463 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerImpl: Container container_1672729727095_0002_01_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
2023-01-02 23:29:42,463 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalResourcesTrackerImpl: Container container_1672729727095_0002_01_000001 sent RELEASE event on a resource request { hdfs://hp8300one:8020/user/yarn/mapreduce/mr-framework/3.0.0-cdh6.3.4-mr-framework.tar.gz, 1672446065301, ARCHIVE, null } not present in cache.
2023-01-02 23:29:42,463 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Execution exception when running task in DeletionService #0
2023-01-02 23:29:42,464 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread DeletionService #0:
java.lang.NullPointerException: path cannot be null
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:270)
at org.apache.hadoop.fs.FileContext.delete(FileContext.java:768)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.deletion.task.FileDeletionTask.run(FileDeletionTask.java:109)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
2023-01-02 23:29:42,464 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=sanjay OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: LOCALIZATION_FAILED APPID=application_1672729727095_0002 CONTAINERID=container_1672729727095_0002_01_000001
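Not part of the original post: errors like "No space available in any of the local directories" usually mean the NodeManager local dirs (yarn.nodemanager.local-dirs) are full, unwritable, or marked bad. A quick free-space check, sketched here against /tmp as a stand-in since the real directory list lives in yarn-site.xml:

```shell
# Check free space on a NodeManager local dir. The real paths come from
# yarn.nodemanager.local-dirs in yarn-site.xml; /tmp is only a stand-in.
LOCAL_DIR=${LOCAL_DIR:-/tmp}
avail_kb=$(df -P "$LOCAL_DIR" | awk 'NR==2 {print $4}')
echo "available KB on $LOCAL_DIR: $avail_kb"
```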
01-03-2023
09:03 AM
Hi @Tellyou6bu6 Did you get a chance to check the capacity-scheduler.xml file?
01-03-2023
09:02 AM
@aval Can you please confirm which version of Cloudera you are currently on? Basically, HWC is required when you want to access managed tables via Spark. Also, use of spark.sql.hive.hwc.execution.mode is deprecated as of CDP 7.1.7: https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/integrating-hive-and-bi/topics/hive-hwc-reader-mode.html
01-03-2023
08:58 AM
@myzard Can you please upload the full error message that appears above the Traceback?
01-03-2023
08:55 AM
@Mostafa12345678 Can you share the full stack trace for review?
01-02-2023
06:23 AM
Hi @rudi101101 Can you please confirm whether you are using any timeout settings on the PowerBI end?
01-02-2023
06:02 AM
Hi @SantoshB You can see such messages when you have reached the user-limit-factor limit or resource limit at the queue level. You can try tuning the user-limit factor, or check queue utilization and schedule applications accordingly. Also, it seems the application failed with exit code 13; can you please share the YARN trace so the reason for the failure can be identified?
12-29-2022
05:36 AM
@reca Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks
12-18-2022
05:16 AM
@drgenious This is an OS-level issue that will need to be addressed at the OS level by the system admin. The bottom line here is that thrift-0.9.2 needs to be uninstalled. There are various things that could be happening:
1) Multiple Python versions.
2) Multiple pip versions.
3) A broken installation.
Solution 1: Create a Python virtual environment and connect to impala-shell from it:
virtualenv venv -p python2
cd venv
source bin/activate
(venv) impala-shell
Solution 2:
(i) Remove the easy-install.pth files available in:
/usr/lib/python2.6/site-packages/
/usr/lib64/python2.6/site-packages/
(ii) Try running impala-shell again.
If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.
12-12-2022
01:25 AM
@SchmidtS In case you are not currently using the latest version of the Impala ODBC driver, please update the driver to the latest version (currently 2.6.16). Please note that 2.6.16 supports CDH6, so there should not be any incompatibility.
You can download the latest version of Impala ODBC driver in the below link:
https://www.cloudera.com/downloads/connectors/impala/odbc/2-6-16.html
12-08-2022
08:57 PM
1 Kudo
Hi, you can check the document below for a better understanding of how Queue Manager works:
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/yarn-allocate-resources/topics/yarn-set-user-limits.html
Also, if you are a Cloudera customer, you can check the KBs below:
https://my.cloudera.com/knowledge/How-does-user-limit-factor-impact-capacity-scheduler?id=270996
https://my.cloudera.com/knowledge/How-is-Max-Schedulable-Applications-Per-User-calculated-by?id=271520
https://my.cloudera.com/knowledge/Tuning-the-YARN-Capacity-Scheduler?id=276877
Hope this answers your question. Please mark the solution as accepted if it resolves your issue, and hit like.
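As a rough illustration (not from the post; numbers are made up): the Capacity Scheduler lets a single user grow to roughly queue capacity times the user-limit factor, but never past the queue's max-capacity.

```shell
# Rough sketch of the per-user cap in a Capacity Scheduler queue:
#   cap = min(queue_capacity * user-limit-factor, queue max-capacity)
queue_capacity=25        # queue guaranteed capacity, % of cluster (made up)
user_limit_factor=2      # yarn.scheduler.capacity.<queue>.user-limit-factor
queue_max_capacity=40    # queue max-capacity, % of cluster (made up)
cap=$((queue_capacity * user_limit_factor))
[ "$cap" -gt "$queue_max_capacity" ] && cap=$queue_max_capacity
echo "one user can grow to at most ${cap}% of the cluster"
```

With these numbers, 25% x 2 = 50% is clipped to the queue's 40% max-capacity, which is why raising the user-limit factor alone does not always help.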
12-08-2022
07:18 PM
Thank you very much for the support. I see the log I want. I also want to ask: is there a way to create a workflow with the Oozie editor to send these logs to someone every day?
12-07-2022
12:47 PM
@hanumanth Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks
12-07-2022
02:34 AM
@neters Hope this answers your query. Please mark the solution as accepted, and hit like if you find it helpful.
12-02-2022
09:44 AM
@sss123 Are you able to run Spark commands via spark-shell or spark-submit?
12-01-2022
02:18 AM
Hi @d_liu It could be that you restarted HS2 but didn't log in again from Hue, so it may still be using the same session. An HS2 restart should solve your problem.
11-28-2022
11:09 PM
Thanks a lot. This "yarn application -updatePriority 10 -appId application_xxxx_xx" seems to be a YARN feature. It does not work for Spark 2.x in CDH 6.3.2 either. Is it for the same reason, i.e. that 'Application Priority' requires the YARN version to match the Spark version?
11-28-2022
01:50 AM
@WuHua Can you please try adding this property and re-running the query? Add --control_service_queue_mem_limit=200M to "Impala Daemon Command Line Argument Advanced Configuration Snippet (Safety Valve)". Thanks!
11-21-2022
02:11 AM
2 Kudos
Hi @Tellyou6bu6 If you are installing a trial version of CDP, it will use an embedded PostgreSQL database for the installation. From the documentation on installing a trial cluster: "In this procedure, Cloudera Manager automates the installation of the Oracle JDK, Cloudera Manager Server, embedded PostgreSQL database, Cloudera Manager Agent, Runtime, and managed service software on cluster hosts. Cloudera Manager also configures databases for the Cloudera Manager Server and Hive Metastore and optionally for Cloudera Management Service roles." You can check the link below for a detailed explanation: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/installation/topics/cdpdc-trial-installation.html Regards, Asim
11-18-2022
05:46 AM
@shamly Can you share full stack trace?
11-17-2022
02:11 AM
Hi @tencentemr Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
11-17-2022
02:09 AM
@pankshiv1809 Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
11-02-2022
09:32 PM
Hi Asim, thanks for the reply. I can see the log using the command while the Hadoop cluster is running. After I reboot the cluster, the history logs disappear from the page. The config is as below:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<!-- History server address -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>hadoop201:10020</value>
</property>
<!-- History server web address -->
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>hadoop201:19888</value>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/opt/module/hadoop-3.1.3/logs/his_log/done</value>
  <description>Location of logs managed by the MR JobHistory Server; default: /mr-history/done</description>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/opt/module/hadoop-3.1.3/logs/his_log</value>
  <description>Location of logs produced by MapReduce jobs; default: /mr-history/tmp</description>
</property>
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/opt/module/hadoop-3.1.3/logs/mr-stage-his</value>
  <description></description>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/opt/module/hadoop-3.1.3/logs/resource_manager_logs</value>
</property>
<!-- Log aggregation server address -->
<property>
  <name>yarn.log.server.url</name>
  <value>http://hadoop201:19888/jobhistory/logs</value>
</property>
<!-- Set log retention to 7 days -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>5184000</value>
</property>
Much appreciated for the help.
11-01-2022
05:03 AM
Do you have sample code which you can share?