Member since 10-29-2015
Posts: 128
Kudos Received: 31
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2372 | 06-27-2024 02:42 AM |
| | 3579 | 06-24-2022 09:06 AM |
| | 4952 | 01-19-2021 06:56 AM |
| | 60092 | 01-18-2016 06:59 PM |
02-15-2019
02:38 AM
Hello, I would like to know how to see the various activities performed by users in Cloudera Manager; for example, which user restarted a service or moved a cluster / service into maintenance mode. It would be great if anyone could share some information on this. Thanks snm1523
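For reference, Cloudera Manager records these actions under Audits in the CM UI, and the same events can be pulled over the CM REST API. Below is a minimal sketch, assuming a CM host of cm-host.example.com on the default port 7180, API version v19, and admin credentials (all placeholders to adjust); the exact audit query parameters depend on your CM version.
# List recent Cloudera Manager audit events (service restarts, maintenance mode changes, etc.)
curl -s -u admin:admin "http://cm-host.example.com:7180/api/v19/audits?maxResults=50"
# Narrow to a time window if your CM version supports it (ISO 8601 timestamps)
curl -s -u admin:admin "http://cm-host.example.com:7180/api/v19/audits?startTime=2019-02-01T00:00:00&endTime=2019-02-15T00:00:00"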
Labels:
- Cloudera Manager
12-04-2018
01:52 AM
Hello TK, Were you able to fix this? Would be great if you could please share the solution as I have the same problem. Thanks Snm1523
09-19-2018
01:50 AM
Hello @bgooley, Thank you for the guidance. I will attempt the suggested approach and report back. Thanks snm1523
09-11-2018
07:40 AM
Hello, I am trying to deploy a POC cluster using Path A of the Cloudera Manager installation documentation. However, it is failing with the below message in the logs:
cat 5.install-cloudera-manager-server-db-2.log
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package cloudera-manager-server-db-2.x86_64 0:5.15.1-1.cm5151.p0.3.el7 will be installed
--> Processing Dependency: postgresql-server >= 8.4 for package: cloudera-manager-server-db-2-5.15.1-1.cm5151.p0.3.el7.x86_64
--> Finished Dependency Resolution
Error: Package: cloudera-manager-server-db-2-5.15.1-1.cm5151.p0.3.el7.x86_64 (cloudera-manager)
Requires: postgresql-server >= 8.4
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
I also tried to use a local repository and point the .bin installer at it using --skip_repo_package=1. However, I was unable to find packages for PostgreSQL 8.4 anywhere, so I created a repo of 9.6 instead. CM first hunts for PostgreSQL 8.4, then finds 9.6 in the repo, but fails at the dependency check; it still needs a few dependent packages (5-6) to proceed. Before I download each package and put it in the repo, I am seeking suggestions / help from the community on why the installation with the Cloudera repos (installation Path A) is failing with that message. Thanks snm1523
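One workaround sketch that may be worth trying, assuming the host can reach the standard OS base repos (or a local mirror of them): install postgresql-server from the OS repository first, so the dependency is already satisfied when the embedded-database package is installed, and then re-run the installer. Package availability varies by RHEL/CentOS release, so treat the commands below as a sketch rather than a confirmed fix.
# Check which postgresql-server the OS repos provide (CM only requires >= 8.4)
yum info postgresql-server
# Install it from the OS base repo or local mirror before re-running the installer
sudo yum install -y postgresql-server
# Then re-run the Path A installer
sudo ./cloudera-manager-installer.bin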
Labels:
- Cloudera Manager
04-05-2018
02:15 AM
Hi Jaimie, I did a quick Google search and found the settings below; they may resolve the issue:
set hive.tez.container.size=2048;
set hive.tez.java.opts=-Xmx1700m; -- set this to roughly 80% of hive.tez.container.size
Please give it a try if you feel confident about it. Thanks snm1502
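The same settings can also be passed per session from the shell instead of typing them at the Hive prompt; a small sketch, where query.hql is a placeholder file name:
hive --hiveconf hive.tez.container.size=2048 \
     --hiveconf hive.tez.java.opts=-Xmx1700m \
     -f query.hql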
09-28-2017
10:59 PM
3 Kudos
Hello @bgooley, Thank you for the reply. We have already set swappiness to 1 but are still getting these swap usage warnings. Regards, Snm
09-28-2017
10:43 PM
Hello Vogone, Thanks for the reply. Yes, swappiness is set to 1. Regards, snm
09-28-2017
03:16 AM
1 Kudo
Hello, We have been getting regular warning messages about swap memory being utilized beyond the threshold limit. Currently, we have set the swap memory thresholds as follows: HDFS = 100 MB, Impala = 30 MB, YARN = 500 MB (could be +/-20%; I don't remember the exact numbers). Swap memory usage for each of these components crosses its threshold and reaches up to 720 MB (in the case of YARN), so we usually see warnings on our CM dashboard. I know increasing the swap memory thresholds would remove the warnings; however, we would rather reduce the usage itself. It would be great if anyone could suggest memory tuning options that reduce swap usage. Also, if there are recommended best practices for setting the memory usage thresholds, kindly share those as well. Thanks Snm
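For anyone hitting the same warnings, a first step is usually to confirm the kernel swappiness value on each host and see which processes actually have pages swapped out. A minimal sketch, assuming RHEL/CentOS-style hosts and the commonly recommended value of 1 for Hadoop nodes:
# Check the current swappiness on a host
cat /proc/sys/vm/swappiness
# Set it to 1 for the running kernel
sudo sysctl -w vm.swappiness=1
# Persist the setting across reboots
echo "vm.swappiness = 1" | sudo tee -a /etc/sysctl.conf
# See which processes currently have the most memory swapped out
grep VmSwap /proc/*/status 2>/dev/null | sort -t: -k2 -n -r | head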
Labels:
- Apache Impala
- Apache YARN
- Cloudera Manager
- HDFS
01-18-2016
06:59 PM
Hello, I was able to find a solution to this issue. You would need to add the below property to mapred-site.xml, based on your Hadoop version:
Hadoop 1.x:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9101</value>
</property>
Hadoop 2.x:
<property>
  <name>mapreduce.jobtracker.address</name>
  <value>localhost:9101</value>
</property>
Thanks snm1523
01-15-2016
06:23 PM
Thank you for the prompt response, Harsh. Output of ls -l /tmp/training:
[training@localhost ~]$ ls -l /tmp/training/
total 36
-rw-rw-r-- 1 training training 3175 Jan 15 21:14 hive_job_log_training_201601152114_2081367348.txt
-rw-rw-r-- 1 training training 3175 Jan 15 21:15 hive_job_log_training_201601152115_1734130449.txt
-rw-rw-r-- 1 training training 9762 Jan 15 21:15 hive.log
-rw-rw-r-- 1 training training 15704 Jan 15 21:05 training_20160115210505_a8632532-ad5d-40c8-8265-b0147c38655c.log
Also, even after setting HADOOP_CLIENT_OPTS before running the CLI and setting hadoop.tmp.dir=., I am getting the same error:
[training@localhost ~]$ HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=."
[training@localhost ~]$ hive
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/tmp/training/hive_job_log_training_201601152151_801169159.txt
hive> use case_ipl;
OK
Time taken: 1.814 seconds
hive> select * from bowling_no_partition limit 5;
OK
1 RV Uthappa Kolkata 36.4 299 16 18.68 8.15 13.7
2 GJ Maxwell Punjab 14.2 113 3 37.66 7.88 28.6
4 DR Smith Chennai 13.0 135 4 33.75 10.38 19.5
7 SK Raina Chennai 8.0 43 4 10.75 5.37 12.0
8 JP Duminy Delhi 47.5 334 13 25.69 6.98 22.0
Time taken: 0.806 seconds
hive> select name from bowling_no_partition;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Execution log at: /tmp/training/training_20160115215252_e2cea1ec-fc8e-46e4-ac82-6357d76ce6d1.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 1; number of reducers: 0
2016-01-15 21:52:25,188 null map = 0%, reduce = 0%
2016-01-15 21:52:41,173 null map = 100%, reduce = 0%
Ended Job = job_1452912287088_0002 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1452912287088_0002_m_000000 (and more) from job job_1452912287088_0002
Unable to retrieve URL for Hadoop Task logs. Does not contain a valid host:port authority: local
Task with the most failures(4):
-----
Task ID:
task_1452912287088_0002_m_000000
URL:
Unavailable
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"name":"RV Uthappa","team":"Kolkata","overs":36.4,"runs":299,"wickets":16,"avg":18.68,"economy":8.15,"strike_rate":13.7}
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:152)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:147)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"name":"RV Uthappa","team":"Kolkata","overs":36.4,"runs":299,"wickets":16,"avg":18.68,"economy":8.15,"strike_rate":13.7}
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Mkdirs failed to create file:/tmp/training/hive_2016-01-15_21-52-16_114_6229068466610181659/_task_tmp.-ext-10001
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:237)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:477)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:525)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.io.IOException: Mkdirs failed to create file:/tmp/training/hive_2016-01-15_21-52-16_114_6229068466610181659/_task_tmp.-ext-10001
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:434)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:420)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:805)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:685)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:674)
at org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat.getHiveRecordWriter(HiveIgnoreKeyTextOutputFormat.java:80)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:246)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:234)
... 20 more
Execution failed with exit status: 2
Obtaining error information
Task failed!
Task ID:
Stage-1
Logs:
/tmp/training/hive.log
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
hive> set hadoop.tmp.dir=.;
hive> select name from bowling_no_partition;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Execution log at: /tmp/training/training_20160115215252_1aa2c785-f1ba-45db-b3c1-4a2f45a26be6.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 1; number of reducers: 0
2016-01-15 21:52:58,406 null map = 0%, reduce = 0%
2016-01-15 21:53:15,462 null map = 100%, reduce = 0%
Ended Job = job_1452912287088_0003 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1452912287088_0003_m_000000 (and more) from job job_1452912287088_0003
Unable to retrieve URL for Hadoop Task logs. Does not contain a valid host:port authority: local
Task with the most failures(4):
-----
Task ID:
task_1452912287088_0003_m_000000
URL:
Unavailable
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"name":"RV Uthappa","team":"Kolkata","overs":36.4,"runs":299,"wickets":16,"avg":18.68,"economy":8.15,"strike_rate":13.7}
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:152)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:147)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1,"name":"RV Uthappa","team":"Kolkata","overs":36.4,"runs":299,"wickets":16,"avg":18.68,"economy":8.15,"strike_rate":13.7}
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Mkdirs failed to create file:/tmp/training/hive_2016-01-15_21-52-49_840_1514077634964084555/_task_tmp.-ext-10001
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:237)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:477)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:525)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
... 9 more
Caused by: java.io.IOException: Mkdirs failed to create file:/tmp/training/hive_2016-01-15_21-52-49_840_1514077634964084555/_task_tmp.-ext-10001
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:434)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:420)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:805)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:685)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:674)
at org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat.getHiveRecordWriter(HiveIgnoreKeyTextOutputFormat.java:80)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:246)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:234)
... 20 more
Execution failed with exit status: 2
Obtaining error information
Task failed!
Task ID:
Stage-1
Logs:
/tmp/training/hive.log
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
hive>
Thanks snm1523
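For anyone debugging a similar "Mkdirs failed to create file:/tmp/..." error from a local-mode Hive job, a couple of quick environment checks are worth running before changing configuration; this is a sketch only, and the mapred-site.xml property in the 01-18-2016 post above appears to be the fix the poster eventually accepted.
# Confirm the temp directory exists, is writable by the current user, and has free space
ls -ld /tmp/training
df -h /tmp
# Print the scratch-directory setting the CLI is actually using (property names vary by Hive version)
hive -e "set hive.exec.scratchdir;"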