Member since: 02-23-2016
Posts: 48
Kudos Received: 7
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 967 | 10-27-2016 05:49 AM
 | 8170 | 08-25-2016 07:02 AM
 | 1334 | 04-22-2016 04:31 AM
 | 531 | 04-22-2016 03:53 AM
 | 7071 | 03-01-2016 10:38 PM
11-08-2016
06:33 AM
Hi Sagar, thank you for your hints, but I can't test it because my cluster is destroyed. 🙂 Klaus
10-28-2016
06:56 AM
Hello, the client installation process failed with this error on all nodes:
stderr: /var/lib/ambari-agent/data/errors-3460.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 75, in <module>
OozieClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 36, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 404, in install_packages
Package(name)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 49, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/apt.py", line 53, in wrapper
return function_to_decorate(self, name, *args[2:])
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/apt.py", line 97, in install_package
self.checked_call_until_not_locked(cmd, sudo=True, env=INSTALL_CMD_ENV, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 72, in checked_call_until_not_locked
return self.wait_until_not_locked(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 80, in wait_until_not_locked
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/apt-get -q -o Dpkg::Options::=--force-confdef --allow-unauthenticated --assume-yes install 'oozie-2-3-.*'' returned 100. Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package oozie-2-3-.*
E: Couldn't find any package by regex 'oozie-2-3-.*'
stdout: /var/lib/ambari-agent/data/output-3460.txt
2016-10-28 08:03:58,249 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-10-28 08:03:58,250 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-10-28 08:03:58,250 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-10-28 08:03:58,271 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-10-28 08:03:58,271 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-10-28 08:03:58,295 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-10-28 08:03:58,295 - Ensuring that hadoop has the correct symlink structure
2016-10-28 08:03:58,295 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-10-28 08:03:58,296 - Group['spark'] {}
2016-10-28 08:03:58,297 - Group['hadoop'] {}
2016-10-28 08:03:58,297 - Group['users'] {}
2016-10-28 08:03:58,297 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,298 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,298 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-10-28 08:03:58,299 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,299 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-10-28 08:03:58,300 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,300 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,301 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-10-28 08:03:58,301 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,302 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,303 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,303 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,304 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,304 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,305 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-10-28 08:03:58,305 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-28 08:03:58,306 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-10-28 08:03:58,316 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-10-28 08:03:58,317 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-10-28 08:03:58,317 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-10-28 08:03:58,318 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-10-28 08:03:58,322 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-10-28 08:03:58,322 - Group['hdfs'] {}
2016-10-28 08:03:58,322 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-10-28 08:03:58,323 - Directory['/etc/hadoop'] {'mode': 0755}
2016-10-28 08:03:58,334 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-10-28 08:03:58,335 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-10-28 08:03:58,350 - Repository['HDP-2.4'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.4.0.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '{{package_type}} {{base_url}} {{components}}', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-10-28 08:03:58,354 - File['/tmp/tmpgzSjgM'] {'content': 'deb http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.4.0.0 HDP main'}
2016-10-28 08:03:58,355 - Writing File['/tmp/tmpgzSjgM'] because contents don't match
2016-10-28 08:03:58,355 - File['/tmp/tmpKdatW3'] {'content': StaticFile('/etc/apt/sources.list.d/HDP.list')}
2016-10-28 08:03:58,356 - Writing File['/tmp/tmpKdatW3'] because contents don't match
2016-10-28 08:03:58,358 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu12', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '{{package_type}} {{base_url}} {{components}}', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-10-28 08:03:58,360 - File['/tmp/tmpgWh4hC'] {'content': 'deb http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu12 HDP-UTILS main'}
2016-10-28 08:03:58,360 - Writing File['/tmp/tmpgWh4hC'] because contents don't match
2016-10-28 08:03:58,360 - File['/tmp/tmpy240ZL'] {'content': StaticFile('/etc/apt/sources.list.d/HDP-UTILS.list')}
2016-10-28 08:03:58,360 - Writing File['/tmp/tmpy240ZL'] because contents don't match
2016-10-28 08:03:58,362 - Package['unzip'] {}
2016-10-28 08:03:58,382 - Skipping installation of existing package unzip
2016-10-28 08:03:58,383 - Package['curl'] {}
2016-10-28 08:03:58,402 - Skipping installation of existing package curl
2016-10-28 08:03:58,402 - Package['hdp-select'] {}
2016-10-28 08:03:58,423 - Skipping installation of existing package hdp-select
2016-10-28 08:03:58,695 - Package['zip'] {}
2016-10-28 08:03:58,719 - Skipping installation of existing package zip
2016-10-28 08:03:58,720 - Package['mysql-connector-java'] {}
2016-10-28 08:03:58,739 - Skipping installation of existing package mysql-connector-java
2016-10-28 08:03:58,739 - Package['extjs'] {}
2016-10-28 08:03:58,759 - Skipping installation of existing package extjs
2016-10-28 08:03:58,759 - Package['oozie-2-3-.*'] {}
2016-10-28 08:03:58,779 - Installing package oozie-2-3-.* ('/usr/bin/apt-get -q -o Dpkg::Options::=--force-confdef --allow-unauthenticated --assume-yes install 'oozie-2-3-.*'')
2016-10-28 08:03:59,244 - Execution of '['/usr/bin/apt-get', '-q', '-o', 'Dpkg::Options::=--force-confdef', '--allow-unauthenticated', '--assume-yes', 'install', u'oozie-2-3-.*']' returned 100. Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package oozie-2-3-.*
E: Couldn't find any package by regex 'oozie-2-3-.*'
2016-10-28 08:03:59,245 - Failed to install package oozie-2-3-.*. Executing `/usr/bin/apt-get update -qq`
2016-10-28 08:04:32,864 - Retrying to install package oozie-2-3-.*
Doing this manually, the version 2.4.0.0-169 is available:
# apt-get install oozie\*
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'oozie-2-4-0-0-169' for regex 'oozie*'
Note, selecting 'oozie-server' for regex 'oozie*'
Note, selecting 'oozie-2-4-0-0-169-server' for regex 'oozie*'
Note, selecting 'oozie-client' for regex 'oozie*'
Note, selecting 'oozie' for regex 'oozie*'
Note, selecting 'oozie-2-4-0-0-169-client' for regex 'oozie*'
The following extra packages will be installed:
bigtop-tomcat
The following NEW packages will be installed:
bigtop-tomcat oozie oozie-2-4-0-0-169 oozie-2-4-0-0-169-client
oozie-2-4-0-0-169-server oozie-client oozie-server
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 672 MB of archives.
After this operation, 789 MB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
How can I tell Ambari to use the current version? 🙂 Klaus
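A minimal cross-check of this mismatch, assuming a Debian/Ubuntu node with the HDP repos already configured (both commands only read state, they change nothing):

# list the Oozie packages the configured repos actually provide
apt-cache search oozie
# show the HDP stack versions registered on this node
hdp-select versions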
Labels: Apache Ambari, Apache Oozie
10-27-2016
05:49 AM
Hi Josh, I found this in the Tracer log file:
2016-10-27 07:23:48,988 [start.Main] ERROR: Thread 'tracer' died.
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /tracers/trace-
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.accumulo.fate.zookeeper.ZooUtil.putEphemeralSequential(ZooUtil.java:463)
at org.apache.accumulo.fate.zookeeper.ZooReaderWriter.putEphemeralSequential(ZooReaderWriter.java:99)
at org.apache.accumulo.tracer.TraceServer.registerInZooKeeper(TraceServer.java:297)
at org.apache.accumulo.tracer.TraceServer.<init>(TraceServer.java:235)
at org.apache.accumulo.tracer.TraceServer.main(TraceServer.java:339)
at org.apache.accumulo.tracer.TracerExecutable.execute(TracerExecutable.java:33)
at org.apache.accumulo.start.Main$1.run(Main.java:93)
at java.lang.Thread.run(Thread.java:745)
After deleting the Tracer ZooKeeper directory (rmr /tracers), the Tracer process had no problems starting. Many thanks for your support. 🙂 Klaus
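For reference, a minimal sketch of the cleanup step described above, assuming the HDP zookeeper-client wrapper and the default client port:

# open a ZooKeeper shell against the ensemble
zookeeper-client -server localhost:2181
# inside the shell: recursively delete the Tracer registration node;
# it is recreated the next time the Tracer starts
rmr /tracers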
10-26-2016
01:15 PM
Hello, I have a fresh installation of Accumulo, and my problem is that the Tracer process terminated with:
2016-10-26 14:56:50,314 [start.Main] ERROR: Thread 'tracer' died.
org.apache.accumulo.core.client.AccumuloException: Internal error processing waitForFateOperation
at org.apache.accumulo.core.client.impl.TableOperationsImpl.doFateOperation(TableOperationsImpl.java:303)
at org.apache.accumulo.core.client.impl.TableOperationsImpl.doFateOperation(TableOperationsImpl.java:261)
at org.apache.accumulo.core.client.impl.TableOperationsImpl.doTableFateOperation(TableOperationsImpl.java:1427)
at org.apache.accumulo.core.client.impl.TableOperationsImpl.create(TableOperationsImpl.java:188)
at org.apache.accumulo.core.client.impl.TableOperationsImpl.create(TableOperationsImpl.java:155)
at org.apache.accumulo.tracer.TraceServer.<init>(TraceServer.java:211)
at org.apache.accumulo.tracer.TraceServer.main(TraceServer.java:339)
at org.apache.accumulo.tracer.TracerExecutable.execute(TracerExecutable.java:33)
at org.apache.accumulo.start.Main$1.run(Main.java:93)
No idea why. Could someone help, please? 🙂 Klaus
Labels: Apache Accumulo
08-25-2016
07:02 AM
Hi Robert, I have no logs from the TaskManagers in the log dir. I played a bit with the heap.mb size of the TaskManagers; after entering 4096, the TaskManagers started. Thanks for your willingness to help. 🙂 Klaus
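For reference, the change described above would look roughly like this in conf/flink-conf.yaml (key names as in Flink 1.1.x; the slot count line is shown only for context and is an assumption about the setup):

# conf/flink-conf.yaml
taskmanager.heap.mb: 4096
taskmanager.numberOfTaskSlots: 1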
08-24-2016
03:18 PM
Hello, I have a similar issue to the one discussed here. These are the settings: I see no TaskManagers. The overview shows:
Task Managers: 0
Task Slots: 0
Available Task Slots: 0
Running the example word count job I receive:
/usr/apache/flink-1.1.1/bin# /usr/apache/flink-1.1.1/bin/flink run /usr/apache/flink-1.1.1/examples/streaming/WordCount.jar
Cluster configuration: Standalone cluster with JobManager at dedcm4229/10.79.210.78:6130
Using address dedcm4229:6130 to connect to JobManager.
JobManager web interface address http://dedcm4229:8081
Starting execution of program
Executing WordCount example with default input data set.
Use --input to specify file input.
Printing result to stdout. Use --output to specify output path.
Submitting job with JobID: 47fee79c80eba58333eec5c3c3ee1cf0. Waiting for job completion.
08/24/2016 16:32:07 Job execution switched to status RUNNING.
08/24/2016 16:32:07 Source: Collection Source -> Flat Map(1/1) switched to SCHEDULED
08/24/2016 16:32:07 Job execution switched to status FAILING.
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not enough free slots available to run the job. You can decrease the operator parallelism or increase the number of slots per TaskManager in the configuration. Task to schedule: < Attempt #0 (Source: Collection Source -> Flat Map (1/1)) @ (unassigned) - [SCHEDULED] > with groupID < 963af48f2c5d35ff2fcaa1bc235543a7 > in sharing group < SlotSharingGroup [7168183d09cf33bacf5ac595e608bd87, 963af48f2c5d35ff2fcaa1bc235543a7] >. Resources available to scheduler: Number of instances=0, total number of slots=0, available slots=0
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:256)
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
at org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:306)
at org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:454)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:326)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:741)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1332)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1291)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1291)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
08/24/2016 16:32:07 Source: Collection Source -> Flat Map(1/1) switched to CANCELED
08/24/2016 16:32:07 Keyed Aggregation -> Sink: Unnamed(1/1) switched to CANCELED
08/24/2016 16:32:07 Job execution switched to status FAILED.
------------------------------------------------------------
The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The program execution failed: Job execution failed.
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:413)
at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:92)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:389)
at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:68)
at org.apache.flink.streaming.examples.wordcount.WordCount.main(WordCount.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:509)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:331)
at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:777)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:253)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1005)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1048)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not enough free slots available to run the job. You can decrease the operator parallelism or increase the number of slots per TaskManager in the configuration. Task to schedule: < Attempt #0 (Source: Collection Source -> Flat Map (1/1)) @ (unassigned) - [SCHEDULED] > with groupID < 963af48f2c5d35ff2fcaa1bc235543a7 > in sharing group < SlotSharingGroup [7168183d09cf33bacf5ac595e608bd87, 963af48f2c5d35ff2fcaa1bc235543a7] >. Resources available to scheduler: Number of instances=0, total number of slots=0, available slots=0
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:256)
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
at org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:306)
at org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:454)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:326)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:741)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:1332)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1291)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:1291)
... 9 more
Could someone have a look at the log above and give advice on how to fix this issue, please? 🙂 Klaus
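A quick way to narrow this down, sketched under the assumption of the standalone Flink 1.1.1 layout shown above: check whether any TaskManager JVM is running at all before resubmitting the job.

# on each worker host: is a TaskManager process up?
jps | grep TaskManager
# if not, check the TaskManager logs and restart the standalone cluster
ls /usr/apache/flink-1.1.1/log/
/usr/apache/flink-1.1.1/bin/stop-cluster.sh
/usr/apache/flink-1.1.1/bin/start-cluster.sh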
Labels: Apache Flink
08-24-2016
09:39 AM
At my site this works:
ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init
After init, no further issues were found. Many thanks for your detailed help. 🙂 Klaus
08-23-2016
11:18 AM
Additionally, I've run:
tables -l
accumulo.metadata => !0
accumulo.replication => +rep
accumulo.root => +r
trace => 1
Checking tablets: the scanning gets stuck.
/usr/bin/accumulo admin checkTablets
2016-08-23 12:19:18,521 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
*** Looking for offline tablets ***
Scanning zookeeper
+r<<@(null,de-hd-cluster.data-node3.com:9997[25669407cc8000b],de-hd-cluster.data-node3.com:9997[25669407cc8000b]) is ASSIGNED_TO_DEAD_SERVER #walogs:1
*** Looking for missing files ***
Scanning : accumulo.root (-inf,~ : [] 9223372036854775807 false)
The stats told me:
/usr/bin/accumulo org.apache.accumulo.test.GetMasterStats
2016-08-23 11:15:21,623 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
State: NORMAL
Goal State: NORMAL
Unassigned tablets: 1
Dead tablet servers count: 0
Tablet Servers
Name: de-hd-cluster.data-node3.com:9997
Ingest: 0.00
Last Contact: 1471943720583
OS Load Average: 0.12
Queries: 0.00
Time Difference: 1.3
Total Records: 0
Lookups: 0
Recoveries: 0
🙂 Klaus
08-23-2016
08:18 AM
Hello Josh, thanks for your quick reply. I thought that the peaks in the memory usage had something to do with the table issue. On the Accumulo monitor page I see now: In recent logs I see only this warning:
[fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
After a restart I see:
2016-08-23 09:19:30,318 [replication.WorkDriver] DEBUG: Sleeping 30000 ms before next work assignment
2016-08-23 09:19:36,776 [master.Master] DEBUG: Finished gathering information from 1 servers in 0.00 seconds
2016-08-23 09:19:36,776 [master.Master] DEBUG: not balancing because there are unhosted tablets: 1
2016-08-23 09:19:43,087 [recovery.RecoveryManager] DEBUG: Unable to initate log sort for hdfs://de-hd-cluster.name-node.com:8020/apps/accumulo/data/wal/de-hd-cluster.data-node3.com+9997/91ece971-7485-4acf-aa7f-dcde00fafce9: java.io.FileNotFoundException: File does not exist: /apps/accumulo/data/wal/de-hd-cluster.data-node3.com+9997/91ece971-7485-4acf-aa7f-dcde00fafce9
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:2835)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:733)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:663)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
2016-08-23 09:19:43,611 [state.ZooTabletStateStore] DEBUG: root tablet logSet [hdfs://de-hd-cluster.name-node.com:8020/apps/accumulo/data/wal/de-hd-cluster.data-node3.com+9997/91ece971-7485-4acf-aa7f-dcde00fafce9]
2016-08-23 09:19:43,611 [state.ZooTabletStateStore] DEBUG: Returning root tablet state: +r<<@(null,de-hd-cluster.data-node3.com:9997[25669407cc8000b],de-hd-cluster.data-node3.com:9997[25669407cc8000b])
2016-08-23 09:19:43,611 [recovery.RecoveryManager] DEBUG: Recovering hdfs://de-hd-cluster.name-node.com:8020/apps/accumulo/data/wal/de-hd-cluster.data-node3.com+9997/91ece971-7485-4acf-aa7f-dcde00fafce9 to hdfs://de-hd-cluster.name-node.com:8020/apps/accumulo/data/recovery/91ece971-7485-4acf-aa7f-dcde00fafce9
2016-08-23 09:19:43,614 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.master.recovery.HadoopLogCloser
2016-08-23 09:19:43,615 [recovery.RecoveryManager] INFO : Starting recovery of hdfs://de-hd-cluster.name-node.com:8020/apps/accumulo/data/wal/de-hd-cluster.data-node3.com+9997/91ece971-7485-4acf-aa7f-dcde00fafce9 (in : 300s), tablet +r<< holds a reference
2016-08-23 09:19:43,615 [master.Master] DEBUG: [Root Table]: scan time 0.00 seconds
2016-08-23 09:19:43,615 [master.Master] DEBUG: [Root Table] sleeping for 60.00 seconds
2016-08-23 09:19:46,779 [master.Master] DEBUG: Finished gathering information from 1 servers in 0.00 seconds
2016-08-23 09:19:46,779 [master.Master] DEBUG: not balancing because there are unhosted tablets: 1
2016-08-23 09:19:56,782 [master.Master] DEBUG: Finished gathering information from 1 servers in 0.00 seconds
2016-08-23 09:19:56,782 [master.Master] DEBUG: not balancing because there are unhosted tablets: 1
2016-08-23 09:20:00,318 [replication.WorkDriver] DEBUG: Sleeping 30000 ms before next work assignment
2016-08-23 09:20:06,785 [master.Master] DEBUG: Finished gathering information from 1 servers in 0.00 seconds
2016-08-23 09:20:06,785 [master.Master] DEBUG: not balancing because there are unhosted tablets: 1
2016-08-23 09:20:16,788 [master.Master] DEBUG: Finished gathering information from 1 servers in 0.00 seconds
2016-08-23 09:20:16,788 [master.Master] DEBUG: not balancing because there are unhosted tablets: 1
2016-08-23 09:24:44,144 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.master.recovery.HadoopLogCloser
2016-08-23 09:24:44,144 [recovery.RecoveryManager] INFO : Starting recovery of hdfs://de-hd-cluster.name-node.com:8020/apps/accumulo/data/wal/de-hd-cluster.data-node3.com+9997/91ece971-7485-4acf-aa7f-dcde00fafce9 (in : 300s), tablet +r<< holds a reference
Here are the tables in HDFS:
root@NameNode:~# hadoop fs -ls -R /apps/accumulo/data/tables/
drwxr-xr-x - accumulo hdfs 0 2016-04-19 14:16 /apps/accumulo/data/tables/!0
drwxr-xr-x - accumulo hdfs 0 2016-08-08 13:33 /apps/accumulo/data/tables/!0/default_tablet
-rw-r--r-- 3 accumulo hdfs 871 2016-08-08 13:33 /apps/accumulo/data/tables/!0/default_tablet/F0002flt.rf
drwxr-xr-x - accumulo hdfs 0 2016-08-10 10:57 /apps/accumulo/data/tables/!0/table_info
-rw-r--r-- 3 accumulo hdfs 933 2016-08-08 10:14 /apps/accumulo/data/tables/!0/table_info/A0002bqu.rf
-rw-r--r-- 3 accumulo hdfs 933 2016-08-08 10:19 /apps/accumulo/data/tables/!0/table_info/A0002bqx.rf
-rw-r--r-- 3 accumulo hdfs 122 2016-08-10 10:57 /apps/accumulo/data/tables/!0/table_info/A004gpfm.rf_tmp
-rw-r--r-- 3 accumulo hdfs 688 2016-08-08 13:33 /apps/accumulo/data/tables/!0/table_info/F0002fl0.rf
drwxr-xr-x - accumulo hdfs 0 2016-04-19 14:16 /apps/accumulo/data/tables/+r
drwxr-xr-x - accumulo hdfs 0 2016-08-10 10:57 /apps/accumulo/data/tables/+r/root_tablet
-rw-r--r-- 3 accumulo hdfs 974 2016-08-08 10:19 /apps/accumulo/data/tables/+r/root_tablet/A0002bqz.rf
-rw-r--r-- 3 accumulo hdfs 16 2016-08-10 10:57 /apps/accumulo/data/tables/+r/root_tablet/A004gpfl.rf_tmp
-rw-r--r-- 3 accumulo hdfs 754 2016-08-10 10:13 /apps/accumulo/data/tables/+r/root_tablet/C004eodm.rf
-rw-r--r-- 3 accumulo hdfs 364 2016-08-10 10:18 /apps/accumulo/data/tables/+r/root_tablet/F004ew4v.rf
-rw-r--r-- 3 accumulo hdfs 364 2016-08-10 10:29 /apps/accumulo/data/tables/+r/root_tablet/F004fdch.rf
-rw-r--r-- 3 accumulo hdfs 364 2016-08-10 10:34 /apps/accumulo/data/tables/+r/root_tablet/F004fn1f.rf
-rw-r--r-- 3 accumulo hdfs 364 2016-08-10 10:39 /apps/accumulo/data/tables/+r/root_tablet/F004ftix.rf
-rw-r--r-- 3 accumulo hdfs 364 2016-08-10 10:44 /apps/accumulo/data/tables/+r/root_tablet/F004g3af.rf
-rw-r--r-- 3 accumulo hdfs 364 2016-08-10 10:54 /apps/accumulo/data/tables/+r/root_tablet/F004glat.rf
drwxr-xr-x - accumulo hdfs 0 2016-04-19 14:16 /apps/accumulo/data/tables/+rep
drwxr-xr-x - accumulo hdfs 0 2016-04-19 14:16 /apps/accumulo/data/tables/+rep/default_tablet
drwxr-xr-x - accumulo hdfs 0 2016-04-19 14:18 /apps/accumulo/data/tables/1
drwxr-xr-x - accumulo hdfs 0 2016-08-10 10:57 /apps/accumulo/data/tables/1/default_tablet
-rw-r--r-- 3 accumulo hdfs 2524936 2016-07-23 23:11 /apps/accumulo/data/tables/1/default_tablet/A0002041.rf
-rw-r--r-- 3 accumulo hdfs 1502864 2016-07-29 11:17 /apps/accumulo/data/tables/1/default_tablet/C00024ci.rf
-rw-r--r-- 3 accumulo hdfs 899175 2016-08-03 18:50 /apps/accumulo/data/tables/1/default_tablet/C00028be.rf
-rw-r--r-- 3 accumulo hdfs 1428721 2016-08-07 13:21 /apps/accumulo/data/tables/1/default_tablet/C0002av5.rf
-rw-r--r-- 3 accumulo hdfs 211245 2016-08-08 05:11 /apps/accumulo/data/tables/1/default_tablet/C0002bj6.rf
-rw-r--r-- 3 accumulo hdfs 30474 2016-08-08 07:42 /apps/accumulo/data/tables/1/default_tablet/C0002bn1.rf
-rw-r--r-- 3 accumulo hdfs 50286 2016-08-08 10:03 /apps/accumulo/data/tables/1/default_tablet/C0002bqh.rf
-rw-r--r-- 3 accumulo hdfs 122 2016-08-10 10:57 /apps/accumulo/data/tables/1/default_tablet/C004gpfk.rf_tmp
-rw-r--r-- 3 accumulo hdfs 905 2016-08-08 13:28 /apps/accumulo/data/tables/1/default_tablet/F0002byb.rf
The command
root@hdp-accumulo-instance> scan -np -t accumulo.root
hangs. Do you know how I can get rid of this table? 🙂 Klaus
08-22-2016
07:14 AM
Hello, I receive the following messages from Accumulo every 10 seconds (monitor_de-hd-cluster.name-node.com.debug.log):
2016-08-22 07:43:14,841 [impl.ThriftScanner] DEBUG: Failed to locate tablet for table : !0 row : ~err_
2016-08-22 07:43:23,167 [monitor.Monitor] INFO : Failed to obtain problem reports
java.lang.RuntimeException: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:161)
at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:252)
at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:310)
at org.apache.accumulo.monitor.Monitor.fetchData(Monitor.java:346)
at org.apache.accumulo.monitor.Monitor$1.run(Monitor.java:486)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:230)
at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:80)
at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:151)
... 6 more
2016-08-22 07:43:23,510 [impl.ThriftScanner] DEBUG: Failed to locate tablet for table : !0 row : ~err_
2016-08-22 07:43:26,533 [monitor.Monitor] INFO : Failed to obtain problem reports
java.lang.RuntimeException: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:161)
at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:252)
at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:310)
at org.apache.accumulo.monitor.Monitor.fetchData(Monitor.java:346)
at org.apache.accumulo.monitor.Monitor$1.run(Monitor.java:486)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:230)
at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:80)
at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:151)
... 6 more
After stopping Accumulo, the alternating memory usage was gone. The cluster is not used by anyone and has nothing to do. Attached are all debug log files after a restart of Accumulo. Could anyone assist? 🙂 Klaus
Labels: Apache Accumulo, Apache Hadoop
08-11-2016
10:03 AM
1 Kudo
Hello, in this HDP version Spark 1.6.0.2.4 was installed during cluster installation. Now we want to play with SAP VORA 1.2, but sadly it works with Spark 1.5.1 or 1.5.2 only. I believe I have 2 options: 1) Uninstall Spark 1.6 and install the 1.5.2 version. Problem: where can I download a package ready to use for Ambari? 2) Install it manually in a different directory (e.g. /usr/apache/spark-1.5.2) from http://d3kbcqa49mib13.cloudfront.net/spark-1.5.2-bin-hadoop2.4.tgz. Problem: how to set up a 1.5.2 environment with no effects on the installed 1.6 version? 🙂 Klaus
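For option 2, a minimal sketch of a side-by-side install that leaves the HDP Spark 1.6 untouched; the HADOOP_CONF_DIR path is an assumption based on a standard HDP layout:

cd /usr/apache
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.5.2-bin-hadoop2.4.tgz
tar xzf spark-1.5.2-bin-hadoop2.4.tgz
# set SPARK_HOME only in the shell that should use 1.5.2, so the 1.6 install is unaffected
export SPARK_HOME=/usr/apache/spark-1.5.2-bin-hadoop2.4
export HADOOP_CONF_DIR=/etc/hadoop/conf
$SPARK_HOME/bin/spark-shell --master yarn-client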
05-26-2016
12:00 PM
Which clients are affected is not the issue here. As you described, this is documented in Ambari very well. The problem was starting the Accumulo shell in /usr/hdp/2.4.0.0-169/accumulo/bin on the NameNode (hosting Master, Monitor, Tracer and GC):
root@NameNode:/usr/hdp/2.4.0.0-169/accumulo/bin# accumulo
JAVA_HOME is not set or is not a directory. Please make sure it's set globally or in conf/accumulo-env.sh
Starting it on the DataNode hosting the TServer, I was able to start the shell (both nodes have no JAVA_HOME entry in the environment) and to set the values to 0, meaning the replication value from HDFS is used.
root@hdp-accumulo-instance> config -t accumulo.metadata -f table.file.replication
SCOPE | NAME | VALUE
default | table.file.replication .. | 0
root@hdp-accumulo-instance> config -t accumulo.metadata -f table.file.replication
SCOPE | NAME | VALUE
default | table.file.replication .. | 0
Finishing with hdfs dfs -setrep -w 3 / the problem was solved. Many thanks for your support. 🙂 Klaus
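For reference, setting (rather than only reading) the property in the Accumulo shell uses -s; a sketch of the commands behind the output above:

# 0 means: fall back to the HDFS default replication
config -t accumulo.metadata -s table.file.replication=0
# verify
config -t accumulo.metadata -f table.file.replication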
05-25-2016
07:40 AM
Hello Billie, sounds good, but Ambari has installed Accumulo and has not set any environment variables needed by the Accumulo shell script (/usr/hdp/2.4.0.0-169/accumulo/bin/accumulo). Could I set the parameters with Ambari, e.g. in "custom accumulo-site"?
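A minimal sketch of what the shell script checks for; the JDK path is an assumption, and in Ambari this would typically go into the accumulo-env template rather than accumulo-site:

# conf/accumulo-env.sh
export JAVA_HOME=/usr/lib/jvm/java   # adjust to the JDK path on your nodes (assumption)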
05-24-2016
08:51 AM
Hello Sagar! I reviewed this already. That blog concerns mapred. As I wrote, the setrep command reduces the replicas of existing files, but not of any newly incoming ones. 🙂 Klaus
05-24-2016
08:36 AM
Hello, I have 1 NameNode and 3 DataNodes using the default dfs.replication (3). "hdfs fsck /" shows this example output:
........
/apps/accumulo/data/tables/!0/table_info/A0000ncg.rf: Under replicated BP-1501447354-10.79.210.78-1461068133478:blk_1073774243_33440. Target Replicas is 5 but found 3 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
.
.............................................................................................
/user/accumulo/.Trash/Current/apps/accumulo/data/tables/!0/table_info/A0000ncf.rf: Under replicated BP-1501447354-10.79.210.78-1461068133478:blk_1073774242_33439. Target Replicas is 5 but found 3 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
..........
After "hdfs dfs -setrep -w 3 /" the messages are gone, but after a while they are shown again. How and where can I define that Accumulo uses 3 replicas, or is it a config issue of HDFS? 🙂 Klaus
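A quick check, sketched: confirm which default replication the configuration on the Accumulo nodes actually carries, since a client-side value of 5 would explain the fsck output above.

# effective default replication from the client configuration
hdfs getconf -confKey dfs.replication
# re-check the under-replicated blocks after changing settings
hdfs fsck / | grep -i "under replicated" | head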
Labels: Apache Accumulo, Apache Hadoop
04-22-2016
04:31 AM
I solved the problem by releasing (dhclient -r) and renewing (dhclient) the IP address. @all: Thanks for your support. 🙂 Klaus
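For completeness, the two steps as commands; the interface name eth0 is an assumption:

dhclient -r eth0   # release the current lease
dhclient eth0      # request a new one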
04-22-2016
03:53 AM
Hi drussel, as I wrote, I want to access Ambari from the VM, and I found no preinstalled local GUI/web browser on it. It is clear that I can't access Ambari from my local PC because of the loopback address being used. Why did Hortonworks configure this VM that way, and not like the other VMs offered by Hortonworks, which use DHCP?
04-21-2016
12:20 PM
In the above tutorial, the Ambari UI of the virtual machine HDP-Atlas-Ranger-TP should be available at http://127.0.0.1:8080/. This is a problem because the VM has no GUI preinstalled, or it is not stated in the tutorial. Regards Klaus
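One hedged workaround, assuming SSH into the VM works and its IP is reachable from the host: tunnel the loopback-bound port to the local PC.

# on the local PC; <vm-ip> is a placeholder for the VM's address
ssh -L 8080:127.0.0.1:8080 root@<vm-ip>
# then open http://localhost:8080/ in the local browser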
Labels: Hortonworks Data Platform (HDP)
03-17-2016
04:44 AM
1 Kudo
Hello Benjamin, many thanks for your explanations. I will forward them to the Hyper-V admin. 🙂 Klaus
03-16-2016
06:49 AM
2 Kudos
Hello, I'm new to Hadoop and I have a design question: We plan to install Hadoop under Ubuntu on Hyper-V machines. The Hyper-V admin defined a default memory allocation of 1 GB and a maximum allocation of 32 GB. Is the Hadoop framework able to allocate the additional 31 GB if needed? Regards Klaus
Labels: Apache Hadoop
03-10-2016
07:54 AM
1 Kudo
OK, I found that my password containing an underscore was the problem (character encoding?).
03-10-2016
06:59 AM
1 Kudo
Hello, I have the same problem. First I logged on to the sandbox and had to change the default password "hadoop" to my preferred one. Logon is possible. Next I tried to log on via the web shell (ip-adr:4200), but the password was not accepted. It is also not possible via PuTTY on port 22. SSH inside the sandbox is possible. The access rights: Any ideas? 🙂 Klaus
03-02-2016
11:46 PM
Why were the previously uploaded images in this thread deleted? 🙂 Klaus
03-02-2016
11:32 PM
Hi Cy, I started from scratch now with 12 GB RAM and the error was gone. The only entry in "6.start-embedded-db.log" is now:
* Cloudera manager database started
But the express wizard procedure (localhost:7180/cmf/express-wizard/welcome) failed. I used the standard selections and only changed the number of simultaneous installations from 10 to 1. The cloudera-scm-agent.out: Using the help hint: The supposedly missing config file exists: The cloudera-scm-agent.log:
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO SCM Agent Version: 5.6.0
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Agent Protocol Version: 4
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Using Host ID: 8fc65a36-0bde-4bd6-9814-7aed456bb8df
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Using directory: /run/cloudera-scm-agent
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Using supervisor binary path: /usr/lib/cmf/agent/src/cmf/../../build/env/bin/supervisord
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Neither verify_cert_file nor verify_cert_dir are configured. Not performing validation of server certificates in HTTPS communication. These options can be configured in this agent's config.ini file to enable certificate validation.
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Agent Logging Level: INFO
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO No command line vars
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Missing database jar: /usr/share/java/mysql-connector-java.jar (normal, if you're not using this database type)
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Missing database jar: /usr/share/java/oracle-connector-java.jar (normal, if you're not using this database type)
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Found database jar: /usr/share/cmf/lib/postgresql-9.0-801.jdbc4.jar
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Agent starting as pid 8937 user root(0) group root(0).
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent WARNING Expected mode 0751 for /run/cloudera-scm-agent but was 0755
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Created /run/cloudera-scm-agent/cgroups
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Chmod'ing /run/cloudera-scm-agent/cgroups to 0751
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Found cgroups subsystem: cpu
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO cgroup pseudofile /tmp/tmp1TrXNT/cpu.rt_runtime_us does not exist, skipping
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Found cgroups subsystem: cpuacct
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Found cgroups subsystem: blkio
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Found cgroups subsystem: memory
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Created /run/cloudera-scm-agent/cgroups/memory
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Created /run/cloudera-scm-agent/cgroups/cpu
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Created /run/cloudera-scm-agent/cgroups/cpuacct
[03/Mar/2016 07:17:05 +0000] 8937 MainThread cgroups INFO Created /run/cloudera-scm-agent/cgroups/blkio
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Found cgroups capabilities: {'has_memory': True, 'default_memory_limit_in_bytes': 8796093022207, 'default_memory_soft_limit_in_bytes': 8796093022207, 'writable_cgroup_dot_procs': True, 'default_cpu_rt_runtime_us': -1, 'has_cpu': True, 'default_blkio_weight': 1000, 'default_cpu_shares': 1024, 'has_cpuacct': True, 'has_blkio': True}
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Setting up supervisord event monitor.
[03/Mar/2016 07:17:05 +0000] 8937 MainThread filesystem_map INFO Monitored nodev filesystem types: ['nfs', 'nfs4', 'tmpfs']
[03/Mar/2016 07:17:05 +0000] 8937 MainThread filesystem_map INFO Using timeout of 2.000000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread filesystem_map INFO Using join timeout of 0.100000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread filesystem_map INFO Using tolerance of 60.000000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread filesystem_map INFO Local filesystem types whitelist: ['ext2', 'ext3', 'ext4']
[03/Mar/2016 07:17:05 +0000] 8937 MainThread kt_renewer INFO Agent wide credential cache set to /run/cloudera-scm-agent/krb5cc_cm_agent_0
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Using metrics_url_timeout_seconds of 30.000000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Using task_metrics_timeout_seconds of 5.000000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Using max_collection_wait_seconds of 10.000000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread metrics INFO Importing tasktracker metric schema from file /usr/lib/cmf/agent/src/cmf/monitor/tasktracker/schema.json
[03/Mar/2016 07:17:05 +0000] 8937 MainThread ntp_monitor INFO Using timeout of 2.000000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread dns_names INFO Using timeout of 30.000000
[03/Mar/2016 07:17:05 +0000] 8937 MainThread __init__ INFO Created DNS monitor.
[03/Mar/2016 07:17:05 +0000] 8937 MainThread stacks_collection_manager INFO Using max_uncompressed_file_size_bytes: 5242880
[03/Mar/2016 07:17:05 +0000] 8937 MainThread __init__ INFO Importing metric schema from file /usr/lib/cmf/agent/src/cmf/monitor/schema.json
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Supervised processes will add the following to their environment (in addition to the supervisor's env): {'CDH_PARQUET_HOME': '/usr/lib/parquet', 'JSVC_HOME': '/usr/libexec/bigtop-utils', 'CMF_PACKAGE_DIR': '/usr/lib/cmf/service', 'CDH_HADOOP_BIN': '/usr/bin/hadoop', 'MGMT_HOME': '/usr/share/cmf', 'CDH_IMPALA_HOME': '/usr/lib/impala', 'CDH_YARN_HOME': '/usr/lib/hadoop-yarn', 'CDH_HDFS_HOME': '/usr/lib/hadoop-hdfs', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games', 'CDH_HUE_PLUGINS_HOME': '/usr/lib/hadoop', 'CM_STATUS_CODES': u'STATUS_NONE HDFS_DFS_DIR_NOT_EMPTY HBASE_TABLE_DISABLED HBASE_TABLE_ENABLED JOBTRACKER_IN_STANDBY_MODE YARN_RM_IN_STANDBY_MODE', 'KEYTRUSTEE_KP_HOME': '/usr/share/keytrustee-keyprovider', 'CLOUDERA_ORACLE_CONNECTOR_JAR': '/usr/share/java/oracle-connector-java.jar', 'CDH_SQOOP2_HOME': '/usr/lib/sqoop2', 'KEYTRUSTEE_SERVER_HOME': '/usr/lib/keytrustee-server', 'CDH_MR2_HOME': '/usr/lib/hadoop-mapreduce', 'HIVE_DEFAULT_XML': '/etc/hive/conf.dist/hive-default.xml', 'CLOUDERA_POSTGRESQL_JDBC_JAR': '/usr/share/cmf/lib/postgresql-9.0-801.jdbc4.jar', 'CDH_KMS_HOME': '/usr/lib/hadoop-kms', 'CDH_HBASE_HOME': '/usr/lib/hbase', 'CDH_SQOOP_HOME': '/usr/lib/sqoop', 'WEBHCAT_DEFAULT_XML': '/etc/hive-webhcat/conf.dist/webhcat-default.xml', 'CDH_OOZIE_HOME': '/usr/lib/oozie', 'CDH_ZOOKEEPER_HOME': '/usr/lib/zookeeper', 'CDH_HUE_HOME': '/usr/lib/hue', 'CLOUDERA_MYSQL_CONNECTOR_JAR': '/usr/share/java/mysql-connector-java.jar', 'CDH_HBASE_INDEXER_HOME': '/usr/lib/hbase-solr', 'CDH_MR1_HOME': '/usr/lib/hadoop-0.20-mapreduce', 'CDH_SOLR_HOME': '/usr/lib/solr', 'CDH_PIG_HOME': '/usr/lib/pig', 'CDH_SENTRY_HOME': '/usr/lib/sentry', 'CDH_CRUNCH_HOME': '/usr/lib/crunch', 'CDH_LLAMA_HOME': '/usr/lib/llama/', 'CDH_HTTPFS_HOME': '/usr/lib/hadoop-httpfs', 'CDH_HADOOP_HOME': '/usr/lib/hadoop', 'CDH_HIVE_HOME': '/usr/lib/hive', 'CDH_HCAT_HOME': '/usr/lib/hive-hcatalog', 'CDH_KAFKA_HOME': '/usr/lib/kafka', 'CDH_SPARK_HOME': '/usr/lib/spark', 'TOMCAT_HOME': '/usr/lib/bigtop-tomcat', 'CDH_FLUME_HOME': '/usr/lib/flume-ng'}
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO To override these variables, use /etc/cloudera-scm-agent/config.ini. Environment variables for CDH locations are not used when CDH is installed from parcels.
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Created /run/cloudera-scm-agent/process
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Chmod'ing /run/cloudera-scm-agent/process to 0751
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Created /run/cloudera-scm-agent/supervisor
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Chmod'ing /run/cloudera-scm-agent/supervisor to 0751
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Created /run/cloudera-scm-agent/supervisor/include
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent INFO Chmod'ing /run/cloudera-scm-agent/supervisor/include to 0751
[03/Mar/2016 07:17:05 +0000] 8937 MainThread agent ERROR Failed to connect to previous supervisor.
Traceback (most recent call last):
File "/usr/lib/cmf/agent/src/cmf/agent.py", line 1660, in find_or_start_supervisor
self.configure_supervisor_clients()
File "/usr/lib/cmf/agent/src/cmf/agent.py", line 1907, in configure_supervisor_clients
supervisor_options.realize(args=["-c", os.path.join(self.supervisor_dir, "supervisord.conf")])
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 1578, in realize
Options.realize(self, *arg, **kw)
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 312, in realize
self.process_config()
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 320, in process_config
self.process_config_file(do_usage)
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 355, in process_config_file
self.usage(str(msg))
File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 143, in usage
self.exit(2)
SystemExit: 2
[03/Mar/2016 07:17:05 +0000] 8937 MainThread tmpfs INFO Successfully mounted tmpfs at /run/cloudera-scm-agent/process
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Trying to connect to newly launched supervisor (Attempt 1)
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Supervisor version: 3.0
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Successfully connected to supervisor
[03/Mar/2016 07:17:07 +0000] 8937 MainThread status_server INFO Using maximum impala profile bundle size of 1073741824 bytes.
[03/Mar/2016 07:17:07 +0000] 8937 MainThread status_server INFO Using maximum stacks log bundle size of 1073741824 bytes.
[03/Mar/2016 07:17:07 +0000] 8937 MainThread _cplogging INFO [03/Mar/2016:07:17:07] ENGINE Bus STARTING
[03/Mar/2016 07:17:07 +0000] 8937 MainThread _cplogging INFO [03/Mar/2016:07:17:07] ENGINE Started monitor thread '_TimeoutMonitor'.
[03/Mar/2016 07:17:07 +0000] 8937 MainThread _cplogging INFO [03/Mar/2016:07:17:07] ENGINE Serving on ubuntu:9000
[03/Mar/2016 07:17:07 +0000] 8937 MainThread _cplogging INFO [03/Mar/2016:07:17:07] ENGINE Bus STARTED
[03/Mar/2016 07:17:07 +0000] 8937 MainThread __init__ INFO New monitor: (<cmf.monitor.host.HostMonitor object at 0x7f1c1734df90>,)
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Setting default socket timeout to 30
[03/Mar/2016 07:17:07 +0000] 8937 MonitorDaemon-Scheduler __init__ INFO Monitor ready to report: ('HostMonitor',)
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Using parcels directory from server provided value: /opt/cloudera/parcels
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Created /opt/cloudera/parcels
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Chowning /opt/cloudera/parcels to root (0) root (0)
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Chmod'ing /opt/cloudera/parcels to 0755
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Created /opt/cloudera/parcel-cache
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Chowning /opt/cloudera/parcel-cache to root (0) root (0)
[03/Mar/2016 07:17:07 +0000] 8937 MainThread agent INFO Chmod'ing /opt/cloudera/parcel-cache to 0755
[03/Mar/2016 07:17:07 +0000] 8937 MainThread parcel INFO Agent does create users/groups and apply file permissions
[03/Mar/2016 07:17:07 +0000] 8937 MainThread downloader INFO Downloader path: /opt/cloudera/parcel-cache
[03/Mar/2016 07:17:07 +0000] 8937 MainThread parcel_cache INFO Using /opt/cloudera/parcel-cache for parcel cache
[03/Mar/2016 07:17:09 +0000] 8937 MainThread agent INFO Active parcel list updated; recalculating component info.
[03/Mar/2016 07:17:09 +0000] 8937 MainThread version_detector INFO Identified java component java6 with full version JAVA_HOME=/usr/lib/jvm/j2sdk1.6-oracle java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) for requested version 6.
[03/Mar/2016 07:17:09 +0000] 8937 MainThread version_detector INFO Identified java component java7 with full version JAVA_HOME=/usr/lib/jvm/java-7-oracle-cloudera java version "1.7.0_67" Java(TM) SE Runtime Environment (build 1.7.0_67-b01) Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode) for requested version 7.
[03/Mar/2016 07:17:12 +0000] 8937 Monitor-HostMonitor throttling_logger ERROR Failed to collect NTP metrics
Traceback (most recent call last):
File "/usr/lib/cmf/agent/src/cmf/monitor/host/ntp_monitor.py", line 37, in collect
result, stdout, stderr = self._subprocess_with_timeout(args, self._timeout)
File "/usr/lib/cmf/agent/src/cmf/monitor/host/ntp_monitor.py", line 30, in _subprocess_with_timeout
return subprocess_with_timeout(args, timeout)
File "/usr/lib/cmf/agent/src/cmf/subprocess_timeout.py", line 49, in subprocess_with_timeout
p = subprocess.Popen(**kwargs)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
[03/Mar/2016 07:17:37 +0000] 8937 DnsResolutionMonitor throttling_logger INFO Using java location: '/usr/lib/jvm/java-7-oracle-cloudera/bin/java'.
[03/Mar/2016 07:19:10 +0000] 8937 MainThread version_detector INFO Identified java component java6 with full version JAVA_HOME=/usr/lib/jvm/j2sdk1.6-oracle java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) for requested version 6.
[03/Mar/2016 07:19:10 +0000] 8937 MainThread version_detector INFO Identified java component java7 with full version JAVA_HOME=/usr/lib/jvm/java-7-oracle-cloudera java version "1.7.0_67" Java(TM) SE Runtime Environment (build 1.7.0_67-b01) Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode) for requested version 7.
[03/Mar/2016 07:21:10 +0000] 8937 MainThread version_detector INFO Identified java component java6 with full version JAVA_HOME=/usr/lib/jvm/j2sdk1.6-oracle java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) for requested version 6.
Again, I need help. 🙂 Klaus
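Regarding the "Failed to collect NTP metrics ... OSError: [Errno 2] No such file or directory" traceback above: that error pattern usually means the NTP query tool the monitor tries to spawn is not installed. A hedged sketch for Ubuntu:

# install and start ntp so the agent's NTP monitor has a binary to call (assumption)
apt-get install ntp
service ntp start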
03-01-2016
10:38 PM
I added more memory to the VM. The error still exists, but the installation was successful.
/usr/share/cmf/bin/initialize_embedded_db.sh: line 275: let: Unable to find amount of RAM on the system, giving up: syntax error in expression (error token is "to find amount of RAM on the system, giving up").
/usr/share/cmf/bin/initialize_embedded_db.sh: line 276: [: : integer expression expected.
* Cloudera manager database started
Thank you for your support. 🙂 Klaus
... View more
03-01-2016
10:20 PM
I used cloudera-manager-installer.bin from http://archive.cloudera.com/cm5/installer/latest/. Yes, the message comes from /var/log/cloudera-manager-installer/6.start-embedded-db.log. Here is the install procedure: These are the existing log files. Here is the content of the log files; unfortunately it was partly in German because of the system language.
0.check-selinux.log
sh: 1: /usr/sbin/selinuxenabled: not found
1.install-repo-pkg.log
Selecting previously unselected package cloudera-manager-repository.
(Reading database ... 199341 files and directories currently installed.)
Preparing to unpack .../cloudera-manager-repository_5.0_all.deb ...
Unpacking cloudera-manager-repository (5.0) ...
Setting up cloudera-manager-repository (5.0) ...
gpg: keyring `/etc/apt/secring.gpg' created
gpg: keyring `/etc/apt/trusted.gpg.d/cloudera-cm5.gpg' created
gpg: key 02A818DD: public key "Cloudera Apt Repository" imported
gpg: Total number processed: 1
gpg: imported: 1
2.refresh-repo.log
Ign http://archive.canonical.com trusty InRelease
Hit http://archive.cloudera.com trusty-cm5 InRelease
Ign http://extras.ubuntu.com trusty InRelease
Ign http://de.archive.ubuntu.com trusty InRelease
Hit http://archive.canonical.com trusty Release.gpg
Hit http://extras.ubuntu.com trusty Release.gpg
Hit http://archive.cloudera.com trusty-cm5/contrib Sources
Hit http://archive.canonical.com trusty Release
Hit http://extras.ubuntu.com trusty Release
Hit http://archive.cloudera.com trusty-cm5/contrib amd64 Packages
Hit http://archive.canonical.com trusty/partner Sources
Hit http://extras.ubuntu.com trusty/main Sources
Get:1 http://de.archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Hit http://archive.canonical.com trusty/partner amd64 Packages
Hit http://extras.ubuntu.com trusty/main amd64 Packages
Hit http://de.archive.ubuntu.com trusty-backports InRelease
Hit http://archive.canonical.com trusty/partner i386 Packages
Hit http://extras.ubuntu.com trusty/main i386 Packages
Hit http://archive.canonical.com trusty/partner Translation-en
Get:2 http://de.archive.ubuntu.com trusty-security InRelease [65.9 kB]
Hit http://de.archive.ubuntu.com trusty Release.gpg
Get:3 http://de.archive.ubuntu.com trusty-updates/main Sources [260 kB]
Get:4 http://de.archive.ubuntu.com trusty-updates/restricted Sources [5,352 B]
Get:5 http://de.archive.ubuntu.com trusty-updates/universe Sources [150 kB]
Get:6 http://de.archive.ubuntu.com trusty-updates/multiverse Sources [5,547 B]
Ign http://extras.ubuntu.com trusty/main Translation-de_DE
Ign http://extras.ubuntu.com trusty/main Translation-de
Ign http://extras.ubuntu.com trusty/main Translation-en_GB
Ign http://extras.ubuntu.com trusty/main Translation-en
Get:7 http://de.archive.ubuntu.com trusty-updates/main amd64 Packages [709 kB]
Ign http://archive.cloudera.com trusty-cm5/contrib Translation-de_DE
Ign http://archive.cloudera.com trusty-cm5/contrib Translation-de
Get:8 http://de.archive.ubuntu.com trusty-updates/restricted amd64 Packages [15.9 kB]
Ign http://archive.cloudera.com trusty-cm5/contrib Translation-en_GB
Ign http://archive.cloudera.com trusty-cm5/contrib Translation-en
Get:9 http://de.archive.ubuntu.com trusty-updates/universe amd64 Packages [338 kB]
Get:10 http://de.archive.ubuntu.com trusty-updates/multiverse amd64 Packages [13.2 kB]
Get:11 http://de.archive.ubuntu.com trusty-updates/main i386 Packages [688 kB]
Get:12 http://de.archive.ubuntu.com trusty-updates/restricted i386 Packages [15.6 kB]
Get:13 http://de.archive.ubuntu.com trusty-updates/universe i386 Packages [339 kB]
Get:14 http://de.archive.ubuntu.com trusty-updates/multiverse i386 Packages [13.4 kB]
Hit http://de.archive.ubuntu.com trusty-updates/main Translation-en
Hit http://de.archive.ubuntu.com trusty-updates/multiverse Translation-en
Hit http://de.archive.ubuntu.com trusty-updates/restricted Translation-en
Hit http://de.archive.ubuntu.com trusty-updates/universe Translation-en
Hit http://de.archive.ubuntu.com trusty-backports/main Sources
Hit http://de.archive.ubuntu.com trusty-backports/restricted Sources
Hit http://de.archive.ubuntu.com trusty-backports/universe Sources
Hit http://de.archive.ubuntu.com trusty-backports/multiverse Sources
Hit http://de.archive.ubuntu.com trusty-backports/main amd64 Packages
Hit http://de.archive.ubuntu.com trusty-backports/restricted amd64 Packages
Hit http://de.archive.ubuntu.com trusty-backports/universe amd64 Packages
Hit http://de.archive.ubuntu.com trusty-backports/multiverse amd64 Packages
Hit http://de.archive.ubuntu.com trusty-backports/main i386 Packages
Hit http://de.archive.ubuntu.com trusty-backports/restricted i386 Packages
Hit http://de.archive.ubuntu.com trusty-backports/universe i386 Packages
Hit http://de.archive.ubuntu.com trusty-backports/multiverse i386 Packages
Hit http://de.archive.ubuntu.com trusty-backports/main Translation-en
Hit http://de.archive.ubuntu.com trusty-backports/multiverse Translation-en
Hit http://de.archive.ubuntu.com trusty-backports/restricted Translation-en
Hit http://de.archive.ubuntu.com trusty-backports/universe Translation-en
Get:15 http://de.archive.ubuntu.com trusty-security/main Sources [105 kB]
Get:16 http://de.archive.ubuntu.com trusty-security/restricted Sources [4,035 B]
Get:17 http://de.archive.ubuntu.com trusty-security/universe Sources [33.3 kB]
Get:18 http://de.archive.ubuntu.com trusty-security/multiverse Sources [2,767 B]
Get:19 http://de.archive.ubuntu.com trusty-security/main amd64 Packages [427 kB]
Get:20 http://de.archive.ubuntu.com trusty-security/restricted amd64 Packages [13.0 kB]
Get:21 http://de.archive.ubuntu.com trusty-security/universe amd64 Packages [124 kB]
Get:22 http://de.archive.ubuntu.com trusty-security/multiverse amd64 Packages [4,990 B]
Get:23 http://de.archive.ubuntu.com trusty-security/main i386 Packages [400 kB]
Get:24 http://de.archive.ubuntu.com trusty-security/restricted i386 Packages [12.7 kB]
Get:25 http://de.archive.ubuntu.com trusty-security/universe i386 Packages [124 kB]
Get:26 http://de.archive.ubuntu.com trusty-security/multiverse i386 Packages [5,164 B]
Hit http://de.archive.ubuntu.com trusty-security/main Translation-en
Hit http://de.archive.ubuntu.com trusty-security/multiverse Translation-en
Hit http://de.archive.ubuntu.com trusty-security/restricted Translation-en
Hit http://de.archive.ubuntu.com trusty-security/universe Translation-en
Hit http://de.archive.ubuntu.com trusty Release
Hit http://de.archive.ubuntu.com trusty/main Sources
Hit http://de.archive.ubuntu.com trusty/restricted Sources
Hit http://de.archive.ubuntu.com trusty/universe Sources
Hit http://de.archive.ubuntu.com trusty/multiverse Sources
Hit http://de.archive.ubuntu.com trusty/main amd64 Packages
Hit http://de.archive.ubuntu.com trusty/restricted amd64 Packages
Hit http://de.archive.ubuntu.com trusty/universe amd64 Packages
Hit http://de.archive.ubuntu.com trusty/multiverse amd64 Packages
Hit http://de.archive.ubuntu.com trusty/main i386 Packages
Hit http://de.archive.ubuntu.com trusty/restricted i386 Packages
Hit http://de.archive.ubuntu.com trusty/universe i386 Packages
Hit http://de.archive.ubuntu.com trusty/multiverse i386 Packages
Hit http://de.archive.ubuntu.com trusty/main Translation-de
Hit http://de.archive.ubuntu.com trusty/main Translation-en_GB
Hit http://de.archive.ubuntu.com trusty/main Translation-en
Hit http://de.archive.ubuntu.com trusty/multiverse Translation-de
Hit http://de.archive.ubuntu.com trusty/multiverse Translation-en_GB
Hit http://de.archive.ubuntu.com trusty/multiverse Translation-en
Hit http://de.archive.ubuntu.com trusty/restricted Translation-de
Hit http://de.archive.ubuntu.com trusty/restricted Translation-en_GB
Hit http://de.archive.ubuntu.com trusty/restricted Translation-en
Hit http://de.archive.ubuntu.com trusty/universe Translation-de
Hit http://de.archive.ubuntu.com trusty/universe Translation-en_GB
Hit http://de.archive.ubuntu.com trusty/universe Translation-en
Ign http://de.archive.ubuntu.com trusty/main Translation-de_DE
Ign http://de.archive.ubuntu.com trusty/multiverse Translation-de_DE
Ign http://de.archive.ubuntu.com trusty/restricted Translation-de_DE
Ign http://de.archive.ubuntu.com trusty/universe Translation-de_DE
Fetched 3,941 kB in 6 s (609 kB/s)
Reading package lists...
3.install-oracle-j2sdk1.7.log
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  oracle-j2sdk1.7
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 142 MB of archives.
After this operation, 292 MB of additional disk space will be used.
Get:1 http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm/ trusty-cm5/contrib oracle-j2sdk1.7 amd64 1.7.0+update67-1 [142 MB]
Fetched 142 MB in 1 min 53 s (1,257 kB/s)
Selecting previously unselected package oracle-j2sdk1.7.
(Reading database ... 199345 files and directories currently installed.)
Preparing to unpack .../oracle-j2sdk1.7_1.7.0+update67-1_amd64.deb ...
Unpacking oracle-j2sdk1.7 (1.7.0+update67-1) ...
Setting up oracle-j2sdk1.7 (1.7.0+update67-1) ...

4.install-cloudera-manager-server.log
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
  cloudera-manager-daemons
The following NEW packages will be installed:
  cloudera-manager-daemons cloudera-manager-server
0 upgraded, 2 newly installed, 0 to remove and 2 not upgraded.
Need to get 512 MB of archives.
After this operation, 771 MB of additional disk space will be used.
Get:1 http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm/ trusty-cm5/contrib cloudera-manager-daemons all 5.6.0-1.cm560.p0.54~trusty-cm5 [512 MB]
Get:2 http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm/ trusty-cm5/contrib cloudera-manager-server all 5.6.0-1.cm560.p0.54~trusty-cm5 [7,590 B]
Fetched 512 MB in 43 s (11.7 MB/s)
Selecting previously unselected package cloudera-manager-daemons.
(Reading database ... 201471 files and directories currently installed.)
Preparing to unpack .../cloudera-manager-daemons_5.6.0-1.cm560.p0.54~trusty-cm5_all.deb ...
Unpacking cloudera-manager-daemons (5.6.0-1.cm560.p0.54~trusty-cm5) ...
Selecting previously unselected package cloudera-manager-server.
Preparing to unpack .../cloudera-manager-server_5.6.0-1.cm560.p0.54~trusty-cm5_all.deb ...
Unpacking cloudera-manager-server (5.6.0-1.cm560.p0.54~trusty-cm5) ...
Processing triggers for ureadahead (0.100.0-16) ...
ureadahead will be reprofiled on next reboot
Setting up cloudera-manager-daemons (5.6.0-1.cm560.p0.54~trusty-cm5) ...
Setting up cloudera-manager-server (5.6.0-1.cm560.p0.54~trusty-cm5) ...
Adding system startup for /etc/init.d/cloudera-scm-server ...
  /etc/rc0.d/K10cloudera-scm-server -> ../init.d/cloudera-scm-server
  /etc/rc1.d/K10cloudera-scm-server -> ../init.d/cloudera-scm-server
  /etc/rc6.d/K10cloudera-scm-server -> ../init.d/cloudera-scm-server
  /etc/rc2.d/S90cloudera-scm-server -> ../init.d/cloudera-scm-server
  /etc/rc3.d/S90cloudera-scm-server -> ../init.d/cloudera-scm-server
  /etc/rc4.d/S90cloudera-scm-server -> ../init.d/cloudera-scm-server
  /etc/rc5.d/S90cloudera-scm-server -> ../init.d/cloudera-scm-server
Processing triggers for ureadahead (0.100.0-16) ...

5.install-cloudera-manager-server-db-2.log
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
  libpq5 postgresql postgresql-9.3 postgresql-client-9.3
  postgresql-client-common postgresql-common
Suggested packages:
  oidentd ident-server locales-all postgresql-doc-9.3
The following NEW packages will be installed:
  cloudera-manager-server-db-2 libpq5 postgresql postgresql-9.3
  postgresql-client-9.3 postgresql-client-common postgresql-common
0 upgraded, 7 newly installed, 0 to remove and 2 not upgraded.
Need to get 3,694 kB of archives.
After this operation, 15.6 MB of additional disk space will be used.
Get:1 http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm/ trusty-cm5/contrib cloudera-manager-server-db-2 all 5.6.0-1.cm560.p0.54~trusty-cm5 [9,082 B]
Get:2 http://de.archive.ubuntu.com/ubuntu/ trusty-updates/main libpq5 amd64 9.3.11-0ubuntu0.14.04 [80.6 kB]
Get:3 http://de.archive.ubuntu.com/ubuntu/ trusty-updates/main postgresql-client-common all 154ubuntu1 [25.4 kB]
Get:4 http://de.archive.ubuntu.com/ubuntu/ trusty-updates/main postgresql-client-9.3 amd64 9.3.11-0ubuntu0.14.04 [783 kB]
Get:5 http://de.archive.ubuntu.com/ubuntu/ trusty-updates/main postgresql-common all 154ubuntu1 [103 kB]
Get:6 http://de.archive.ubuntu.com/ubuntu/ trusty-updates/main postgresql-9.3 amd64 9.3.11-0ubuntu0.14.04 [2,688 kB]
Get:7 http://de.archive.ubuntu.com/ubuntu/ trusty-updates/main postgresql all 9.3+154ubuntu1 [5,038 B]
Preconfiguring packages ...
Fetched 3,694 kB in 2 s (1,722 kB/s)
Selecting previously unselected package libpq5.
(Reading database ... 208390 files and directories currently installed.)
Preparing to unpack .../libpq5_9.3.11-0ubuntu0.14.04_amd64.deb ...
Unpacking libpq5 (9.3.11-0ubuntu0.14.04) ...
Selecting previously unselected package postgresql-client-common.
Preparing to unpack .../postgresql-client-common_154ubuntu1_all.deb ...
Unpacking postgresql-client-common (154ubuntu1) ...
Selecting previously unselected package postgresql-client-9.3.
Preparing to unpack .../postgresql-client-9.3_9.3.11-0ubuntu0.14.04_amd64.deb ...
Unpacking postgresql-client-9.3 (9.3.11-0ubuntu0.14.04) ...
Selecting previously unselected package postgresql-common.
Preparing to unpack .../postgresql-common_154ubuntu1_all.deb ...
Adding 'diversion of /usr/bin/pg_config to /usr/bin/pg_config.libpq-dev by postgresql-common'
Unpacking postgresql-common (154ubuntu1) ...
Selecting previously unselected package postgresql-9.3.
Preparing to unpack .../postgresql-9.3_9.3.11-0ubuntu0.14.04_amd64.deb ...
Unpacking postgresql-9.3 (9.3.11-0ubuntu0.14.04) ...
Selecting previously unselected package postgresql.
Preparing to unpack .../postgresql_9.3+154ubuntu1_all.deb ...
Unpacking postgresql (9.3+154ubuntu1) ...
Selecting previously unselected package cloudera-manager-server-db-2.
Preparing to unpack .../cloudera-manager-server-db-2_5.6.0-1.cm560.p0.54~trusty-cm5_all.deb ...
Unpacking cloudera-manager-server-db-2 (5.6.0-1.cm560.p0.54~trusty-cm5) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up libpq5 (9.3.11-0ubuntu0.14.04) ...
Setting up postgresql-client-common (154ubuntu1) ...
Setting up postgresql-client-9.3 (9.3.11-0ubuntu0.14.04) ...
update-alternatives: using /usr/share/postgresql/9.3/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode
Setting up postgresql-common (154ubuntu1) ...
Adding user postgres to group ssl-cert
Creating config file /etc/logrotate.d/postgresql-common with new version
Building PostgreSQL dictionaries from installed myspell/hunspell packages...
  de_at
  de_ch
  de_de
  en_au
  en_ca
  en_gb
  en_us
  en_za
Removing obsolete dictionary files:
 * No PostgreSQL clusters exist; see "man pg_createcluster"
Processing triggers for ureadahead (0.100.0-16) ...
Setting up postgresql-9.3 (9.3.11-0ubuntu0.14.04) ...
Creating new cluster 9.3/main ...
  config /etc/postgresql/9.3/main
  data   /var/lib/postgresql/9.3/main
  locale de_DE.UTF-8
  port   5432
update-alternatives: using /usr/share/postgresql/9.3/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode
 * Starting PostgreSQL 9.3 database server
   ...done.
Setting up postgresql (9.3+154ubuntu1) ...
Setting up cloudera-manager-server-db-2 (5.6.0-1.cm560.p0.54~trusty-cm5) ...
Adding system startup for /etc/init.d/cloudera-scm-server-db ...
  /etc/rc0.d/K11cloudera-scm-server-db -> ../init.d/cloudera-scm-server-db
  /etc/rc1.d/K11cloudera-scm-server-db -> ../init.d/cloudera-scm-server-db
  /etc/rc6.d/K11cloudera-scm-server-db -> ../init.d/cloudera-scm-server-db
  /etc/rc2.d/S79cloudera-scm-server-db -> ../init.d/cloudera-scm-server-db
  /etc/rc3.d/S79cloudera-scm-server-db -> ../init.d/cloudera-scm-server-db
  /etc/rc4.d/S79cloudera-scm-server-db -> ../init.d/cloudera-scm-server-db
  /etc/rc5.d/S79cloudera-scm-server-db -> ../init.d/cloudera-scm-server-db
Processing triggers for libc-bin (2.19-0ubuntu6.7) ...
Processing triggers for ureadahead (0.100.0-16) ...

6.start-embedded-db.log
/usr/share/cmf/bin/initialize_embedded_db.sh: line 275: let: Unable to find amount of RAM on the system, giving up: syntax error in expression (error token is "to find amount of RAM on the system, giving up")
/usr/share/cmf/bin/initialize_embedded_db.sh: line 276: [: : integer expression expected
pg_ctl: could not start server
Examine the log output.

🙂 Klaus
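If the embedded database refuses to start again, it can be exercised on its own through the init script the installer registered, and the PostgreSQL output that pg_ctl refers to can be read directly. A quick sketch (the db.log path is an assumption; adjust it to your installation):

# Re-run only the embedded-database step by hand to reproduce the failure.
sudo service cloudera-scm-server-db start
sudo service cloudera-scm-server-db status

# Read the PostgreSQL startup output that pg_ctl points to
# (log location is an assumption; check your installation).
sudo tail -n 50 /var/log/cloudera-scm-server/db.log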
... View more
03-01-2016
07:34 AM
Hello Cy, thank you for your quick response. The problem still exists, even when running the VM with 6 GB of RAM. 🙂 Klaus
... View more
02-29-2016
04:59 AM
I downloaded cloudera-manager-installer.bin today, following the quick start instructions:

wget http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin

When I run it, it fails with the following error:

Installation failed. cloudera-manager-server installation failed.
See /var/log/cloudera-manager-installer/

6.start-embedded-db.log
/usr/share/cmf/bin/initialize_embedded_db.sh: line 275: let: Unable to find amount of RAM on the system, giving up: syntax error in expression (error token is "to find amount of RAM on the system, giving up")
/usr/share/cmf/bin/initialize_embedded_db.sh: line 276: [: : integer expression expected
pg_ctl: could not start server
Examine the log output.

Examining /usr/share/cmf/bin shows the script is missing:

drwxr-xr-x 2 root root 4096 Feb 29 13:13 .
drwxr-xr-x 25 root root 20480 Feb 29 13:13 ..
-rwxr-xr-x 1 root root 4286 Feb 16 19:50 cmf-server
-rwxr-xr-x 1 root root 3515 Feb 16 20:05 gen_credentials_ad.sh
-rwxr-xr-x 1 root root 972 Feb 16 20:05 gen_credentials.sh
-rwxr-xr-x 1 root root 997 Feb 16 20:05 gen_tgt.sh
-rwxr-xr-x 1 root root 2547 Feb 16 20:05 import_credentials.sh
-rwxr-xr-x 1 root root 366 Feb 16 20:05 merge_credentials.sh

I'm using a fresh installation of Ubuntu Desktop 14.04 LTS x64 under VMware Player 12 with:
1 GB RAM
2 processors
20 GB hard disk

Any ideas what the problem is? Regards Klaus
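Two quick checks might help before retrying the installer. A sketch (which package ships initialize_embedded_db.sh is an assumption, so dpkg -S is used to search every installed package; the memory commands are standard):

# Ask dpkg which installed package, if any, ships the missing script;
# no output means the package that should provide it never unpacked it.
dpkg -S initialize_embedded_db.sh

# Confirm how much memory the VM actually has; 1 GB is far below the
# several GB a Cloudera Manager host needs, which is likely why the
# RAM check in the script aborts the embedded-database setup.
grep MemTotal /proc/meminfo
free -m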
... View more