Member since
08-15-2016
24
Posts
1
Kudos Received
0
Solutions
12-27-2019
11:36 PM
I would like to change my email as well.
11-17-2016
04:33 PM
Have you found a solution to this? I am facing the exact same issue. CDH 5.8, Kerberos with Single User Mode.
11-17-2016
03:34 PM
I have a Single User Mode installation of Cloudera Manager which works fine. However, after I enabled Kerberos and restarted the cluster, HDFS fails with:
Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.
Details:
STARTUP_MSG: build = http://github.com/cloudera/hadoop -r 042da8b868a212c843bcbf3594519dd26e816e79; compiled by 'jenkins' on 2016-07-12T23:03Z
STARTUP_MSG: java = 1.8.0_60
************************************************************/
2016-11-17 15:30:21,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-11-17 15:30:22,730 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/my.node.one.com@EXAMPLE.COM using keytab file hdfs.keytab
2016-11-17 15:30:22,897 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-11-17 15:30:22,991 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-11-17 15:30:22,991 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-11-17 15:30:22,997 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2016-11-17 15:30:23,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: File descriptor passing is enabled.
2016-11-17 15:30:23,001 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is my.node.one.com
2016-11-17 15:30:23,009 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1239)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1139)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:454)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2438)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2325)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2372)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2549)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2573)
2016-11-17 15:30:23,021 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-11-17 15:30:23,039 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at my.node.one.com/12.234.12.134
************************************************************/
I don't see anything in the documentation specific to HDFS.
The "dfs.datanode.http.address" is set to 1006
The "dfs.datanode.address" is set to 1004
The "dfs.data.transfer.protection" is empty.
It seems this happens because lower-numbered ports (below 1024) are not accessible to non-root users.
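For anyone hitting the same thing: as I read the error message, the alternative to privileged ports is SASL data transfer protection plus HTTPS. A sketch of the relevant hdfs-site.xml settings (the values are my assumption from the error message, not a verified fix):

```xml
<!-- SASL + TLS instead of privileged ports: move both DataNode ports above
     1023 so root is not required, enable data transfer protection, and
     require HTTPS for the web endpoint. -->
<property>
  <name>dfs.data.transfer.protection</name>
  <value>authentication</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:10004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:10006</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
```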
11-17-2016
10:41 AM
Is there any estimate on when the fix will be in? I have needed this for a couple of days now 😞
11-17-2016
10:21 AM
No, I am not saying that cloudera-manager-installer.bin installing the latest version of the CM server is a bug. I am saying that my original problem, installing the default single user mode provided by Installation Path A in version 5.9, may be a bug, because until now I have never had any issues setting up SUM using Path A with versions earlier than 5.9. CM Agent 5.9 fails with the error in my original post while inspecting the hosts, and I can never get past that step. The CM Server does work with CDH 5.8, but not with CM Agent 5.8.
11-17-2016
09:16 AM
This seems to be a bug with cloudera-scm-agent 5.9. In Installation Path A there is no way to choose a specific version of the Cloudera server and agent, and by default even the installer for version 5.8.0 installs server and agent version 5.9. Choosing a custom repo for 5.8 during the installation produces a complaint that the agent should be v5.9 since that is the server version, so that does not work either.
11-16-2016
07:45 PM
Any suggestions? I am still stuck here, despite following all the instructions on the documentation.
11-15-2016
07:44 PM
I am trying to install Cloudera Manager using Installation Path A with the default Single User Mode, following the instructions at:
https://www.cloudera.com/documentation/enterprise/5-8-x/topics/install_singleuser_reqts.html#xd_583c10bfdbd326ba--69adf108-1492ec0ce48--7ade
https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cm_ig_install_path_a.html#concept_vk4_psb_zm__section_idl_k1d_d5
The installation goes fine until it downloads the parcels. After that, when it tries to inspect the hosts, the host inspection fails on all hosts. In cloudera-scm-agent.log, I see this:
[15/Nov/2016 19:00:47 +0000] 3093 MainThread agent INFO Active parcel list updated; recalculating component info.
[15/Nov/2016 19:00:47 +0000] 3093 MainThread parcel INFO Loading parcel manifest for: CDH-5.8.0-1.cdh5.8.0.p0.42
[15/Nov/2016 19:02:01 +0000] 3093 CP Server Thread-7 _cplogging INFO 172.16.165.191 - - [15/Nov/2016:19:02:01] "GET /heartbeat HTTP/1.1" 200 2 "" "NING/1.0"
[15/Nov/2016 19:02:01 +0000] 3093 MainThread util INFO Using generic audit plugin for process cluster-host-inspector
[15/Nov/2016 19:02:01 +0000] 3093 MainThread util INFO Creating metadata plugin for process cluster-host-inspector
[15/Nov/2016 19:02:01 +0000] 3093 MainThread util INFO Using specific metadata plugin for process cluster-host-inspector
[15/Nov/2016 19:02:01 +0000] 3093 MainThread util INFO Using generic metadata plugin for process cluster-host-inspector
[15/Nov/2016 19:02:01 +0000] 3093 MainThread agent INFO [1-cluster-host-inspector] Instantiating process
[15/Nov/2016 19:02:01 +0000] 3093 MainThread process INFO [1-cluster-host-inspector] Updating process: True {}
[15/Nov/2016 19:02:01 +0000] 3093 MainThread agent ERROR Failed to activate {u'refresh_files': [], u'config_generation': 0, u'auto_restart': False, u'running': True, u'required_tags': [], u'one_off': True, u'special_file_info': [], u'group': u'root', u'id': 1, u'status_links': {}, u'name': u'cluster-host-inspector', u'extra_groups': [], u'run_generation': 1, u'start_timeout_seconds': None, u'environment': {}, u'optional_tags': [], u'program': u'mgmt/mgmt.sh', u'arguments': [u'inspector', u'input.json', u'output.json', u'DEFAULT'], u'parcels': {}, u'resources': [], u'user': u'root'}
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 1685, in handle_heartbeat_processes
new_process.update_heartbeat(raw, True)
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/process.py", line 282, in update_heartbeat
self.fs_update()
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/process.py", line 383, in fs_update
raise Exception("Non-root agent cannot execute process as user '%s'" % user)
Exception: Non-root agent cannot execute process as user 'root'
What could I be missing?
11-14-2016
03:30 PM
Is there a way to filter the list of hosts that are shown to the user for role assignments while adding a service (as a CSD) through the Cloudera Manager wizard?
11-07-2016
10:37 PM
I have a custom Parcel/CSD which I need to install on a kerberized cluster. I have followed the Cloudera documentation and added the principal in the CSD, and the ticket is correctly acquired for my user. This works fine for HDFS; however, it fails for HBase with the error:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=testuser/my.instance.com@EXAMPLE.COM, scope=default, params=[namespace=default,table=default:TEST,family=|NAME|AGE],action=CREATE)
How do I grant the required HBase access to my user programmatically through the CSD? Is there a way I can retrieve the HBase keytab along with the HBase principal programmatically in my control script or through the service descriptor and call kinit on it?
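In case it helps frame the question: outside the CSD I can grant access manually by authenticating with the HBase superuser credentials, roughly like this (the keytab path and principal below are placeholders for whatever Cloudera Manager generated on my cluster):

```shell
# Manual workaround sketch (not what I want long-term): authenticate as the
# HBase service principal, then grant my service user full rights.
kinit -kt hbase.keytab hbase/my.instance.com@EXAMPLE.COM
echo "grant 'testuser', 'RWXCA'" | hbase shell -n
```

What I am after is a supported way to do the equivalent from within the CSD itself.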
10-28-2016
02:49 PM
Yes indeed! Thanks.
10-28-2016
11:27 AM
Since oozie-site.xml is generated at runtime (in the /run/process/... folder), is there a way to download the latest version of this file using the Cloudera Manager API? If not, is there any other way to get this file?
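For context, the closest thing I found in the API is the client configuration endpoint, which returns a zip of config files; I am not sure it reflects the runtime-generated server copy under /run/process, which is why I am asking (the host and cluster names below are placeholders):

```shell
# Fetch the Oozie service client configuration as a zip via the CM REST API.
curl -u admin:admin -o oozie-conf.zip \
  "http://cm-host:7180/api/v13/clusters/Cluster1/services/oozie/clientConfig"
```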
10-25-2016
02:36 PM
How do I add a value to the following properties using the Cloudera Manager REST API? I can see these values in the respective service configuration in the UI, but when I do a REST call using
curl -X GET -u "admin:admin" -i http://<server>:7180/api/v13/clusters/Cluster1/services/<servicename>/config?view=full
(using "oozie" or "hbase" as the servicename), I see all other properties, but not these particular settings:
HBase service:
1. HBase Client Advanced Configuration Snippet (Safety Valve) for hbase-site.xml
2. RegionServer Logging Advanced Configuration Snippet (Safety Valve)
3. HBase Client Environment Advanced Configuration Snippet (Safety Valve) for hbase-env.sh
Oozie service:
1. Oozie Server Advanced Configuration Snippet (Safety Valve) for oozie-site.xml
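For reference, this is the shape of the PUT payload I would expect to use once I know the right property name and endpoint. The API name "oozie_config_safety_valve" is my guess at how the UI's "Oozie Server Advanced Configuration Snippet (Safety Valve) for oozie-site.xml" is named internally (API names differ from the display names), and my suspicion is that some of these snippets live under role config groups rather than the service-level /config, which could be why the GET above does not show them:

```python
import json

def safety_valve_body(name, xml_snippet):
    """Build the JSON payload for a CM API PUT .../config call that sets a
    single property. `name` is the API property name, `xml_snippet` the raw
    XML to inject into the generated config file."""
    return json.dumps({"items": [{"name": name, "value": xml_snippet}]})

# Hypothetical example: the property name and the oozie.service.* key below
# are placeholders, not verified against a live cluster.
body = safety_valve_body(
    "oozie_config_safety_valve",
    "<property><name>oozie.service.example</name><value>x</value></property>",
)
print(body)
```

The resulting string would then go in the body of a `curl -X PUT -H "Content-Type: application/json"` call against the appropriate config endpoint.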
09-30-2016
11:16 AM
Thanks @dspivak. One last question: I see that clusterdock created 2 nodes. Are these virtual nodes? My cluster itself has 2 nodes, each with 24 GB memory and 8 cores. So:
1) How do I know whether each of my physical nodes is being used as a clusterdock node?
2) Is there a way to map them?
3) Is there a way to create more than 2 nodes using clusterdock?
Again, thanks for your help!
09-29-2016
10:20 PM
So does that mean I cannot change the JDK version (similar to a full install) over here?
09-29-2016
05:04 PM
So is there a way to use JDK 1.8 with the Clusterdock setup?
09-22-2016
05:52 PM
Thanks for the info! Are there detailed steps for installation? I tried it, and it installs a container which, when I run it, starts a Python shell. It would be good to have some detailed installation documentation for this.
09-20-2016
02:35 PM
I noticed that whenever I install the docker quickstart, run Cloudera Manager on it, and then start the cluster, the following services show as red:
1) Host:
Clock Offset
2) HBase:
Bad : Master summary: quickstart.cloudera (Availability: Active, Health: Bad). This health test reflects the health of the active Master.
Bad : This RegionServer is not connected to its cluster.
Bad : This role's process exited. This role is supposed to be started.
3) HDFS:
Bad : NameNode summary: quickstart.cloudera (Availability: Active, Health: Bad). This health test reflects the health of the active NameNode.
4) YARN:
Bad : ResourceManager summary: quickstart.cloudera (Availability: Active, Health: Bad). This health test reflects the health of the active ResourceManager.
My virtual machine is running Ubuntu 14.04 and has 24 GB memory, 8 cores, and 250 GB storage. I am using the latest Cloudera docker image, pulled with
docker pull cloudera/quickstart:latest
Also, I am installing JDK 1.8 before I run
/home/cloudera/cloudera-manager --enterprise
To install the JDK: I set the JAVA_HOME variable to the new path, upgrade /etc/profile to reflect the change and also add it to /etc/default/bigtop-utils. Once the Cloudera Manager is up, I update the Hosts Configuration with this change as well.
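Concretely, the JDK steps described above amount to roughly the following inside the container (the JDK path here is an example, not necessarily the exact one I used):

```shell
# Point the shell and Hadoop's bigtop-utils at the JDK 1.8 install.
export JAVA_HOME=/usr/java/jdk1.8.0_101
echo "export JAVA_HOME=${JAVA_HOME}" >> /etc/profile
echo "export JAVA_HOME=${JAVA_HOME}" >> /etc/default/bigtop-utils
```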
Currently the latest docker image has CDH 5.7.
What can I do to have all services green when running the Cloudera quickstart?
09-15-2016
12:00 PM
Hadoop is installed. A simple "hadoop version" yields this: Hadoop 2.6.0-cdh5.8.0
Subversion http://github.com/cloudera/hadoop -r 042da8b868a212c843bcbf3594519dd26e816e79
Compiled by jenkins on 2016-07-12T23:02Z
Compiled with protoc 2.5.0
From source with checksum 2b6c31ecc19f118d6e1c822175716b5
This command was run using /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/jars/hadoop-common-2.6.0-cdh5.8.0.jar
Also, the last line clearly mentions hadoop-common.jar, which is the jar containing the class from the error mentioned in my question. Any other suggestions?
09-14-2016
03:42 PM
I have installed Cloudera Manager using Single User Mode with Installation Path A, using JDK 1.8. The installation went fine, but when I try to run "spark-shell", I get the error "java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream" as below:
org.apache.spark.launcher.app.Driver I Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
org.apache.spark.launcher.app.Driver I at org.apache.spark.deploy.SparkSubmitArguments.handle(SparkSubmitArguments.scala:394)
org.apache.spark.launcher.app.Driver I at org.apache.spark.launcher.SparkSubmitOptionParser.parse(SparkSubmitOptionParser.java:163)
org.apache.spark.launcher.app.Driver I at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:97)
org.apache.spark.launcher.app.Driver I at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:114)
org.apache.spark.launcher.app.Driver I at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
org.apache.spark.launcher.app.Driver I Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
org.apache.spark.launcher.app.Driver I at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
org.apache.spark.launcher.app.Driver I at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
org.apache.spark.launcher.app.Driver I at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
org.apache.spark.launcher.app.Driver I at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
org.apache.spark.launcher.app.Driver I ... 5 more
I tried sourcing /etc/conf/spark/spark-env.sh and /etc/hadoop/conf/hadoop-env.sh manually, but it did not help. What can I do to resolve this issue? Since I have installed Cloudera in Single User Mode, is there anything additional that I need to do here?
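In case it is useful to anyone answering: the workaround I have seen suggested elsewhere for this particular NoClassDefFoundError is to put the Hadoop jars on Spark's classpath explicitly before launching. I have not confirmed whether this is the right fix for a Single User Mode install:

```shell
# Make Spark pick up the Hadoop jars (which provide FSDataInputStream).
export SPARK_DIST_CLASSPATH="$(hadoop classpath)"
spark-shell
```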
08-26-2016
04:07 PM
Thanks! I did see that; I just wanted to make sure, because after I run the installer bin on the main machine, the "cloudera-scm" user is created there. On the rest of the machines/hosts I had to create this user manually myself, and I was wondering if I was doing the right thing.
08-26-2016
03:35 PM
In single user mode, do I need to create the user on all the machines/hosts in the cluster, or just on the single host where I am running the Cloudera installer?
08-24-2016
08:29 PM
I am trying to install Cloudera Manager in single user mode on Red Hat 6.7. I have followed the instructions on this page: http://www.cloudera.com/documentation/enterprise/5-5-x/topics/install_singleuser_reqts.html for the user cloudera-scm. The process goes fine until the very last step, where it fails while deploying the client configuration. The error in /var/log/cloudera-scm-server/cloudera-scm-server.log is as below:
2016-08-24 20:05:44,721 INFO metric-schema-updater:com.cloudera.cmon.components.MetricSchemaManager: Metric schema updated in PT4.181S.
2016-08-24 20:05:45,357 INFO metric-schema-updater:com.cloudera.cmon.MetricSchema: Switching metrics schema from If3DeUdFxVdGRlIA-zLyIC6G-o4 to GM94Fu2WwuQyRpjev21gQ5LIjbk
2016-08-24 20:05:48,585 INFO CMMetricsForwarder-0:com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder: Failed to send metrics.
java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy120.writeMetrics(Unknown Source)
at com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder.sendWithAvro(ClouderaManagerMetricsForwarder.java:325)
at com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder.sendMetrics(ClouderaManagerMetricsForwarder.java:312)
at com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder.run(ClouderaManagerMetricsForwarder.java:146)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.avro.AvroRemoteException: java.net.ConnectException: Connection refused
at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:88)
... 11 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1091)
at org.apache.avro.ipc.HttpTransceiver.writeBuffers(HttpTransceiver.java:71)
at org.apache.avro.ipc.Transceiver.transceive(Transceiver.java:58)
at org.apache.avro.ipc.Transceiver.transceive(Transceiver.java:72)
at org.apache.avro.ipc.Requestor.request(Requestor.java:147)
at org.apache.avro.ipc.Requestor.request(Requestor.java:101)
at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:72)
... 11 more
The installation works fine if I don't choose the Single User Mode installation. What am I missing? What does the above error indicate?
08-16-2016
03:21 PM
I have added a custom service through a Parcel + CSD. Is there any way I can provide additional options for the "supervisor" to use while starting or stopping my service's process (through the CSD, the Parcel, or something else)? Specifically, I want to pass "stopasgroup=true" when stopping the service.
08-15-2016
11:48 AM
I have a Parcel/CSD where I am using an IBM WebSphere Liberty Profile (WLP) server to start a service. The WLP server is part of the Parcel, and the Parcel's defines script exports the location of the WLP server's bin directory. The CSD defines a single role that starts the server using this path obtained from the Parcel; there are no other roles defined in the SDL. The server is started through the CSD in the foreground using $WLP_BIN/server run (http://www.ibm.com/support/knowledgecenter/SSAW57_liberty/com.ibm.websphere.wlp.nd.doc/ae/twlp_admin_script.html). The service is added correctly, starts successfully, and I can access the server just fine. So far so good.
Now, when I try to stop the service without a stopRunner defined in the SDL, Cloudera Manager shows that the service has stopped successfully, but I can still access the server. When I try to stop the service with a stopRunner defined in the SDL, the server stops fine, but the stop-service screen shows the status as failed, with the details as below:
Since I have only one role in my service, there is no other role to be shut down. Ideally no stopRunner should be necessary, but if I don't use it, Cloudera Manager does not actually stop the server despite showing the service as stopped. With the stopRunner, the server is stopped but Cloudera Manager shows an error as above. Can anyone advise on this?
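For reference, the stopRunner I tried is shaped roughly like this in the SDL (the script name and timeout are placeholders, not my exact descriptor):

```json
"stopRunner" : {
  "runner" : {
    "program" : "scripts/control.sh",
    "args" : [ "stop" ]
  },
  "timeout" : 180000
}
```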