Member since
07-08-2013
548
Posts
59
Kudos Received
53
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2594 | 08-17-2019 04:05 PM |
| | 2563 | 07-26-2019 12:18 AM |
| | 8840 | 07-17-2019 09:20 AM |
| | 5032 | 06-18-2018 03:38 AM |
| | 12620 | 04-06-2018 07:13 AM |
01-26-2016
08:24 AM
Is your CDH deployed using parcels or packages? As for the binaries:
- Packages: can you list your packages, i.e. on RHEL/CentOS: # rpm -qa | grep -i cdh (or grep -i impa)
- Parcels: you should be able to find them under /opt/cloudera/parcels/CDH (CM should have symlinked the necessary shell commands).
- Is it only impala-shell that is failing, or have you tested hadoop, hive, etc.? Can you link the documentation you followed to perform the install?
01-25-2016
02:26 PM
1 Kudo
- What browser are you using, and have you tried another browser?
- May I ask you to try performing a tcpdump on your CM host?

1. Open a terminal (ssh/putty) into your CM Server host.
2. Find your host IP and interface:
# ifconfig | grep -B1 "inet "
...
eth0 Link encap:Ethernet HWaddr DE:AD:BE:EF:DE:AD
inet addr:10.17.81.7 Bcast:10.17.81.255 Mask:255.255.254.0
...
My host IP is "inet addr:10.17.81.7" on interface eth0; use this information for steps 3 and 4.
3. # tcpdump -A -s 0 'tcp port 7180' -i eth0
4. Put the terminal window and your browser side by side, then open the URL http://[ip-of-your-cm]:7180/cmf/login - does the terminal window show any output/traffic?
01-25-2016
08:41 AM
Troubleshooting tip: On the node where you have installed Cloudera Manager Server, check whether it is actually running:
[linux shell]# service cloudera-scm-server status
[linux shell]# /usr/java/jdk1.7.0_67-cloudera/bin/jcmd | grep "cmf.Main"
11082 com.cloudera.server.cmf.Main
...
Check whether your CM is listening on port 7180:
[linux shell]# if (exec 6<>/dev/tcp/$(hostname)/7180) 2> /dev/null; then echo "CM _IS_ listening on port 7180"; fi
[linux shell]# netstat -lntup | grep LIST
...
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN 11082/java
...
If any of the above fails, check your server log files:
[linux shell]# less /var/log/cloudera-scm-server/cloudera-scm-server.log
> web UI shows blank page.
Are you behind a proxy?
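The /dev/tcp port check above can also be sketched in Python; this is a minimal sketch, and the localhost/7180 values are placeholders to adjust for your CM host:

```python
import socket

def is_port_listening(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder host/port: check whether CM is listening locally on 7180.
    host, port = "localhost", 7180
    state = "_IS_" if is_port_listening(host, port) else "is NOT"
    print(f"CM {state} listening on port {port}")
```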
01-25-2016
06:55 AM
Can you let me know what the output of your .../parcels endpoint is? For example, mine shows:
"items" : [ {
  "product" : "CDH",
  "version" : "5.5.1-1.cdh5.5.1.p0.11",
  "stage" : "ACTIVATED",
  "clusterRef" : {
    "clusterName" : "example"
  }
}, ...
This is equivalent to having the following service (HIVE, HDFS, ...) versions [1]:
"parcels": [
{
"parcelName": "CDH-5.5.1-1.cdh5.5.1.p0.11-xxxx.parcel",
"components": [
{
"pkg_version": "0.7.0+cdh5.5.1+0",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "bigtop-tomcat",
"version": "6.0.44-cdh5.5.1"
},
{
"pkg_version": "0.11.0+cdh5.5.1+77",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "crunch",
"version": "0.11.0-cdh5.5.1"
},
{
"pkg_version": "1.6.0+cdh5.5.1+29",
"pkg_release": "1.cdh5.5.1.p0.16",
"name": "flume-ng",
"version": "1.6.0-cdh5.5.1"
},
{
"pkg_version": "2.6.0+cdh5.5.1+924",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hadoop-0.20-mapreduce",
"version": "2.6.0-cdh5.5.1"
},
{
"pkg_version": "2.6.0+cdh5.5.1+924",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hadoop",
"version": "2.6.0-cdh5.5.1"
},
{
"pkg_version": "2.6.0+cdh5.5.1+924",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hadoop-hdfs",
"version": "2.6.0-cdh5.5.1"
},
{
"pkg_version": "2.6.0+cdh5.5.1+924",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hadoop-httpfs",
"version": "2.6.0-cdh5.5.1"
},
{
"pkg_version": "2.6.0+cdh5.5.1+924",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hadoop-kms",
"version": "2.6.0-cdh5.5.1"
},
{
"pkg_version": "2.6.0+cdh5.5.1+924",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hadoop-mapreduce",
"version": "2.6.0-cdh5.5.1"
},
{
"pkg_version": "2.6.0+cdh5.5.1+924",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hadoop-yarn",
"version": "2.6.0-cdh5.5.1"
},
{
"pkg_version": "1.0.0+cdh5.5.1+274",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hbase",
"version": "1.0.0-cdh5.5.1"
},
{
"pkg_version": "1.5+cdh5.5.1+57",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "hbase-solr",
"version": "1.5-cdh5.5.1"
},
{
"pkg_version": "1.1.0+cdh5.5.1+327",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hive",
"version": "1.1.0-cdh5.5.1"
},
{
"pkg_version": "1.1.0+cdh5.5.1+327",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "hive-hcatalog",
"version": "1.1.0-cdh5.5.1"
},
{
"pkg_version": "3.9.0+cdh5.5.1+333",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "hue",
"version": "3.9.0-cdh5.5.1"
},
{
"pkg_version": "2.3.0+cdh5.5.1+0",
"pkg_release": "1.cdh5.5.1.p0.17",
"name": "impala",
"version": "2.3.0-cdh5.5.1"
},
{
"pkg_version": "1.0.0+cdh5.5.1+116",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "kite",
"version": "1.0.0-cdh5.5.1"
},
{
"pkg_version": "1.0.0+cdh5.5.1+0",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "llama",
"version": "1.0.0-cdh5.5.1"
},
{
"pkg_version": "0.9+cdh5.5.1+26",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "mahout",
"version": "0.9-cdh5.5.1"
},
{
"pkg_version": "4.1.0+cdh5.5.1+223",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "oozie",
"version": "4.1.0-cdh5.5.1"
},
{
"pkg_version": "1.5.0+cdh5.5.1+176",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "parquet",
"version": "1.5.0-cdh5.5.1"
},
{
"pkg_version": "0.12.0+cdh5.5.1+72",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "pig",
"version": "0.12.0-cdh5.5.1"
},
{
"pkg_version": "1.5.1+cdh5.5.1+106",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "sentry",
"version": "1.5.1-cdh5.5.1"
},
{
"pkg_version": "4.10.3+cdh5.5.1+325",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "solr",
"version": "4.10.3-cdh5.5.1"
},
{
"pkg_version": "1.5.0+cdh5.5.1+94",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "spark",
"version": "1.5.0-cdh5.5.1"
},
{
"pkg_version": "1.99.5+cdh5.5.1+33",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "sqoop2",
"version": "1.99.5-cdh5.5.1"
},
{
"pkg_version": "1.4.6+cdh5.5.1+29",
"pkg_release": "1.cdh5.5.1.p0.14",
"name": "sqoop",
"version": "1.4.6-cdh5.5.1"
},
{
"pkg_version": "0.9.0+cdh5.5.1+17",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "whirr",
"version": "0.9.0-cdh5.5.1"
},
{
"pkg_version": "3.4.5+cdh5.5.1+91",
"pkg_release": "1.cdh5.5.1.p0.15",
"name": "zookeeper",
"version": "3.4.5-cdh5.5.1"
}
],
http://archive.cloudera.com/cdh5/parcels/5.5.1.11/manifest.json
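A minimal sketch of reading component versions out of a parcel entry with the manifest.json structure shown above (the snippet below reuses two components from that listing; the helper name is my own):

```python
import json

def component_versions(parcel: dict) -> dict:
    """Map each component's name to its version for one parcel entry
    from a CDH manifest.json (structure as shown above)."""
    return {c["name"]: c["version"] for c in parcel["components"]}

# Two components excerpted from the CDH 5.5.1 parcel listing above.
manifest_snippet = """
{
  "parcelName": "CDH-5.5.1-1.cdh5.5.1.p0.11-xxxx.parcel",
  "components": [
    {"pkg_version": "1.1.0+cdh5.5.1+327", "pkg_release": "1.cdh5.5.1.p0.15",
     "name": "hive", "version": "1.1.0-cdh5.5.1"},
    {"pkg_version": "2.3.0+cdh5.5.1+0", "pkg_release": "1.cdh5.5.1.p0.17",
     "name": "impala", "version": "2.3.0-cdh5.5.1"}
  ]
}
"""

if __name__ == "__main__":
    parcel = json.loads(manifest_snippet)
    for name, version in component_versions(parcel).items():
        print(f"{name}: {version}")
```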
01-25-2016
03:47 AM
> How can I get the version of various services (like Hive, Impala etc.) using the Cloudera Manager API?

This is not exposed directly in the CM API. However, you can determine the version of the various services either by going to their respective web consoles, or, if you would like to use the CM API, by viewing the ACTIVATED parcel [1] and associating it with the CDH release version [2] or against the manifest.json [3]. Alternatively, through the CM web UI you can execute the Host Inspector (CM > Hosts > [Host Inspector]).
[1] http://[your-cm-server]:7180/api/v11/clusters/[cluster-name]/parcels
[2] http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_vd_cdh_package_tarball.html
[3] example for latest CDH5: http://archive.cloudera.com/cdh5/parcels/latest/manifest.json
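A minimal sketch of option [1]: fetch the parcels endpoint and keep only ACTIVATED entries. The host, cluster name, and admin credentials are placeholders, and the JSON shape is assumed to match the /parcels output shown earlier in this thread:

```python
import base64
import json
from urllib.request import Request, urlopen

def activated_parcels(parcels_json: dict) -> list:
    """Return (product, version) pairs for parcels in the ACTIVATED stage,
    given the JSON body of /api/v11/clusters/<cluster>/parcels."""
    return [(p["product"], p["version"])
            for p in parcels_json.get("items", [])
            if p.get("stage") == "ACTIVATED"]

def fetch_parcels(cm_host: str, cluster: str, user: str, password: str) -> dict:
    """Fetch the parcels endpoint using HTTP basic auth (placeholder values)."""
    url = f"http://{cm_host}:7180/api/v11/clusters/{cluster}/parcels"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = Request(url, headers={"Authorization": f"Basic {token}"})
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Placeholder connection details -- adjust to your deployment.
    data = fetch_parcels("your-cm-server", "cluster-name", "admin", "admin")
    for product, version in activated_parcels(data):
        print(f"{product} {version}")
```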
01-22-2016
05:25 AM
1 Kudo
> Most of them, I do not found them in CM even with the good prefixes (in their corresponding service).

One thing to note: the Cloudera Manager team updates the configuration to match what is supported in CDH (HDFS, HBase, etc.). If a parameter is not exposed in your CM, it may be that your CM is dated or that the property is not yet available/supported for the respective service. For example, "dfs.namenode.avoid.read.stale.datanode" is exposed in CM 5.5.1 but not in CM 5.3.x.
01-22-2016
02:52 AM
2 Kudos
The configuration listed below appears to have no prefix such as dfs. or hbase.; by adding these I could see that they do exist in the Cloudera Manager configuration. If some are missing, you can use the Custom Configuration [1] feature (safety valves) that CM provides for such cases.

These are missing context; you should prefix them with dfs.:
namenode.avoid.read.stale.datanode = true
namenode.avoid.write.stale.datanode = true
namenode.stale.datanode.interval = 30000
client.read.shortcircuit.buffer.size = 131072

regionserver.checksum.verify = true
When you upgrade to CDH 5, HBase checksums are turned on by default. In CDH 4 you will have to modify the HBase safety valve; see [2].

These are context for core-site.xml; prefix with ipc. (for HBase use hbase., see [3]), and you will need to add them in the configuration safety valve of the respective service:
server.tcpnodelay = true
client.tcpnodelay = true

These are also missing the hbase. prefix; see [4]:
hregion.majorcompaction.jitter = 0.5 (½ week, default)
hstore.min.locality.to.skip.major.compact
master.wait.on.regionservers.timeout
ipc.client.tcpnodelay
zookeeper.useMulti [5]

[1] http://www.cloudera.com/documentation/enterprise/latest/topics/cm_mc_config_snippet.html
[2] http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hbase_new_features_and_changes.html
[3] http://archive-primary.cloudera.com/cdh5/cdh/5/hbase-1.0.0-cdh5.5.1/book.html#_ipc_configuration_conflicts_with_hadoop
[4] http://archive-primary.cloudera.com/cdh5/cdh/5/hbase-1.0.0-cdh5.5.1/book.html
[5] CDH 5.5.1 ships with ZooKeeper 3.4.5 (this is only applicable in conjunction with ZooKeeper 3.5)
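To illustrate the prefixing described above, a minimal sketch (the helper name is hypothetical, not part of any CM tooling):

```python
def with_prefix(prefix: str, props: dict) -> dict:
    """Prepend a configuration namespace (e.g. 'dfs.' or 'hbase.') to each
    short property name. Hypothetical helper, for illustration only."""
    return {f"{prefix}{name}": value for name, value in props.items()}

if __name__ == "__main__":
    # Short names from the post, restored to their full dfs.* form.
    short = {
        "namenode.avoid.read.stale.datanode": "true",
        "namenode.avoid.write.stale.datanode": "true",
        "namenode.stale.datanode.interval": "30000",
        "client.read.shortcircuit.buffer.size": "131072",
    }
    for full_name, value in with_prefix("dfs.", short).items():
        print(f"{full_name} = {value}")
```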
01-05-2016
04:42 AM
2 Kudos
You will need to pass the content of "/home/ec2-user/.ssh/id_rsa". Example:

id_rsa = ''
with open("/home/ec2-user/.ssh/id_rsa", 'r') as f:
    id_rsa = f.read()
cmd = cm.host_install(host_username, host_list,
                      private_key=id_rsa,
                      cm_repo_url=cm_repo_url)
11-27-2015
01:36 AM
It could be that the file is created (locked) by a user other than cloudera-scm. Have you attempted to delete it and restart cloudera-scm-server-db?
11-09-2015
02:27 PM
Happy to say that in our documentation we have noted that when a DataNode is decommissioned, the data blocks are not removed from the storage directories. You must delete the data manually.