Member since: 07-08-2013
Posts: 547
Kudos Received: 59
Solutions: 53
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1310 | 08-17-2019 04:05 PM |
| | 1551 | 07-26-2019 12:18 AM |
| | 5286 | 07-17-2019 09:20 AM |
| | 3398 | 06-18-2018 03:38 AM |
| | 7513 | 04-06-2018 07:13 AM |
05-20-2020
10:07 AM
> rpm -qlp http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15/RPMS/x86_64/cloudera-manager-daemons-5.15.0-1.cm5150.p0.62.el7.x86_64.rpm | grep scm_prepare_database.sh

I've noticed that the URL you've given uses /5.15/, which redirects to the latest maintenance release of CM 5.15 (currently 5.15.2), while the file you've given, "cloudera-manager-daemons-5.15.0-1.cm5150.p0.62.el7.x86_64.rpm", is a 5.15.0 package that is not present in that folder. For your rpm -qlp to work, point it at the /5.15.0/ path:

rpm -qlp http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15.0/RPMS/x86_64/cloudera-manager-daemons-5.15.0-1.cm5150.p0.62.el7.x86_64.rpm | grep scm_prepare_database.sh
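As a quick sanity check (sketch only; assumes the archive still serves a browsable directory listing), you can list the repository directory first to confirm the exact RPM filename before running rpm -qlp against it:

curl -s http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15.0/RPMS/x86_64/ | grep -o 'cloudera-manager-daemons[^"<]*\.rpm' | sort -u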
08-17-2019
04:05 PM
1 Kudo
This was reported as a bug and has already been fixed in CM 6.3.0 and 6.2.1 as part of OPSAPS-49111.
07-26-2019
02:18 PM
> Is there any option to find empty directory using HDFS command Directly?

You can find/list empty directories using 'org.apache.solr.hadoop.HdfsFindTool'. To check/test whether _a_ directory is empty, you can use -du or -test; please see the FileSystemShell docs [0].

test
Usage: hadoop fs -test -[defsz] URI
Options:
-d: if the path is a directory, return 0.
-e: if the path exists, return 0.
-f: if the path is a file, return 0.
-s: if the path is not empty, return 0.
-r: if the path exists and read permission is granted, return 0.
-w: if the path exists and write permission is granted, return 0.
-z: if the file is zero length, return 0.
Example:
hadoop fs -test -e filename

du
Usage: hadoop fs -du [-s] [-h] [-x] URI [URI ...]
Displays sizes of files and directories contained in the given directory, or the length of a file in case it's just a file.
Options:
The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files. Without the -s option, calculation is done by going 1-level deep from the given path.
The -h option will format file sizes in a "human-readable" fashion (e.g. 64.0m instead of 67108864)
The -x option will exclude snapshots from the result calculation. Without the -x option (default), the result is always calculated from all INodes, including all snapshots under the given path.
The du returns three columns with the following format:
size disk_space_consumed_with_all_replicas full_path_name
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code: Returns 0 on success and -1 on error.

[0] https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html
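For example, a quick way to confirm a single directory is empty (the path below is a placeholder) is to combine -test and -du -s:

# does the path exist and is it a directory?
hdfs dfs -test -d /user/some_dir && echo "is a directory"
# an empty directory reports a size of 0
hdfs dfs -du -s /user/some_dir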
07-26-2019
12:18 AM
@ADingmando Do you have a use case, or is there a particular issue that you'd like assistance with? The ODBC drivers are built per CDH and HDP release. I believe there should be release notes if you unpack the contents, so you can compare the fixes each ODBC driver contains.
07-24-2019
04:37 AM
> Note:- only want to delete zero size dir, not want to delete data contains dir.

One idea involves a two-step process: generate an empty-directory listing, then take the listing and -rmdir each entry (see the one-liner sketch after the reference link below).

# generate the empty directory listing
$ hadoop jar /opt/cloudera/parcels/CDH-*/jars/search-mr-*-job.jar org.apache.solr.hadoop.HdfsFindTool -find / -type d -empty
# produces output
...
hdfs://ns1/user/impala
hdfs://ns1/user/spark/applicationHistory
hdfs://ns1/user/spark/spark2ApplicationHistory
hdfs://ns1/user/sqoop2
...
# OPTIONAL: pick a dir and confirm that the dir is empty, eg:
$ hdfs dfs -du -s /user/impala
0 0 /user/impala
# remove the empty dir, eg: /user/impala
# hdfs dfs -rmdir /user/impala
https://www.cloudera.com/documentation/enterprise/5-16-x/topics/search_hdfsfindtool.html
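Once you've reviewed the listing and are happy with it, a possible one-liner (sketch only; the /tmp path is a placeholder, so test against a small sub-tree first) is to feed the HdfsFindTool output straight into -rmdir, which only removes directories that are already empty:

$ hadoop jar /opt/cloudera/parcels/CDH-*/jars/search-mr-*-job.jar org.apache.solr.hadoop.HdfsFindTool -find /tmp -type d -empty | xargs -r -n 20 hdfs dfs -rmdir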
07-17-2019
09:20 AM
1 Kudo
> few days ago my CM started successfully. But, after restarted my server, it raised that error when I started.

Unfortunately, without the logs or stack trace I won't be able to assess what caused the error.

Back to the intermixed "scm" and "metastore" database issue;

> "Now I tried to drop all the databases in mysql and create again, then run the following command respectively :"

I think there may have been some misunderstanding about the intention of the script "/opt/cloudera/cm/schema/scm_prepare_database.sh". As the documentation states [0], the script creates and configures a database for the Cloudera Manager Server itself. As seen in your comment, you've used it for various services (amon, rman, ... metastore, ... etc). Please do check in your RDBMS (MySQL) and confirm which database name CM should be connecting to, eg:

mysql> show databases;

and for each database confirm which one has the correct schema with the "CM_VERSION" table in it:

mysql> show tables;

or something like this:

SELECT table_name, table_schema AS dbname
FROM INFORMATION_SCHEMA.TABLES
WHERE table_name='CM_VERSION';

Note this is just a suggestion, to find the correct database that the CM server should connect to.

[0] https://www.cloudera.com/documentation/enterprise/latest/topics/prepare_cm_database.html#cmig_topic_5_2
07-16-2019
08:33 AM
Here's what I understand from the error:
1. I'm using CDH-6.2.0-1.cdh6.2.0, and I use mysql as metastore.
2. Then it tells me as below:
2019-07-16 15:06:38,947 ERROR main:com.cloudera.server.cmf.Main: Server failed.
...
at com.cloudera.server.cmf.Main.main(Main.java:233)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactoryBean': FactoryBean threw exception on object creation; nested exception is java.lang.RuntimeException: Unable to obtain CM release version.
...
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'metastore.CM_VERSION' doesn't exist
Cloudera Manager uses a database [0], and the database name information is under /etc/cloudera-scm-server/db.properties; normally the database name should be something other than "metastore". The Hive "metastore" also uses a database [1], and its database configuration settings are entered in the Cloudera Manager web UI; those configurations are saved in the Cloudera Manager database.

Now some discrepancies: 'metastore.CM_VERSION' (note "metastore") and the "Tables_in_metastore" listing you have provided ("Mysql database is as below:") show the "Hive Metastore Database" tables intermixed with the "Cloudera Manager Database" tables. This is not ideal, as it makes it difficult to narrow down issues when they happen. As you can see, the "Tables_in_metastore" listing doesn't include the table name 'metastore.CM_VERSION'.

Before we continue to look into why the "CM_VERSION" table is not listed in the "metastore" database, a few things to consider: was your CM server always configured to connect to the "metastore" database in the past, or was it connected to a different database? (Do # ls -ltr /etc/cloudera-scm-server/db.* - check for db.properties backups, open the files and check for the database name.) And also, are you aware of any recent changes in your cluster?

Regards,
Michalis

[0] https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_installing_configuring_dbs.html#cmig_topic_5
[1] https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hive_metastore_configure.html#topic_18_4
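As a concrete starting point (the database name "scm" and user "cm_user" below are placeholders; use whatever db.properties reports), you can read the connection settings straight out of db.properties and then check for CM_VERSION in that specific database:

# which database is the CM server actually configured to use?
grep -E '^com.cloudera.cmf.db.(type|host|name|user)=' /etc/cloudera-scm-server/db.properties

# then confirm CM_VERSION exists in that database
mysql -u cm_user -p -e "SHOW TABLES LIKE 'CM_VERSION';" scm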
06-18-2018
03:38 AM
1 Kudo
The location of "scm_prepare_database.sh" will change in CM 6.x; I will check with our documentation team, as this may be a doc issue. For CM 5.x please use the default location, "/usr/share/cmf/schema/scm_prepare_database.sh". You can check this by listing the rpm contents:

# rpm -qlp http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15/RPMS/x86_64/cloudera-manager-daemons-5.15.0-1.cm5150.p0.62.el7.x86_64.rpm | grep scm_prepare_database.sh
/usr/share/cmf/schema/scm_prepare_database.sh

Let me know if this helps.
04-10-2018
04:14 AM
@Youssef

Process: 29977 ExecStart=/etc/init.d/cloudera-scm-server-db start (code=exited, status=1/FAILURE)
avril 06 15:05:06 SRVSTGHOST runuser[29987]: pam_unix(runuser:session): session opened for user cloudera-scm by (uid=0)
avril 06 15:05:06 SRVSTGHOST cloudera-scm-server-db[29977]: initdb : nom de locale invalide (« en_US.UTF8 »)

The error _may be_ related to the embedded PostgreSQL our script is using, as this is _not_ a common message to see. The script (initialize_embedded_db.sh) attempts to initialise the embedded PostgreSQL using the 'initdb' command [3], and the command is failing with the message "initdb : nom de locale invalide (« en_US.UTF8 »)" (i.e. "invalid locale name").

- How is your Debian locale configured [0]?
- Have you made any locale changes recently [1]?
- Also review the topic "22.1.3. Problems" in [2] using the steps I've provided below.

e.g.: testing this command succeeded on my node - see [3] for what each parameter represents:

sudo -u cloudera-scm echo -n "password" > /tmp/generated_password.txt
sudo -u cloudera-scm initdb --pgdata "/some_tmp/folder/to/test/data" --encoding=UTF8 --locale=en_US.UTF8 --auth=md5 --username="cloudera-scm" --pwfile="/tmp/generated_password.txt"

[0] https://wiki.debian.org/Locale ; see "dpkg-reconfigure locales"
[1] https://serverfault.com/a/491502
[2] https://www.postgresql.org/docs/9.5/static/locale.html
[3] https://www.postgresql.org/docs/9.5/static/app-initdb.html
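A quick check before re-running the installer (sketch only; assumes a Debian/Ubuntu host as discussed in [0]) is to confirm the locale actually exists on the node, and regenerate it if it doesn't:

# is en_US.UTF-8 (a.k.a. en_US.utf8) available on this host?
locale -a | grep -i en_US

# if not, enable and regenerate it (interactive), per [0]
sudo dpkg-reconfigure locales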
04-06-2018
07:13 AM
1 Kudo
Hi Sandy,

+ ktutil
+ for ENC in '"${ENC_ARR[@]}"'
+ echo 'addent -password -p cloudera-scm/REDACTED@PRICLUSTER.COM -k 1 -e aes256-cts:normal'
+ '[' 0 -eq 1 ']'
+ echo REDACTED
+ for ENC in '"${ENC_ARR[@]}"'
+ echo 'addent -password -p cloudera-scm/REDACTED@PRICLUSTER.COM -k 1 -e aes128-cts:normal'
+ '[' 0 -eq 1 ']'
+ echo REDACTED
+ for ENC in '"${ENC_ARR[@]}"'
+ echo 'addent -password -p cloudera-scm/REDACTED@PRICLUSTER.COM -k 1 -e des3-hmac-sha1:normal'
+ '[' 0 -eq 1 ']'
+ echo REDACTED
+ for ENC in '"${ENC_ARR[@]}"'
+ echo 'addent -password -p cloudera-scm/REDACTED@PRICLUSTER.COM -k 1 -e des-hmac-sha1:normal'
+ '[' 0 -eq 1 ']'
+ echo REDACTED
+ for ENC in '"${ENC_ARR[@]}"'
+ echo 'addent -password -p cloudera-scm/REDACTED@PRICLUSTER.COM -k 1 -e des-cbc-crc:normal'
+ '[' 0 -eq 1 ']'
+ echo REDACTED
+ echo 'wkt /var/run/cloudera-scm-server/cmf8091152271730902012.keytab'
addent: Bad encryption type while adding new entry
ktutil: Unknown request "REDACTED". Type "?" for a request list.
addent: Bad encryption type while adding new entry
ktutil: Unknown request "REDACTED". Type "?" for a request list.
addent: Bad encryption type while adding new entry
ktutil: Unknown request "REDACTED". Type "?" for a request list.
addent: Bad encryption type while adding new entry
ktutil: Unknown request "REDACTED". Type "?" for a request list.
addent: Bad encryption type while adding new entry
ktutil: Unknown request "REDACTED". Type "?" for a request list.
+ chmod 600 /var/run/cloudera-scm-server/cmf8091152271730902012.keytab
chmod: cannot access `/var/run/cloudera-scm-server/cmf8091152271730902012.keytab': No such file or directory

Based on the above information, I've noticed that you have set the encryption in CM UI> Administration> Settings> Kerberos> "Kerberos Encryption Types" as:
- aes256-cts:normal
- aes128-cts:normal
- des3-hmac-sha1:normal
- des-hmac-sha1:normal
- des-cbc-crc:normal

The error I see is that when ktutil executed the addent command, it failed with "Bad encryption type while adding new entry". Therefore, ktutil failed to set -e encryption_type for all 5 encryption types you've specified, so there was nothing to be written into a keytab (wkt keytab), see: 'wkt /var/run/cloudera-scm-server/cmf8091152271730902012.keytab'.

The encryption type combination you've specified is valid for the kadmin/kadmin.local tool, where the -e parameter can be specified as encryption:salt, but it is not valid for ktutil's -e encryption_type. Since the CM script is using ktutil, you may need to remove the salt suffix ':normal'. The :normal salt is the default for Kerberos Version 5, so you only need to set the encryption type [0] in CM UI> Administration> Settings> Kerberos> "Kerberos Encryption Types":
- aes256-cts
- aes128-cts
- des3-hmac-sha1
- des-hmac-sha1
- des-cbc-crc

Let me know if this helps,
Michalis

[0] https://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/kdc_conf.html#encryption-types

Note: A feature request OPSAPS-29768 is in progress to not allow manual entry in "Kerberos Encryption Types".
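If you want to verify the behaviour by hand before changing CM (sketch only; the principal, kvno and keytab path below are placeholders), you can run the same kind of addent that ktutil performs and confirm it succeeds without the :normal suffix:

$ ktutil
ktutil:  addent -password -p cloudera-scm/somehost@PRICLUSTER.COM -k 1 -e aes256-cts
Password for cloudera-scm/somehost@PRICLUSTER.COM:
ktutil:  wkt /tmp/test.keytab
ktutil:  quit
$ klist -ket /tmp/test.keytab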
04-04-2018
07:00 AM
Thank you jiahongchao,

To understand correctly: you noticed that the ERROR in the log is related to NTLM authentication. Per StackOverflow [0], javamail did not support NTLM as of mail.jar version 1.4.1 - though per your finding, you copied a newer version of mail.jar (v1.4.5) that supports NTLM, and this resolved the issue. Based on the mail.jar release notes, I found that NTLM has been implemented since mail version 1.4.3 [1a,b,c,d]; however, the mail.jar 1.4.4 release notes (which removed the jcifs.jar dependency) [2a,b] suggest that the NTLM implementation in 1.4.3 wasn't self-contained and depended on the jcifs.jar client library, which is not distributed as part of the Cloudera Manager common_jars.

I will file an improvement ticket,
Michalis

[0] https://stackoverflow.com/a/13861180/528634
[1a] http://www.oracle.com/technetwork/java/javamail/javamail145-1904579.html
[1b] http://www.oracle.com/technetwork/java/javamail/javamail143-243221.html
[1c] https://github.com/javaee/javamail/blob/JAVAMAIL-1_4_3/doc/release/NTLMNOTES.txt
[1d] https://github.com/javaee/javamail/commits/JAVAMAIL-1_4_3/doc/release/NTLMNOTES.txt
[2a] http://www.oracle.com/technetwork/java/javamail/javamail144-1562675.html
[2b] https://github.com/javaee/javamail/commit/081f632e1fd8ae5fa6f96a0360dc9a681c3eaa79#diff-67c5eafbce726c691833c967315073d6R26
04-03-2018
03:56 PM
The BDR job can run on different hosts. Does your CM Server version match our release notes [0]?

"Only use healthy HDFS/Hive hosts for launching replication jobs: The BDR Replication Host Selection Policy has been updated. The process that launches and coordinates an HDFS/Hive replication job will now only run on the following hosts:
- Hosts that run any role of the HDFS/Hive Service (for HDFS or Hive replication)
- Hosts that have a Non-Gateway role
- Hosts where the health status is in the GOOD or CONCERNING state, with preference given to GOOD
- Hosts that are whitelisted, if configured
Cloudera: OPSAPS-40040"

[0] https://www.cloudera.com/documentation/enterprise/release-notes/topics/cm_rn_fixed_issues.html#id_pd4_vrn_l1b
[1] https://www.cloudera.com/documentation/enterprise/release-notes/topics/cm_rn_fixed_issues.html#rn_593
04-03-2018
03:00 PM
May I ask what issue you are reporting? Perhaps there's a workaround that I can help you with, unless you are certain that an upgrade of the mail jar will resolve the issue.

> Alert publiser can not send email in our company, since the java mail library it is using has problems,can you update the version?

- How is the mail server configured?
- How did you discover that the java mail library is the root cause?
- Is it a secure/insecure mail server (StartTLS)?
- Can you outline steps to reproduce?
- Can you share the log from alertpublisher (obfuscate sensitive info)?
11-14-2017
09:56 PM
A workaround would be to parse it in python/bash(tr,awk,sed)/perl to remove the line breaks; an easier option is to use ./jq -c -- see [0]:

curl -s -X GET -u xxxxx:xxxxx http://cm-server:7180/api/v11/.... | jq -c

[0] https://stedolan.github.io/jq/manual/#Invokingjq
--compact-output / -c:
By default, jq pretty-prints JSON output. Using this option will result in more compact output by instead putting each JSON object on a single line.
11-12-2017
02:23 AM
1 Kudo
Would you mind using the following code to check whether you're hitting a known issue. Copy the contents below into a file named OPSAPS-36374.py and run:

$ python OPSAPS-36374.py -j deployment.json

#!/usr/bin/env python
"""
Purpose: Find gateway roles with configuration,
         to validate if user is affected by OPSAPS-36374
Author: Michalis
"""
import json
import os
import sys, getopt


def main(argv):
    json_file = ''
    # deployment.json can be generated by fetching the CM API deployment endpoint
    # see https://cloudera.github.io/cm_api/apidocs/v12/path__cm_deployment.html
    # Example command:
    # curl -s -X GET -u replace_with_cm_admin_user_here:cm_password http://CM-SERVER-HOST:PORT/api/v12/cm/deployment -o deployment.json
    HELP = '%s -j <deployment.json>' % os.path.basename(__file__)
    try:
        opts, args = getopt.getopt(argv, "hj:", ["json="])
    except getopt.GetoptError:
        print HELP
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            print HELP
            sys.exit()
        elif opt in ("-j", "--json"):
            json_file = arg
    if json_file:
        data = json.load(open(json_file))
        # look at the first cluster in the deployment
        cluster = [cluster_data for cluster_data in data['clusters']][0]
        services = [service for service in cluster['services']]
        hosts = [host_data for host_data in data['hosts']]
        for service in services:
            for role in service['roles']:
                # affected: GATEWAY roles that carry role-level configuration overrides
                if 'GATEWAY' in role['type'] and role['config']['items']:
                    host = [host for host in hosts if host['hostId'] == role['hostRef']['hostId']][0]
                    role_kv = role['config']['items'][0]
                    print "Service: [%s] contains role type [%s] - configuration [key: %s - value: %s]" % (service['type'], role['type'], role_kv['name'], role_kv['value'])
                    print "Affected instance hostname: %s // ipAddress: %s" % (host['hostname'], host['ipAddress'])
    else:
        print HELP


if __name__ == "__main__":
    main(sys.argv[1:])
10-30-2017
09:27 PM
Based on the information provided, I can tell you're using the CM API python examples [0]; the next step of the script is to auto assign/configure roles [1]. Per your subject 'Stuck downloading parcel', and based on your info 'I can see from the logs that the host was added': can you let me know what the state is in the CM UI under CM> Hosts> Parcels and CM> 'All Recent Commands'? Do you see whether the parcels are Downloaded/Available or currently deploying, and are there any errors in All Recent Commands? You could also check this CM API based deployment [2]. Let me know if this helps.

Best,
Michalis

[0] https://github.com/cloudera/cm_api/blob/cm5-5.13.0/python/examples
[1] https://github.com/cloudera/cm_api/blob/cm5-5.13.0/python/examples/cluster_set_up.py#L96-L100
[2] https://github.com/gdgt/cmapi
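You can also check the parcel stages over the API (host, credentials and the cluster name "Cluster%201" below are placeholders; jq is optional and only used for readability):

curl -s -u admin:admin "http://cm-server:7180/api/v13/clusters/Cluster%201/parcels" | jq '.items[] | {product, version, stage}'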
10-30-2017
08:52 PM
If I understand correctly, you are looking for an _Event_ record ("Cloudera Manager Events"): "An event is a record that something of interest has occurred – a service's health has changed state, a log message (of the appropriate severity) has been logged, and so on..." [0]. You can filter for historical health events by navigating to CM> Diagnostics> Events; you can also use the CM API to query events in the system [1]. Let me know if this helps.

Best,
Michalis

[0] https://www.cloudera.com/documentation/enterprise/latest/topics/admin_cm_events.html
[1] https://cloudera.github.io/cm_api/apidocs/v18/path__events.html
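For example, a basic fetch of events over the API looks like this (host, credentials and the API version are placeholders; check /api/version for what your CM supports), and the "query" parameter documented in [1] can then be used to narrow the results down to the health events you're after:

curl -s -u admin:admin "http://cm-server:7180/api/v18/events"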
10-16-2017
02:07 AM
JoaquinS,

Based on your screenshot, it appears that there's a duplicate UUID [0] heartbeating into the CM Server. The 'totoro' host that is registered in the CM database with 13 Role(s) contains a different UUID (with no heartbeat); the one below it is heartbeating with a new UUID. Make sure you take a backup of /var/lib/cloudera-scm-agent/, then follow the guide in [0] and it should resolve the issue.

Regards,
Michalis

[0] https://community.cloudera.com/t5/Cloudera-Manager-Installation/Duplicate-hostname-in-Cloudera-Manager-5-7-2/td-p/55649
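For reference, the rough shape of those steps is below (sketch only; verify against the thread in [0] before running anything, as the agent identity filename and service name can differ between agent versions):

# on the affected host, back up the agent state first
tar -czf /root/cloudera-scm-agent-backup.tar.gz /var/lib/cloudera-scm-agent/
# stop the agent, remove the stale identity file, then start it so it re-registers
service cloudera-scm-agent stop
rm /var/lib/cloudera-scm-agent/uuid
service cloudera-scm-agent start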
09-24-2017
12:08 AM
The metrics look right based on the screenshot.

desind@*******:~#> df -h /kafka/data/sd*
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb 985G 4.0G 931G 1% /kafka/data/sda
/dev/xvdc 985G 72M 935G 1% /kafka/data/sdb
/dev/xvdd 985G 16G 920G 2% /kafka/data/sdc
/dev/xvde 985G 3.8G 931G 1% /kafka/data/sdd
Taking /dev/xvdb as an example:
Size - Avail = 985G - 931G = 54G [Used], whereas df -h shows 4.0G [Used]; CM correctly shows (± rounding) 53.8G/984.2G. Note there's also a 5% reserve on some Linux file systems, see [0]. With regards to df -h, the article in [1] elaborates on "Why The Linux df Command Shows Lesser Free Disk Space?"

[0] "Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. Normally, the default percentage of reserved blocks is 5%." — from man tune2fs
[1] http://www.walkernews.net/2011/01/22/why-the-linux-df-command-shows-lesser-free-disk-space/
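Roughly reconciling the two numbers (back-of-the-envelope, based on the 5% reserve mentioned above):

reserved blocks ≈ 5% of 985G ≈ 49G
49G (reserved) + 4.0G (used per df -h) ≈ 53G, which is close to the 53.8G that CM reports as used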
09-19-2017
09:52 AM
1 Kudo
Quote: "However, I tried different API versions using the following url: http://172.31.5.254:7180/api/v1"

Based on the documentation linked previously, the endpoint you're trying to use was "Added in v3". With regards to the version check, http://172.31.5.254:7180/api/version - the returned output can then be used in http://172.31.5.254:7180/api/v... In addition, this is a REST POST, not a GET; the expected response Body after the POST is documented here [0]. Let me know if this helps.

[0] https://cloudera.github.io/cm_api/apidocs/v17/path__clusters_-clusterName-_services_-serviceName-_commands_deployClientConfig.html
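For example, a POST against that service-level endpoint could look like this (the cluster name "cluster1", service name "hdfs1" and credentials are placeholders; substitute the API version your CM returns from /api/version):

curl -X POST -u admin:admin "http://172.31.5.254:7180/api/v3/clusters/cluster1/services/hdfs1/commands/deployClientConfig"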
09-19-2017
08:10 AM
Perhaps /v17/ isn't applicable for your version of CM - check the /api/version endpoint and see what version your CM supports/returns. Here's a table that maps CM API versions to CM versions: http://cloudera.github.io/cm_api/docs/releases/
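For example (host and credentials are placeholders):

curl -s -u admin:admin http://cm-server:7180/api/version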
09-15-2017
01:07 AM
Quote: "Do you know when the new 5.12.x maintenance release will be available?"

It's available now; [ANNOUNCE] Cloudera Enterprise 5.12.1 Released [0]

[0] http://community.cloudera.com/t5/Community-News-Release/ANNOUNCE-Cloudera-Enterprise-5-12-1-Released/m-p/59612#M195
09-14-2017
06:08 AM
1 Kudo
You can use Python or curl to achieve this.

Cluster-wide [0], example using curl:

curl -X POST -u admin:admin -H "Content-Type:application/json" http://cm-hostname:port/api/v17/clusters/{clusterName}/commands/deployClientConfig

Service/role client config [1].

[0] https://cloudera.github.io/cm_api/apidocs/v17/path__clusters_-clusterName-_commands_deployClientConfig.html
https://github.com/cloudera/cm_api/blob/master/python/src/cm_api/endpoints/clusters.py#L292-L298
# During restart of the cluster
https://github.com/cloudera/cm_api/blob/master/python/src/cm_api/endpoints/clusters.py#L263-L281
[1] https://cloudera.github.io/cm_api/apidocs/v17/path__clusters_-clusterName-_services_-serviceName-_commands_deployClientConfig.html
https://github.com/cloudera/cm_api/blob/master/python/src/cm_api/endpoints/services.py#L867-L874
09-12-2017
10:29 AM
It seems that you are running into the same issue you reported initially. If the workaround doesn't work, you can use CM 5.12.1, which contains the fix, to allow you to proceed with the installation.
09-05-2017
09:43 AM
Hi Amit,

Looking into the errors you've posted, the initial issue can likely be resolved using the workaround I posted earlier. The second issue is likely that an old apt process is locking the package manager, see below. You might need to kill the other process which is holding the lock; once you've done so, the package manager will be able to proceed with the installation.

...
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
...

Best,
Michalis
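To find which process is holding the lock (sketch only; assumes lsof is installed), something like the following should point you at the offending PID:

sudo lsof /var/lib/dpkg/lock
ps -ef | grep -E '[a]pt|[d]pkg'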
09-01-2017
02:28 PM
Hi Amit,

The solution I provided was for the issue you reported initially, at the beginning of this thread; I can see that you are reporting multiple issues. Would you mind clarifying what you meant by "Same error is there.", so that I or a community member can guide you accordingly?

Thank you,
M
08-31-2017
10:42 AM
The bash for loop that you executed indicates which mount point is inaccessible; you should try unmounting it and see if you can progress.

root@PC355406:/# for mnt in $(mount|cut -d ' ' -f 3); do stat $mnt 1>/dev/null 2>&1; rc=$?; if [ $rc -ne 0 ]; then echo "error accessing $mnt"; fi; done
error accessing /run/user/1000/gvfs
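In this case the inaccessible mount is the gvfs user mount, so something along these lines may clear it (sketch only; gvfs is a FUSE mount, so fusermount is the fallback if a plain umount refuses):

umount /run/user/1000/gvfs
# or, if that fails:
fusermount -u /run/user/1000/gvfs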
08-29-2017
02:54 AM
1 Kudo
Based on your stack trace, it's likely you're running into the same issue as a fellow community member [0]; can you try the workaround posted in [1]?

[24/Aug/2017 15:47:21 +0000] 9533 MainThread agent ERROR Failed to connect to previous supervisor.
>>Traceback (most recent call last):
>> File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 2109, in find_or_start_supervisor
>> self.configure_supervisor_clients()
>> File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 2290, in configure_supervisor_clients
>> supervisor_options.realize(args=["-c", os.path.join(self.supervisor_dir, "supervisord.conf")])
>> File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 1599, in realize
>> Options.realize(self, *arg, **kw)
>> File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 333, in realize
>> self.process_config()
>> File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 341, in process_config
>> self.process_config_file(do_usage)
>> File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 376, in process_config_file
>> self.usage(str(msg))
>> File "/usr/lib/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/options.py", line 164, in usage
>> self.exit(2)
>> SystemExit: 2
>> [24/Aug/2017 15:47:21 +0000] 9533 Dummy-1 daemonize WARNING Stopping daemon.
>> [24/Aug/2017 15:47:21 +0000] 9533 Dummy-1 agent INFO Stopping agent...

[0] https://community.cloudera.com/t5/Cloudera-Manager-Installation/Cloudera-Manager-Heartbeat-Python-Supervisor-Issue-System-Exit/m-p/57547
[1] http://community.cloudera.com/t5/Cloudera-Manager-Installation/CDH-5-12-0-clouder-manager-agent-can-not-start/m-p/58726
08-17-2017
04:59 PM
"A re SSL-enabled custom parcel repositories supported?" yes, as we can use this https://archive.cloudera.com/cdh5/parcels/ Ths error is likely due to the AWS Cloudfront SNI [0], and the current version of async-http-client-1.7.5.jar that CM uses does not support SNI, see related [1a,b,c] Testing the URL with openssl excluding the -servername flag # openssl s_client -connect repository.cask.co:443
CONNECTED(00000003)
140455651178400:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:744:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 247 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
With -servername [repository|downloads].cask.co:

# openssl s_client -connect repository.cask.co:443 -servername repository.cask.co </dev/null | grep "Verify"
depth=4 C = US, O = "Starfield Technologies, Inc.", OU = Starfield Class 2 Certification Authority
verify return:1
depth=3 C = US, ST = Arizona, L = Scottsdale, O = "Starfield Technologies, Inc.", CN = Starfield Services Root Certificate Authority - G2
verify return:1
depth=2 C = US, O = Amazon, CN = Amazon Root CA 1
verify return:1
depth=1 C = US, O = Amazon, OU = Server CA 1B, CN = Amazon
verify return:1
depth=0 CN = repository.cask.co
verify return:1
DONE
Verify return code: 0 (ok)

Setting -Djavax.net.debug=all in /etc/default/cloudera-scm-server:

[...]
export CMF_JAVA_OPTS="... -Djavax.net.debug=all"
[...]

Notice handshake_failure:

*** ClientHello, TLSv1
[...]
[Raw read]: length = 2
0000: 02 28 .(
New I/O worker #6, READ: TLSv1 Alert, length = 2
New I/O worker #6, RECV TLSv1 ALERT: fatal, handshake_failure
New I/O worker #6, fatal: engine already closed. Rethrowing javax.net.ssl.SSLException: Received fatal alert: handshake_failure
New I/O worker #6, fatal: engine already closed. Rethrowing javax.net.ssl.SSLException: Received fatal alert: handshake_failure
New I/O worker #6, called closeOutbound()
New I/O worker #6, closeOutboundInternal()
New I/O worker #6, SEND TLSv1 ALERT: warning, description = close_notify
New I/O worker #6, WRITE: TLSv1 Alert, length = 2
New I/O worker #6, called closeInbound()
New I/O worker #6, fatal: engine already closed. Rethrowing javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
[Raw write]: length = 7
0000: 15 03 01 00 02 01 00 .......
New I/O worker #6, called closeOutbound()
New I/O worker #6, closeOutboundInternal()

We currently track this internally in OPSAPS-30976.

Regards,
Michalis

[0] https://aws.amazon.com/about-aws/whats-new/2014/03/05/amazon-cloudront-announces-sni-custom-ssl/
[1a] https://groups.google.com/forum/#!topic/play-framework/T7ZhclgAAMU
[1b] https://github.com/loopj/android-async-http/issues/224
[1c] https://bz.apache.org/bugzilla/show_bug.cgi?id=57935