Member since: 07-08-2013
Posts: 548
Kudos Received: 59
Solutions: 53
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2594 | 08-17-2019 04:05 PM
| 2563 | 07-26-2019 12:18 AM
| 8840 | 07-17-2019 09:20 AM
| 5032 | 06-18-2018 03:38 AM
| 12620 | 04-06-2018 07:13 AM
05-20-2020
10:07 AM
rpm -qlp http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15/RPMS/x86_64/cloudera-manager-daemons-5.15.0-1.cm5150.p0.62.el7.x86_64.rpm | grep scm_prepare_database.sh

I've noticed that the URL you've given uses /5.15/, which redirects to the latest maintenance release of CM 5.15, i.e. 5.15.2, while the file you've given, "cloudera-manager-daemons-5.15.0-1.cm5150.p0.62.el7.x86_64.rpm", is 5.15.0 and is not present in that folder. For your rpm -qlp to work, try /5.15.0/:

rpm -qlp http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15.0/RPMS/x86_64/cloudera-manager-daemons-5.15.0-1.cm5150.p0.62.el7.x86_64.rpm | grep scm_prepare_database.sh
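To verify the redirect yourself, a quick header check with curl should show where /5.15/ actually points (the Location value will be whatever the current maintenance release is):

# -s silences progress output, -I fetches headers only; the Location
# header reveals the release that /5.15/ resolves to.
curl -sI http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15/RPMS/x86_64/ | grep -i '^location'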
03-18-2020
09:25 AM
I'm using REST calls via curl to extract BDP job status from history and calculating the total data volume and average replication time for each job; it's taking over 9 hours to complete with a huge file. Is it possible to apply a filter that extracts only the last 24 hours of BDP jobs, to reduce the time and the file size? Thanks, Scott
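For reference, my extraction call looks roughly like the sketch below; the host, credentials, API version, cluster, service, and schedule id are placeholders for my real setup.

# Pull the full replication history for one schedule; this is the
# call that currently returns the huge file.
curl -s -u admin:admin \
  'http://cm-host:7180/api/v19/clusters/Cluster1/services/hdfs/replications/1/history'

If the history endpoint in this CM version supports a limit query parameter, capping the result count (e.g. ?limit=50) would at least bound the payload, but a time-based filter is what I'm really after.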
09-09-2019
05:43 AM
Where can I find that version file?
08-29-2019
08:12 PM
Hi Michalis: I solved this by re-running the command:

sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm mypassword

The problem was due to my misunderstanding of scm_prepare_database.sh. Thanks a lot for your help. Kind regards
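For anyone else who trips over the same thing, the general form of the script, as I now understand it, is sketched below; the angle-bracket arguments are placeholders.

# scm_prepare_database.sh <database-type> <database-name> <database-user> [<password>]
# Here: database-type mysql, database-name scm, database-user scm.
# If the password is omitted, the script prompts for it interactively.
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm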
08-17-2019
04:05 PM
1 Kudo
This was reported as a bug and has already been fixed in CM 6.3.0 and 6.2.1 as part of OPSAPS-49111.
08-02-2019
05:24 AM
Hi Friends, I got the same issue on my cluster. I did the following steps:
1. In the CM UI, go to Hosts, click the affected host, and click the [Processes] tab to see what processes are running on that host.
2. Stop all processes running on that host.
3. Run the Deploy Kerberos Client Configuration command; it will deploy to the affected host.
The above steps helped me solve this issue on the cluster.
07-26-2019
02:18 PM
> Is there any option to find empty directory using HDFS command Directly?
You can list/find empty directories using 'org.apache.solr.hadoop.HdfsFindTool'. To check/test whether _a_ directory is empty with the hdfs tool, you can use -du or -test; please see the FileSystemShell documentation [0], excerpted below (a small scripted example follows the excerpt).
test
Usage: hadoop fs -test -[defsz] URI
Options:
-d: if the path is a directory, return 0.
-e: if the path exists, return 0.
-f: if the path is a file, return 0.
-s: if the path is not empty, return 0.
-r: if the path exists and read permission is granted, return 0.
-w: if the path exists and write permission is granted, return 0.
-z: if the file is zero length, return 0.
Example:
hadoop fs -test -e filename

du
Usage: hadoop fs -du [-s] [-h] [-x] URI [URI ...]
Displays sizes of files and directories contained in the given directory, or the length of a file in case it's just a file.
Options:
The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files. Without the -s option, calculation is done by going 1-level deep from the given path.
The -h option will format file sizes in a “human-readable” fashion (e.g. 64.0m instead of 67108864).
The -x option will exclude snapshots from the result calculation. Without the -x option (default), the result is always calculated from all INodes, including all snapshots under the given path.
The du returns three columns with the following format:
size disk_space_consumed_with_all_replicas full_path_name
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code: Returns 0 on success and -1 on error.

[0] https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html
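And here is the small scripted example mentioned above: a minimal sketch that combines -ls with -test -s to report empty directories, assuming a hypothetical starting path of /user/hadoop.

# List entries under /user/hadoop, keep only the directories (their
# permission string starts with 'd'), then test each one: per the
# excerpt above, -test -s returns non-zero when the path is empty.
for dir in $(hadoop fs -ls /user/hadoop | awk '$1 ~ /^d/ {print $NF}'); do
  hadoop fs -test -s "$dir" || echo "empty: $dir"
done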
07-26-2019
07:05 AM
Thanks for confirming they are platform-specific!
05-20-2019
09:22 PM
Hey, how do I enable Sentry HA using the cm_api module in Python?
05-06-2019
05:34 AM
Since the question was asked, the situation has changed. When Hortonworks and Cloudera merged, NiFi became supported by Cloudera. Shortly afterwards, the integration with CDH was also completed, so NiFi is now a fully supported and integrated component. Please check the documentation for the latest information at any time, but in general, Cloudera Manager is now able to install NiFi.