Member since: 01-08-2018
Posts: 133
Kudos Received: 31
Solutions: 21
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 17324 | 07-18-2018 01:29 AM |
 | 3105 | 06-26-2018 06:21 AM |
 | 5269 | 06-26-2018 04:33 AM |
 | 2709 | 06-21-2018 07:48 AM |
 | 2241 | 05-04-2018 04:04 AM |
04-17-2018
09:24 AM
Found it: https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html Additionally, since Cloudera does not support mixed environments, all nodes in your cluster must be running the same major JDK version. Cloudera only supports JDKs provided by Oracle. In any case, since you are doing an upgrade, you will not stay in this mixed state for long.
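If you want to verify that all nodes already run the same major JDK, a quick loop over the hosts does it. This is a minimal sketch, assuming passwordless SSH and a hypothetical hosts.txt with one hostname per line:
# java -version prints to stderr, so redirect it; every line should show the same major version.
for h in $(cat hosts.txt); do
  printf '%s: ' "$h"
  ssh "$h" 'java -version 2>&1 | head -n 1'
done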
04-17-2018
09:14 AM
Sorry, I thought you were asking about a major upgrade. For minor upgrades you can of course do a rolling upgrade; I have personally tested it. I cannot find it now because I am replying from my phone, but I think there is a note in the documentation that only the same major version is mandatory; it does not mention minor versions. In any case, I have done it several times in production environments and it works without any issue.
04-17-2018
08:30 AM
No, you cannot, according to the third bullet (from https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_cm_upgrading_to_jdk8.html); a rough sketch of the procedure follows the excerpt. Warning:
* Cloudera does not support upgrading to JDK 1.8 while upgrading to Cloudera Manager 5.3 or higher. The Cloudera Manager Server must be upgraded to 5.3 or higher before you start.
* Cloudera does not support upgrading to JDK 1.8 while upgrading a cluster to CDH 5.3 or higher. The cluster must be running CDH 5.3 or higher before you start.
* Cloudera does not support a rolling upgrade to JDK 1.8. You must shut down the entire cluster.
* If you are upgrading from a lower major version of the JDK to JDK 1.8 or from JDK 1.6 to JDK 1.7, and you are using AES-256 bit encryption, you must install new encryption policy files. (In a Cloudera Manager deployment, you automatically install the policy files; for unmanaged deployments, install them manually.) See Using AES-256 Encryption.
For both managed and unmanaged deployments, you must also ensure that the Java Truststores are retained during the upgrade. (See Recommended Keystore and Truststore Configuration.)
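Since the whole cluster must be down anyway, the rough shape of the procedure is: stop all services from CM, install the new JDK everywhere, repoint CM, restart. A minimal sketch, assuming the usual CM 5 package layout; the JDK path is a placeholder, so follow the linked guide for the exact steps on your release:
# Stop all CDH services from Cloudera Manager first, then stop the CM server itself.
service cloudera-scm-server stop
# Install JDK 1.8 on every host (OS package manager or Oracle archive).
# Point the CM server at the new JDK; /usr/java/jdk1.8.0_162 is a placeholder path.
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_162' >> /etc/default/cloudera-scm-server
service cloudera-scm-server start
For the other hosts, you can also set the Java Home Directory per host in Cloudera Manager instead of relying on the default JDK detection.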
04-17-2018
08:27 AM
1 Kudo
I use the same command and have no issues. According to the logs: Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1523546159827_0013_r_000000_0/map_0.out So, I would guess that your csv is too big and, when the reducer tries to load it, there is not enough space in the local dirs of the YARN NodeManager. Can you try setting more reducers with --reducers 4 or more (based on your partitions and the csv size)? You can also set more mappers, but based on the log it is the reducer that is suffering; see the example below. More details: https://www.cloudera.com/documentation/enterprise/5-13-x/topics/search_mapreduceindexertool.html#concept_pjs_3sd_3v
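For illustration, here is where --reducers fits in a typical invocation; the jar path, ZooKeeper address, collection name and HDFS paths are placeholders from a parcel-based install, so adjust them to your environment:
hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  --morphline-file morphline.conf \
  --output-dir hdfs://namenode:8020/tmp/indexer-output \
  --zk-host zk01:2181/solr \
  --collection my_collection \
  --reducers 8 \
  hdfs://namenode:8020/data/input.csv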
04-17-2018
08:07 AM
You should not delete anything under the /opt/cloudera and /var/lib directories. If the contents of these directories are too large for your partitions, then you should consider extending them. There is an exception under /var/lib/, but again you should not delete anything manually. The only place where you can delete files without issues is "/var/log/...", but this is a temporary solution; see the cleanup sketch below. The "proper" way is to change "Max Log Size" and "Maximum Log File Backups" in Cloudera Manager, for each service running on this machine. Edit: I started writing before I saw the reply from @saranvisa. I agree with it.
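If you do need the temporary cleanup under "/var/log/..." right now, target only rotated logs rather than live ones. A minimal sketch, assuming the standard numeric rotation suffixes and a one-week retention:
# Remove rotated service logs older than 7 days; never touch the live *.log/*.out files.
find /var/log -type f \( -name '*.log.[0-9]*' -o -name '*.out.[0-9]*' \) -mtime +7 -print -delete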
04-17-2018
08:03 AM
1 Kudo
First of all, you have to stop Navigator. Then it depends on what you decide. The easiest approach (not requiring changes in configuration) is to move the directory and create a link:
# mv /var/lib/cloudera-scm-navigator/solr /data1/
# ln -s /data1/solr /var/lib/cloudera-scm-navigator/solr
Although I would prefer to move the whole navigator directory:
# mv /var/lib/cloudera-scm-navigator /data1/
# ln -s /data1/cloudera-scm-navigator /var/lib/cloudera-scm-navigator
The last approach is again to move the directory, but without a link:
# mv /var/lib/cloudera-scm-navigator /data1/
and then change the Navigator Metadata Server Storage Dir (nav.data.dir) in Cloudera Manager to /data1/cloudera-scm-navigator. Hope that helps. As mentioned above, I prefer the second approach, because the whole directory stays in one partition, plus someone new in your team won't have to check CM for the new location.
04-17-2018
04:55 AM
I believe it would help if you could create additional areas under http://community.cloudera.com/t5/Configuring-and-Managing-the/ct-p/ConfiguringPlatform for CDS, CDK, etc. Currently there are only the following three: Cloudera Manager, Cloudera Director, and CDH.
04-17-2018
01:47 AM
Hi. First of all, sorry for the late reply; I was away for some time. According to the format "yyyy-mm-dd hh:mm:ss[.f...]", yes, you have to store it in UTC. In order to be able to store dates in other timezones, the format would have to include the "Z" part, which is the offset in hours from UTC.
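If you are storing in UTC on Hive (or Impala), the built-in converters can shift to a local zone at read time and back at write time. A small sketch, assuming the hive CLI; the zone name is just an example:
# Read a stored UTC timestamp back in a local zone.
hive -e "SELECT from_utc_timestamp('2018-04-17 08:00:00', 'Europe/Athens');"
# Normalize a local timestamp to UTC before storing it.
hive -e "SELECT to_utc_timestamp('2018-04-17 11:00:00', 'Europe/Athens');"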
04-17-2018
01:36 AM
1) User hdfs does not have access to the /home/cloudera directory. 2) and 3) are actually the same issue, because in both cases you try to upload the file as user cloudera. You have two options: 1) grant read permissions to the hdfs user on /home/cloudera and all of its contents (directory access also requires execute permission), or 2) grant write permissions on the "/inputnew/" directory in HDFS to the "cloudera" user, for example: sudo -u hdfs hdfs dfs -chown cloudera /inputnew There are multiple ways to grant permissions (e.g. using ACLs), but keep it simple; both options are spelled out below.
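Putting both options into commands (a sketch; data.csv is a placeholder filename):
# Option 1: let user hdfs traverse /home/cloudera and read its contents.
chmod -R o+rX /home/cloudera
sudo -u hdfs hdfs dfs -put /home/cloudera/data.csv /inputnew/
# Option 2: make user cloudera the owner of the HDFS directory and upload as cloudera.
sudo -u hdfs hdfs dfs -chown cloudera /inputnew
hdfs dfs -put /home/cloudera/data.csv /inputnew/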
04-17-2018
01:19 AM
2 Kudos
This is a common misreading of the "free" output. The first line (starting with "Mem") shows that you have 62G of memory and 56G used. This memory is in use, but not by processes: at the end of the line you will see about 39G cached. In a few words, Linux uses part of the free RAM to cache data from frequently used files, in order to save some interactions with the hard disk. Once an application requests memory and there is no "free" left, Linux automatically drops these caches. You cannot turn this feature off; the only thing you can do is drop the currently cached data, but Linux will start caching again the very next second. In any case, when the output of "free" looks like the one you provided, you should always refer to the second line, "-/+ buffers/cache: 16G 49G". This is the real status, which shows 16G used and 49G free. Finally, CM displays the disk and memory usage of the host (in the Hosts view) regardless of which process is using it; it is the same output as "free".
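If you want to see this behavior for yourself, you can drop the current caches (as root) and watch the numbers change; the kernel starts caching again immediately, and real process memory is untouched:
sync                               # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes
free -g                            # the Mem line "used" shrinks; the "-/+ buffers/cache" used value stays about the same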