Member since: 03-15-2016
Posts: 26
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
  | 6308 | 12-20-2016 03:05 AM
07-19-2020 02:11 PM
Hello,

I wanted to test Ozone, so I installed CDP 7.1.1 and set up OzoneFS according to the (not entirely accurate) documentation. The filesystem itself seems to work: I can use hdfs commands to list files in Ozone, create directories, and upload local files.

However, YARN does not work so well with Ozone. If I try to create the jobHistory directory or install the MapReduce framework JARs from the YARN menu in Cloudera Manager, it fails and says it does not recognize the OzoneFS JARs:

java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.ozone.OzoneFileSystem not found

I tried adding the JAR (hadoop-ozone-filesystem-lib-current-0.5.0.7.1.1.0-565.jar) to the MR application classpath and the YARN application classpath. I also tried adding it as the value of mapreduce.application.classpath in mapred-site.xml, but none of these made it recognize the class and work.

Has anyone had success running YARN over Ozone? What am I missing?

Thank you,
Guy
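A minimal sanity check is to confirm the class is really inside the JAR that was added to the classpath (a sketch; the parcel path below is a guess and may differ on your installation):

# hypothetical path; adjust to wherever the Ozone filesystem JAR actually lives
jar tf /opt/cloudera/parcels/CDH/jars/hadoop-ozone-filesystem-lib-current-0.5.0.7.1.1.0-565.jar | grep OzoneFileSystem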
Labels:
- Apache YARN
06-27-2020 12:16 PM
Hello,

I installed CDP 7.1.1 because I wanted to try Ozone. I played a little with Apache Ozone in the past and wanted to see what it is like in Cloudera.

After installation, I went ahead and tried to create a volume using this command (which complies with the documentation):

ozone sh volume create --quota=1TB --user=hdfs /tests

I got this error message: "Service ID or host name must not be omitted when ozone.om.service.ids is defined."

I searched the internet for the error and did not find anything useful. I looked at the ozone-site.xml file and found that ozone.om.service.id was "ozone1", but I do not know where in the command I should specify it; the documented command syntax does not take any service id or host name.

Why am I getting this error? How can I specify the service id or host name so the volume will be created?

Thanks,
Guy
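For reference, this is the form I would expect to work if the service id belongs in the volume URI (an unverified assumption; "ozone1" is the value from my ozone-site.xml):

# guess: pass the OM service id as part of an o3:// address
ozone sh volume create --quota=1TB --user=hdfs o3://ozone1/tests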
Labels:
- Cloudera Data Platform (CDP)
06-15-2020 01:10 PM
I used the parcel repo you supplied along with a manual installation, and it worked.

Thank you,
Guy
06-14-2020 01:56 PM
Hello,

I have some Cloudera installation experience, including production clusters of CDH 5 and 6. I once installed the CDP 7.0.3 trial version using the automatic installation script and it went well. Now I wanted to see what the latest CDP 7.1.1 looks like, so I followed the instructions and downloaded the cloudera-manager-installer.bin script. I prepared several virtual CentOS 7.6 hosts with OpenJDK 8 and SELinux turned off, and ran the script on one of them.

It was running fine for a while; then, after installing the embedded database, it halted for a long time. The installation logs looked like this:

-rw-r--r-- 1 root root     0 Jun 12 16:16 0.check-selinux.log
-rw-r--r-- 1 root root     0 Jun 12 16:16 1.install-repo-pkg.log
-rw-r--r-- 1 root root  1843 Jun 12 16:20 2.install-openjdk8.log
-rw-r--r-- 1 root root  2205 Jun 12 17:03 3.install-cloudera-manager-server.log
-rw-r--r-- 1 root root 22355 Jun 12 17:03 4.check-for-systemd.log
-rw-r--r-- 1 root root  3824 Jun 12 17:03 5.install-cloudera-manager-server-db-2.log
-rw-r--r-- 1 root root     0 Jun 12 17:03 6.daemon-reload.log
-rw-r--r-- 1 root root     0 Jun 12 17:03 7.start-cloudera-scm-server-db.log
-rw-r--r-- 1 root root     0 Jun 12 17:03 8.start-cloudera-scm-server.log

Note that the log files for the last three steps are empty. Step 5 finished successfully:

Installed:
  cloudera-manager-server-db-2.x86_64 0:7.1.1-3274282.el7
Dependency Installed:
  postgresql10.x86_64 0:10.12-1PGDG.rhel7
  postgresql10-libs.x86_64 0:10.12-1PGDG.rhel7
  postgresql10-server.x86_64 0:10.12-1PGDG.rhel7
Complete!

I let it run overnight but it still did not finish. I looked at cloudera-scm-server.log and saw it spinning endlessly on these two lines:

2020-06-14 16:39:04,574 INFO pool-201-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: Syncup is started.
2020-06-14 16:39:34,594 INFO pool-201-thread-1:com.cloudera.server.cmf.components.CmServerStateSynchronizer: Cleanup is started

Every minute another cycle began, but there was no other action in the log. I searched for those two lines on Google but did not come up with significant results. I deleted the VM and created a new one, but ended up with the same result. Did anyone succeed in installing a trial cluster from this installation script? Is it a known issue?

I then tried a manual approach. I found the Cloudera trial yum repository:

[cloudera-manager]
name=Cloudera Manager 7.1.1
baseurl=https://archive.cloudera.com/cm7/7.1.1/redhat7/yum/
gpgkey=https://archive.cloudera.com/cm7/7.1.1/redhat7/yum/RPM-GPG-KEY-cloudera
gpgcheck=1
enabled=1
autorefresh=0
type=rpm-md

I used it to install the embedded database, scm-server, and the daemons. At the end of the installation I saw the same two lines spinning in the log. I restarted the scm server and this time it started properly; I was able to log in and tried to install a cluster. However, I wasn't able to find the 7.1.1 parcels for the trial. All the predefined parcel repositories required credentials, and the highest-version parcels I could get were 6.3.2. This is a trial version; I expect it to point to a place where I can download 7.1.1 trial parcels.

Is there a working way to install a 7.1.1 trial cluster? What am I missing here?

Thank you,
Guy
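For reference, the manual path was roughly this (a sketch; the package names come from the installer logs above, and the exact service names are assumptions):

# install from the repo file shown above, then start the embedded DB and the server
yum install cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server cloudera-manager-server-db-2
systemctl start cloudera-scm-server-db
systemctl start cloudera-scm-server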
Labels:
- Cloudera Data Platform (CDP)
05-26-2018 11:01 PM
Hello,

There is something I do not fully understand about HDFS, and I would be glad if someone here could clarify it for me.

In a regular setup, where there are a namenode and a secondary namenode, the secondary namenode is responsible for checkpoints (merging the edits file into the fsimage). In a high availability setup, where there are an active namenode and a standby namenode, the standby namenode does the checkpointing.

But what happens in a high availability setup when the active namenode is down or destroyed? The standby namenode is promoted to active, but it is alone now; there is no standby/secondary NN. And still the cluster should continue functioning as long as the remaining NN is up. Who does the checkpointing in this case? Is it the surviving namenode? Or does checkpointing halt until someone brings the second namenode up? How does it work?

Thank you,
Guy
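For context, the only manual fallback I know of is forcing a checkpoint on the surviving namenode (a sketch, assuming it is healthy enough to enter safemode):

# saveNamespace requires safemode; this writes a fresh fsimage by hand
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave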
Labels:
- HDFS
05-14-2018 01:07 PM
Hello,

We have a pretty old CDH 5.7 cluster that works fine. But when we try to add a second resource manager and enable high availability, both RMs remain in standby state and there is no active one. This seems to be a known issue, and the suggested fix is to run "yarn resourcemanager -format-state-store". Cloudera itself recommends it here (search for "standby"), and so do other articles on the web. However, running this and restarting the RMs did not solve our problem. I also couldn't find anything special in the logs. To make things even stranger, we have another 5.7 cluster where we successfully enabled YARN high availability without issues.

Does anyone have an idea what's wrong? Did anyone have such an issue?

Thanks,
Guy
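For completeness, this is how we check the state of each RM (a sketch; rm1 and rm2 are the usual HA ids and may differ in your configuration):

# query each ResourceManager's HA state
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2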
Labels:
- Apache YARN
09-12-2017 12:46 AM
Hi,

I did restart the NameNode, but it doesn't seem to have any effect. I'm not sure I'm hitting the bug you mentioned, because I was trying to read files from a client program. Anyway, I increased the value of ipc.maximum.data.length and it did not help. I changed it in "HDFS Service Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml"; maybe that's not the right place?

Thanks,
Guy
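One way to verify whether the safety valve change actually landed (a sketch; this prints the value the client-side configuration resolves, which I assume matches what the NameNode loaded):

# print the effective value of the setting
hdfs getconf -confKey ipc.maximum.data.length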
09-11-2017 06:11 AM
Hello,

I am running a CDH 5.12 cluster and I have some Avro files on HDFS. I wrote a Java program that should connect to HDFS and deserialize the Avro files. The files aren't very big, only 10-20 MB each. However, whenever I try to run my program it throws this exception:

org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

I googled it and found that some people advised increasing the parameter "ipc.maximum.data.length" to 128 MB. I did that, but the error persists. Does anyone have an idea what the problem could be? Maybe this is just a cover for another problem?

Thank you,
Guy
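For reference, the property is specified in bytes, so 128 MB corresponds to the value below (shown only to rule out a unit mistake; the arithmetic is mine, not from the original advice):

# 128 MB expressed in bytes, the unit ipc.maximum.data.length expects
echo $((128 * 1024 * 1024))   # prints 134217728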
Labels:
- HDFS
12-20-2016 03:05 AM
Hello,

I noticed that when creating the first keystore I did not change the CN to the appropriate value, so I had an inconsistency between the keystore on the first host and those on the other nodes. I tried the whole process again with the appropriate CN (I had snapshots from before the change), and this time it worked! Just to be sure it's not an accident, I will do the whole thing again.

Thank you very much for your help,
Guy
12-19-2016 01:21 PM
Hello,

I changed the alias to be the server name where the scm server is running, and I still have the same problem. The Cloudera Management Service does not start (or is not communicating), and the log shows "certificate_unknown" messages.

Here is the result of the find command on the scm server node:

/etc/pki/java/cacerts
/etc/pki/ca-trust/extracted/java/cacerts
/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hue/build/env/lib/python2.6/site-packages/boto-2.42.0-py2.6.egg/boto/cacerts
/opt/cloudera/parcels/CDH-5.6.0-1.cdh5.6.0.p0.45/lib/hue/build/env/lib/python2.6/site-packages/boto-2.38.0-py2.6.egg/boto/cacerts
/usr/java/jdk1.8.0_111/jre/lib/security/cacerts
/usr/java/jdk1.7.0_67-cloudera/jre/lib/security/cacerts
/usr/java/jdk1.6.0_31/jre/lib/security/cacerts

I was using the one under java 1.7.0_67.

Thanks,
Guy
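To double-check which certificates (and CNs) are actually in that truststore, one option is the following (a sketch, assuming the default "changeit" store password):

# list certificate subjects in the JDK 1.7 cacerts used above
/usr/java/jdk1.7.0_67-cloudera/bin/keytool -list -v -keystore /usr/java/jdk1.7.0_67-cloudera/jre/lib/security/cacerts -storepass changeit | grep -i owner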