Member since: 08-29-2013
Posts: 79
Kudos Received: 37
Solutions: 20
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7987 | 07-28-2015 12:39 PM
 | 3217 | 07-22-2015 10:35 AM
 | 4463 | 03-09-2015 06:45 AM
 | 6376 | 10-28-2014 09:05 AM
 | 15251 | 10-24-2014 11:49 AM
01-22-2014
08:25 AM
Hi Sergey, Is this a free or trial-licensed CM deployment? Does this deployment lack the "Reports Manager" role, or are any other roles missing from the 'mgmt1' (Management Services) section?
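As a quick check, the Cloudera Manager API can enumerate the roles currently assigned to the Management Service; a minimal sketch, assuming the default port 7180 and placeholder admin credentials and hostname:

$ curl -u admin:admin 'http://cm-host:7180/api/v1/cm/service/roles' | grep '"type"'

The output should include a "REPORTSMANAGER" entry (role type name from memory) if that role exists.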
01-22-2014
08:24 AM
Hi Sergey, Do you perhaps lack a role of type "Balancer" within that HDFS service? A Balancer role must be present for that option to be available. Check HDFS > Instances to see whether a Balancer exists. If not, use the Add button to assign one to a host (which host you choose is not terribly critical). A command-line check is sketched below.
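If you'd rather check without the UI, the CM API can list the HDFS roles and their types; a minimal sketch, assuming the default port 7180 and placeholder credentials, cluster name 'Cluster1', and service name 'hdfs1':

$ curl -u admin:admin 'http://cm-host:7180/api/v1/clusters/Cluster1/services/hdfs1/roles' | grep '"type"'

If no "BALANCER" appears in the output, the role has not been assigned to any host.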
01-06-2014
02:21 PM
2 Kudos
Hi Matthew, The postgres instance in question may be in quick-shutdown mode, where it honors only existing connections and does not service new ones. The postgres instance you are probably using is 'cloudera-scm-server-db', not plain postgresql. Check:

# service cloudera-scm-server-db status
# psql -U scm -p 7432

then provide the password as found in /etc/cloudera-scm-server/db.properties to log in if desired.
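If it helps, you can pull the password out of that file and confirm the embedded database is actually listening; a quick sketch (the exact property name may vary by release, so check your own db.properties):

# grep -i password /etc/cloudera-scm-server/db.properties
# netstat -plnt | grep 7432

If nothing is listening on 7432, the embedded database itself is down rather than merely refusing new connections.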
11-19-2013
07:30 AM
4 Kudos
Hi there, The "[Errno 99] Cannot assign requested address" points to hostname resolution rather than a port conflict. Please run this on the node:

$ python -c 'import socket; print socket.getfqdn(), socket.gethostbyname(socket.getfqdn())'

This should return the fully-qualified domain name as well as the IP address, confirming forward and reverse name resolution. Sanity-check this output against:

$ dig NexusHadoopVM
$ dig -x [IP returned in above dig command]

You may also wish to check your /etc/hosts file to make sure everything is OK there. Regards, --
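For reference, a well-formed /etc/hosts entry lists the IP, then the fully-qualified name, then the short name; the address and domain below are purely hypothetical:

192.168.1.10   NexusHadoopVM.example.com   NexusHadoopVM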
10-30-2013
02:43 AM
1 Kudo
Did you apply this new dfs value to ALL Datanode roles globally, or did you set it as an override on just one Datanode (perhaps you have a node with fewer or more data volumes, and you wanted it to inherit different values than the rest of the DNs)? If you've set this as a "one-off" configuration, look for an override [1] within HDFS > Configuration > View & Edit for the property you've altered.

More to the point, this configuration is picked up for the scope in which you applied it. If applied globally, you'd find after restarting the Datanodes that the DNs are using the new values. A new hdfs-site.xml is generated each time a service starts, and these can be referenced by going to Services > hdfs > Instances > [click on a Datanode from the list] > Processes. On that page, expand the "Show" link near the middle of the page under "Configuration Files/Environment". This gives links to ALL the configuration files used for that specific start of the Datanode process. The hdfs-site.xml should reflect the dfs.data.dir settings you applied, if it is intended to be a recipient of that property/value. If it does not, there could be some other explanation, such as the use of Role Config Groups or unintended overrides. A shell-side check is sketched below.

[1] http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Managing-Clusters/cmmc_mod_configs.html?scroll=cmug_topic_5_3_1_unique_1__title_149_unique_3
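On the Datanode host itself, the agent writes each process's generated configuration under its process directory, so you can also verify the value from a shell; a sketch assuming the default agent layout (the directory naming is from memory, so adjust to what you find on your node):

# grep -A1 'dfs.data.dir' /var/run/cloudera-scm-agent/process/*DATANODE*/hdfs-site.xml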
09-17-2013
03:04 PM
1 Kudo
Hi David, The credentials you provide at the time of initial install are used only at that point in time; they are not stored, so you can revoke, remove, or alter them afterward. If you use the "Host Upgrade Wizard" to upgrade the agents on each node the next time you upgrade Cloudera Manager, you'll be prompted anew for the credentials needed to perform the upgrade operation. At that time you could provide an account with passwordless sudo for one-time use, as sketched below. Regards, -- Mark Schnegelberger
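For reference, granting an account passwordless sudo takes a single sudoers line; 'clouderauser' below is a hypothetical account name, and the file should always be edited through visudo:

# visudo
clouderauser ALL=(ALL) NOPASSWD: ALL

You can then revoke the grant by removing that line once the upgrade completes.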
09-17-2013
06:29 AM
1 Kudo
One point of clarification from your first post, since you're setting parcel directories for two distinct uses:

1. Parcels originate in a Remote Repository (e.g. archive.cloudera.com, or your own mirror web server).
2. The Cloudera Manager server retrieves parcels and checksums from the Remote Repository and hosts them in /opt/cloudera/parcel-repo. This location is configurable via CM UI > Hosts > Parcels > Edit Settings > Local Parcel Repository Path.
3. Cluster nodes retrieve parcels from the CM server's local repository. Each node uses /opt/cloudera/parcel-cache and /opt/cloudera/parcels. These locations are configurable via /etc/cloudera-scm-agent/config.ini on each respective node (see the sketch after this list).

Visualized, the flow is: Remote Repository -> CM server (/opt/cloudera/parcel-repo) -> cluster nodes (/opt/cloudera/parcel-cache, /opt/cloudera/parcels).
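A minimal look at the agent-side setting; parcel_dir is the key name referenced later in this thread, while any cache-related key should be confirmed against the config.ini shipped with your agent version:

# grep -i parcel /etc/cloudera-scm-agent/config.ini
parcel_dir=/opt/cloudera/parcels

After changing it, restart the agent with: # service cloudera-scm-agent restart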
09-17-2013
06:12 AM
Thanks for your report. This is a confirmed issue, and a very near-term release will address parcel_dir= not properly enumerating parcels when a symlink is present. As you likely already discovered, the short-term workaround is to change parcel_dir= to the symlink's end target and restart the agent (a sketch follows); we'll get this resolved ASAP. Best, --
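A sketch of that workaround, where /data/parcels stands in for whatever your symlink actually resolves to:

$ readlink -f /opt/cloudera/parcels
/data/parcels

Then set parcel_dir=/data/parcels in /etc/cloudera-scm-agent/config.ini and restart:

$ sudo service cloudera-scm-agent restart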
09-10-2013
09:36 PM
Hi Manuel, Welcome to the Cloudera online community. This error in Cloudera Manager 4.7.0 was discovered shortly after release, and CM 4.7.1 was rapidly released to address it. The Release Notes discuss the item that was fixed to remove this error. I would advise upgrading directly to the 4.7.1 dot release, which will fix the error. Respectfully, Mark S. | Cloudera
09-10-2013
06:39 AM
The cloudera-manager-installer.bin uses an ncurses, text-based menu when invoked from the command line, though if you have GNOME or a similar desktop environment it will display properly there as well.
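For reference, a typical invocation from a shell, run with root privileges:

$ chmod u+x cloudera-manager-installer.bin
$ sudo ./cloudera-manager-installer.bin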