Member since: 07-23-2014
Posts: 40
Kudos Received: 1
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2711 | 09-05-2014 02:01 PM |
| | 1337 | 08-08-2014 04:30 PM |
| | 1131 | 07-31-2014 02:14 PM |
09-05-2014
02:01 PM
I tested it and it worked! Thanks a lot! What I used is the function below: create_hdfs_tmp(self) — Create the /tmp directory in HDFS with appropriate ownership and permissions. Returns: Reference to the submitted command. Since: API v2
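For reference, a minimal sketch of invoking that call through the cm_api Python client is below; the CM host, credentials, cluster name, and service name are placeholders, not values from this thread.

```python
# Minimal sketch (assumed host, credentials, cluster, and service names) of
# invoking create_hdfs_tmp() through the cm_api Python client.
from cm_api.api_client import ApiResource

api = ApiResource("cm-host.example.com", username="admin", password="admin")
cluster = api.get_cluster("cluster1")        # assumed cluster name
hdfs = cluster.get_service("HDFS-1")         # assumed HDFS service name

cmd = hdfs.create_hdfs_tmp()                 # returns a reference to the submitted command
cmd = cmd.wait()                             # block until the command finishes
print(cmd.success)
```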
09-05-2014
01:51 PM
Probably this is the answer. I will check this... create_hdfs_tmp
09-05-2014
01:50 PM
Which function in this API? Sorry, I couldn't find the right one.
09-05-2014
01:39 PM
Do you know how to use the API to run the command? I was trying to use the API to automate CDH installation. What's the best way to do that? Thanks a million!
09-05-2014
12:10 PM
I was trying to follow the example (http://blog.cloudera.com/blog/2012/09/automating-your-cluster-with-cloudera-manager-api/) to create and start the MR service. The TaskTracker can be started, but the JobTracker can't be started for the file permission reason below. My HDFS service uses 'simple' permission. Can anyone tell me whether I need further configuration to grant user=mapred write access to the folder using the CM API? Thanks a lot!
2014-09-05 12:04:44,161 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://dhcp-corp-233.sc-cig-eng.tst:8020/tmp/mapred/system) because of permissions.
2014-09-05 12:04:44,162 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
2014-09-05 12:04:44,163 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
Labels:
- Apache Hadoop
- Cloudera Manager
- HDFS
- Security
08-25-2014
01:14 PM
I am wondering if there is any sample code for installing a client parcel using the Cloudera Manager API. All I can find so far is just the REST API list. Is there any Python-based API list? Thanks a lot!
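For context, the usual download / distribute / activate flow looks roughly like the sketch below with the cm_api Python client; the host, credentials, cluster name, and parcel product/version are assumed placeholders, and the polling loop is only illustrative.

```python
# Minimal sketch (assumed host, credentials, and parcel coordinates) of
# installing a parcel with the cm_api Python client.
import time
from cm_api.api_client import ApiResource

api = ApiResource("cm-host.example.com", username="admin", password="admin")
cluster = api.get_cluster("cluster1")               # assumed cluster name

parcel = cluster.get_parcel("MYPARCEL", "1.0.0")    # assumed product and version

def wait_for_stage(stage):
    # Poll the parcel until it reaches the requested stage.
    p = cluster.get_parcel(parcel.product, parcel.version)
    while p.stage != stage:
        time.sleep(5)
        p = cluster.get_parcel(parcel.product, parcel.version)

parcel.start_download()
wait_for_stage("DOWNLOADED")

parcel.start_distribution()
wait_for_stage("DISTRIBUTED")

parcel.activate()
wait_for_stage("ACTIVATED")
```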
Labels:
- Cloudera Manager
08-08-2014
04:30 PM
I resolved my problem by finding this page: https://github.com/cloudera/cm_ext/wiki/Plugin-parcel-environment-variables
08-07-2014
11:43 AM
In my own parcel, I was trying to add some client jar libraries to the Hadoop Java classpath. First, I appended my jar path to "HADOOP_CLASSPATH"; even though my jar shows up in "hadoop classpath", the jars can't be found when running MR jobs. Then I manually put my jar into "/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop-mapreduce" and it works. So I tried to extend the jar path through the "CDH_MR2_HOME" variable in my parcel.json. Then I realized that on each service restart the "CDH_MR2_HOME" variable is first set by my parcel and then overridden by the CDH parcel, so such an environment variable path can't be extended this way. For MR2, is there any environment variable that can be extended for MR jobs, especially from a parcel?
08-01-2014
03:46 PM
I added several properties in the text box of "Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml". After restarting the HDFS services, the new properties are not in core-site.xml on any of the hosts. I checked /etc/hadoop/conf/core-site.xml on each host and there is no change, while the core-site.xml from "Download Client Configuration" reflects the changes. What's wrong?
07-31-2014
04:27 PM
I would like to update core-site.xml and replicate the changes across the whole cluster. What's the usual way to do that in Cloudera? Is the safety valve configuration on the HDFS service the only option? If yes, could you show an example of how to fill it in in the web console? Do I just copy the entire <property> portion for the update?
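As an illustration only, the sketch below shows the raw <property> XML format the safety valve expects and how it might be pushed with the cm_api Python client instead of the web form; the host, credentials, service name, and especially the "core_site_safety_valve" key name are assumptions, so verify the real key with get_config(view="full") first.

```python
# Minimal sketch (assumed host, credentials, service name, and config key) of
# setting the core-site.xml safety valve via the cm_api Python client.
# The same raw <property> XML is what would go into the web-console text box.
from cm_api.api_client import ApiResource

SNIPPET = """
<property>
  <name>hadoop.example.setting</name>  <!-- placeholder property -->
  <value>true</value>
</property>
"""

api = ApiResource("cm-host.example.com", username="admin", password="admin")
hdfs = api.get_cluster("cluster1").get_service("HDFS-1")   # assumed names

# "core_site_safety_valve" is a guessed key name; list the real config keys
# with hdfs.get_config(view="full") before relying on it.
hdfs.update_config({"core_site_safety_valve": SNIPPET})
```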
Labels:
- HDFS