Member since: 09-24-2015
Posts: 33
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3608 | 12-13-2016 01:46 AM
 | 5258 | 08-31-2016 12:20 AM
 | 3678 | 10-07-2015 09:19 AM
04-24-2018
02:33 AM
Thanks, I found the cause of this issue. It is specific to my code validation; there is no issue with the CDH cluster. So I am closing this issue. Thanks
04-23-2018
10:16 PM
I am getting the error below when creating the directory /user/testuser/data through the API. I created a shell user testuser and changed the owner of /user/testuser. I can create the directory successfully through the shell command hdfs dfs -mkdir /user/testuser/data, but when I try to create /user/testuser/data via mkdirs through the API, it fails with the error below.

Error: Java::OrgApacheHadoopSecurity::AccessControlException: Permission denied: user=testuser, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

drwxrwxrwx - testuser testuser 0 2018-04-23 21:51 /user/testuser
drwxrwxrwx - testuser1 testuser1 0 2018-04-18 04:17 /user/testuser1
drwx------ - hbase hbase 0 2018-04-19 00:25 /hbase
drwxrwxrwt - hdfs supergroup 0 2017-08-10 11:18 /tmp
drwxrwxrwx - hdfs supergroup 0 2018-04-23 21:51 /user

Please let me know if any further information is required to debug this issue. Thanks, Khirod
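For reference, a minimal sketch (assuming the plain Hadoop Java client; namenode-host below is a placeholder) of issuing the same mkdirs explicitly as testuser with an absolute path, which is a useful first check when this AccessControlException appears. It is not the resolution from this thread, where the bug turned out to be in my own validation code.

MkdirAsUser.java
------------------------------
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class MkdirAsUser {
    public static void main(String[] args) throws Exception {
        // Run the call as testuser; otherwise the client authenticates as the
        // OS user running the process, which may lack WRITE on the parent inode.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("testuser");
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder host
            try (FileSystem fs = FileSystem.get(conf)) {
                // Absolute path, so the directory cannot silently resolve
                // relative to a different working directory such as /user.
                fs.mkdirs(new Path("/user/testuser/data"));
            }
            return null;
        });
    }
}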
12-13-2016
01:46 AM
I found a trick to get the zookeeper_namespace. I wrote a small piece of code that connects to ZooKeeper, fetches the children of each candidate znode, and matches them against my HiveServer2 host and port number. For example, from the output below I have two namespaces for Hive, but my server matches the first one, which is the one I need 🙂

{:req_id=>3, :rc=>0, :children=>["serverUri=lnxcdh23.testme.org:10000;version=1.1.0-cdh5.4.4;sequence=0000000001"], ...
{:req_id=>2, :rc=>0, :children=>[], ...

Thanks, Khirod
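A minimal Java sketch of the same trick, assuming the stock Apache ZooKeeper client; zk-host is a placeholder, and the two namespace names are the ones from this thread. It lists the children of each candidate namespace znode and matches them against the known HiveServer2 host and port.

FindHiveNamespace.java
------------------------------
import java.util.List;

import org.apache.zookeeper.ZooKeeper;

public class FindHiveNamespace {
    public static void main(String[] args) throws Exception {
        // Placeholder quorum host, 30s session timeout, no-op watcher.
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> { });
        // Each candidate namespace is a root-level znode whose children are
        // HiveServer2 entries like "serverUri=host:10000;version=...;sequence=...".
        for (String ns : new String[] {"hiveserver2", "hive_zookeeper_namespace_hive2"}) {
            if (zk.exists("/" + ns, false) == null) {
                continue; // skip namespaces that are not registered
            }
            List<String> children = zk.getChildren("/" + ns, false);
            for (String child : children) {
                if (child.contains("lnxcdh23.testme.org:10000")) {
                    System.out.println("Use zooKeeperNamespace=" + ns);
                }
            }
        }
        zk.close();
    }
}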
12-12-2016
09:09 AM
Or is zooKeeperNamespace = { the Hive service name we chose at installation time, which is by default hiveserver2 }?
12-12-2016
08:59 AM
I need the zooKeeperNamespace for a JDBC connection. The default value is "hiveserver2". Can I get this value from some property in hive-site.xml? I found the property below, but its value is different, and with it I could not connect through JDBC; it throws "Unable to read HiveServer2 uri from ZooKeeper". Whereas when I use hiveserver2 it works fine. Let me know if any further information is needed.

<property>
  <name>hive.zookeeper.namespace</name>
  <value>hive_zookeeper_namespace_hive2</value>
</property>

Thanks, Khirod
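For reference, a minimal sketch of the HiveServer2 JDBC URL with ZooKeeper service discovery, assuming the standard Hive JDBC driver; zk1..zk3 are placeholder quorum hosts, and zooKeeperNamespace is the value being discussed in this thread.

HiveZkJdbc.java
------------------------------
import java.sql.Connection;
import java.sql.DriverManager;

public class HiveZkJdbc {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // serviceDiscoveryMode=zooKeeper makes the driver pick a live
        // HiveServer2 registered under /<zooKeeperNamespace> in ZooKeeper,
        // instead of connecting to a fixed host.
        String url = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/default;"
                + "serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";
        try (Connection conn = DriverManager.getConnection(url, "testuser", "")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}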
08-31-2016
03:02 AM
Is that a custom jar file that you want to put on a separate file system on each node? If yes, you can distribute it through a parcel. Let me know if any further information is needed. Thanks, Khirod
08-31-2016
12:20 AM
Log in to Cloudera Manager => select HDFS => Configuration => click the Log menu option (bottom left). Here are the properties you may need to update:

1) Maximum Audit Log File Size => the size you want to set
2) Number of Audit Logs to Retain => the number of backup files you want to retain

Thanks, Khirod
08-30-2016
11:52 PM
In CDH-5.7.2-1.cdh5.7.2.p0.18 the symlink htrace-core4.jar exists in ../CDH/lib/hadoop/client. I am not sure why it is version-specific this time; I would expect htrace-core.jar instead of htrace-core4.jar, as the practice has been to use a generic jar name that points to the latest available version. Could somebody from the support team please check, and let me know if any further information is required. Regards, Khirod
02-16-2016
10:49 AM
Thank you, Benjamin, for the update. I would guess Hortonworks has some way to create custom stacks and deploy them through Ambari. I checked under Register Version, but I could not find any option to add a custom stack and its remote repository. Please help. Regards, Khirod
02-16-2016
09:20 AM
2 Kudos
What are the alternatives to Cloudera parcels in Hortonworks? Any links or docs on this topic would help. Thanks, Khirod
10-20-2015
11:40 AM
Thanks, Sue, for the help. I will try Hadoop Streaming and update here on how it goes. -Khirod
10-19-2015
10:49 PM
Sure thing. Here is the new thread: http://community.cloudera.com/t5/Batch-Processing-and-Workflow/Execute-Shell-script-through-oozie-job-in-all-node/m-p/33136#U33136 Regards Khirod
10-19-2015
10:48 PM
I tried to execute a shell script through an Oozie job, but it seems it was executed only on the jobTracker host, not on the other nodes. I expect the script to be executed on all the nodes. Do I need any other specific configuration, or have I missed anything here?

workflow.xml
------------------------------
<workflow-app name="script_oozie_job" xmlns="uri:oozie:workflow:0.3">
    <start to='Test' />
    <action name="Test">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>CopyFiles.sh</exec>
            <argument>${argument1}</argument>
            <file>hdfs://nameNode-host:8020/user/oozie/script/script_oozie_job/CopyFiles.sh#CopyFiles.sh</file>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Script failed</message>
    </kill>
    <end name='end' />
</workflow-app>

job.properties
---------------------------------
nameNode=hdfs://nameNode-host:8020
jobTracker=jobTracker-host:8032
queueName=default
argument1=""
oozie.wf.application.path=hdfs://nameNode-host:8020/user/oozie/script/script_oozie_job

Regards
-Khirod
10-19-2015
12:20 AM
That is what I thought; an Oozie job may fulfill my requirement. I will update here on how it goes!! Regards Khirod
10-16-2015
04:20 AM
Some verification is required:

1) Is your Kerberos workstation configured with the KDC properly?
2) Check the NameNode host and validate connectivity from the client node.
3) Refresh the Kerberos credential session, or try using a keytab file.

I hope the above verification helps hunt down your exact issue. Regards -Khirod
10-15-2015
09:35 PM
Just curious to know: is parcel_env.sh executed by a shell, or is it only sourced to set up the required environment variables? My requirement is that I have some config files and some jar files which need to be configured and placed in the proper path. I could perhaps do it through an Oozie job executor, though I am not sure about that idea; if something is possible within the parcel distribution alone, that would be great. Regards Khirod
10-12-2015
04:45 AM
In parcel.json we have a scripts section, where I have defined all the required class paths. My question is: is this script called by Cloudera Manager at the time of distributing parcels? Or is there some other way/process to call this script? Not sure why, but in my case the defined script is never executed by Cloudera Manager when deploying my custom parcel.

"scripts": {
  "defines": "myparcel_env.sh"
},

Regards Khirod
10-08-2015
12:29 PM
Hi GautamG, I have already gone through the given link. I am just curious to know: is there any way to validate whether the PATH is set up properly before I hit a ClassNotFoundException? Regards Khirod
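One hedged way to validate this up front, assuming a plain JVM on the node: probe the class by name and print which jar it was loaded from, instead of waiting for a job to fail. The default class name below is just an example; pass whatever class your parcel is expected to provide.

ClasspathProbe.java
------------------------------
import java.security.CodeSource;

public class ClasspathProbe {
    public static void main(String[] args) {
        // Probe a class by name before a job needs it.
        String name = args.length > 0 ? args[0] : "org.apache.hadoop.fs.FileSystem";
        try {
            Class<?> c = Class.forName(name);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            // CodeSource can be null for bootstrap classes; guard for that.
            System.out.println("found " + name
                    + (src != null ? " in " + src.getLocation() : ""));
        } catch (ClassNotFoundException e) {
            System.out.println(name + " is NOT on the classpath");
        }
    }
}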
10-07-2015
02:18 PM
My remote custom parcel was distributed and activated successfully, but in the parcel usage section it shows "ParcelName ( Active, 0 )". Did I miss anything in the configuration? How can I check? Please help. Regards Khirod
10-07-2015
12:33 PM
Hi eggo, I think the issue is with "Credentials cache file '/tmp/krb5cc_10029' not found". You may follow the steps below to create a Kerberos credentials cache; I assume your Kerberos workstation is configured properly.

1) Check whether there is an active session for the credentials cache, using the command below:
klist
2) If no active session exists for your host, then create one:
kinit <user>@<realm name>

Then try to log in. -Khirod
10-07-2015
12:07 PM
Hi, I have remotely set up a custom parcel and activated it successfully. I am curious to know how and when Cloudera Manager executes the ../meta/parcel_env.sh file. In my case I tried to set some environment variables in parcel_env.sh, but I could not find them afterwards. Please help. Kind Regards Khirod
10-07-2015
09:19 AM
So finally the remote parcel fulfills my requirements. Thanks, GauthamG, for the help and guidance. Regards Khirod
10-05-2015
11:32 PM
Thanks, GoutamG, for the quick reply. So you suggest I should stick to the suffixes mentioned in the URL? Is there any issue or side effect if I choose a different suffix? Please suggest. Regards Khirod
10-05-2015
10:37 PM
Hi, please advise: is the parcel distro suffix fixed for a custom parcel? For example, for a Red Hat 6 OS system: MYPARCEL-1.0.0-khirod-01-el6.parcel. Or can I use a custom distro suffix in the above case, like MYPARCEL-1.0.0-khirod-01-<something_else>.parcel? Kind Regards Khirod
10-05-2015
12:56 AM
Hi TrevorG, I think an underscore should not be a problem; check the output given below.

hdfs dfs -mkdir -p /user/hive/warehouse/original_access_logs
hdfs dfs -ls /user/hive/warehouse
Found 1 items
drwxr-xr-x - eip hive 0 2015-10-05 13:23 /user/hive/warehouse/original_access_logs

-Khirod
10-04-2015
09:17 AM
One more thing: I think a remote parcel would be helpful. Is there any CDH upgrade impact on this? Thanks Khirod
10-04-2015
09:04 AM
Hi Goutham, I tried with a custom parcel, but as you suggested it will always require some extra effort on upgrade, and it feels risky to play with the main CDH parcel. One more burden is that if I just want to add one required jar file in between, I have to create a new parcel again, which is not feasible.

Please also advise on the case where I only have some jar files that need to be copied to "/opt/cloudera/parcels/CDH/lib/hadoop-yarn/lib" on each node. Say I have a cluster of 4 nodes and I want all the jar files in "/opt/cloudera/parcels/CDH/lib/hadoop-yarn/lib" on each node. Initially I thought of scp-ing to each node, but that does not seem an effective way. So I am guessing there may be something, or some exposed API, to transfer these files to Cloudera Manager so that the manager takes care of putting them in their respective place.

The CLASS_PATH location is also a good idea, but it is not specific to the YARN server only; I need this jar on each and every node. Please suggest and guide.

Regards Khirod