Member since: 11-18-2014
Posts: 196
Kudos Received: 18
Solutions: 8
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 8628 | 03-16-2016 05:54 AM |
| | 3974 | 02-05-2016 04:49 AM |
| | 2828 | 01-08-2016 06:55 AM |
| | 16264 | 09-29-2015 01:31 AM |
| | 1714 | 05-06-2015 01:50 AM |
02-23-2021 04:41 AM
Yes, you can download the HDFS client configuration from Cloudera Manager, but that is not always possible, for example when you work in a different department or run into a bureaucratic issue. And whenever you change the HDFS configuration, you have to download it again, which is not a scalable approach in large environments. The best solution is to work on the cluster itself (on a gateway host if possible), but if you have external Flume agents, I don't think there is a clean, scalable solution.
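For what it's worth, the re-download step can at least be scripted against the Cloudera Manager REST API, which serves a service's client configuration as a zip. This is only a sketch: the host, credentials, API version, cluster name, service name, and target directory below are all placeholders you would need to replace for your environment.

```shell
# Sketch: fetch the HDFS client configuration zip from the Cloudera Manager API.
# cm.example.com, admin:admin, v19, Cluster1, hdfs and ./hdfs-conf are placeholders.
CM_HOST=cm.example.com
curl -s -u admin:admin \
  "http://${CM_HOST}:7180/api/v19/clusters/Cluster1/services/hdfs/clientConfig" \
  -o hdfs-clientconfig.zip
# Unpack into a local config directory for the external agent to point at.
unzip -o hdfs-clientconfig.zip -d ./hdfs-conf
```

Running something like this on a schedule on the external Flume hosts would at least keep the copied configuration from going stale after an HDFS change, though it does not remove the underlying coupling.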
10-28-2020 04:48 AM
Hi, what I have seen is that the share option only gives you read or read+modify permission; there is nothing such as execute. If I give read+modify, other users should be able to run the Oozie workflow, but I have seen that it does not happen, as the permission on the underlying HDFS folder for the workflow is only for my user and does not get modified:

drwxrwx--- - kuaksha hue 0 2020-10-28 10:42 /user/hue/oozie/workspaces/hue-oozie-1520605312.96

Please elaborate and help. Regards, Akshay
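For reference, this is roughly how one could inspect and extend the permissions on that workspace directory with HDFS ACLs, independently of the Hue share option. This is a sketch only: the user name otheruser is a placeholder, and it assumes ACLs are enabled on the NameNode (dfs.namenode.acls.enabled=true).

```shell
# Sketch: grant a second user read+execute on the workflow workspace via an HDFS ACL.
# "otheruser" is a placeholder; requires dfs.namenode.acls.enabled=true.
hdfs dfs -setfacl -R -m user:otheruser:r-x \
    /user/hue/oozie/workspaces/hue-oozie-1520605312.96
# Verify the resulting ACL entries on the directory.
hdfs dfs -getfacl /user/hue/oozie/workspaces/hue-oozie-1520605312.96
```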
09-03-2019 08:39 AM
Hi, for me it's happening while fetching YARN logs with the yarn logs command. The /tmp/logs directory has the permissions below:

drwxrwxrwt - yarn yarn

But inside the logs directory, the per-user directories have these permissions:

drwxrwxrwt - user/owner_of_directory yarn

Your guidance will be helpful. Thanks,
12-19-2018 03:59 PM
Hi Ben, but if we cannot find that file, what should we do? Why are these files missing? Thanks, Mo
12-10-2018 02:07 AM
Hi. How can you find your workflow in the list of all workflows? They are nameless...
09-27-2018 06:55 AM
@ramarov Thank you for the suggestion! We'll keep it in mind for our future sprint updates.
12-17-2017 12:05 PM
Thank you for your answer, Gautam Gopalakrishnan. And as for your question, Alina Gherman: I also thought it was a problem when the gateway is always N/A, and I tried to fix it, but without success.
11-09-2017 09:42 AM
Hi, as usual, it depends on what you need... The Cloudera VM has one node with everything installed, which lets you see how it all fits together. A fairly simple cluster could have 2-3 VMs for Cloudera Manager and the masters, plus at least 3 VMs for workers. As I said, and as you can imagine, it depends on what you want to test on it. Believe me, you really need a Cloudera admin to get what you want... In another thread I referred to this blog. I hope this will help you.
04-18-2017 11:10 PM
I have an error after that step:

hive> insert overwrite table hbase_table_2 select concat(ename,":",eno) as key, ename, eno, eage, esalary from emp;
Query ID = usr_20170419111919_299ad34e-a846-45fa-8334-84291d3dc9f1
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
Hadoop job information for Stage-0: number of mappers: 0; number of reducers: 0
2017-04-19 11:20:15,620 Stage-0 map = 0%, reduce = 0%
Ended Job = job_local1415977764_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:8080/
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-0: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec