Member since: 01-19-2015
Posts: 25
Kudos Received: 0
Solutions: 0
12-08-2017 04:37 AM

Hi, AFAIK the `yarn logs` command can be used to view the aggregated logs of finished YARN applications. For applications that have not finished yet, you had to either use the YARN UI or ssh to the node managers. However, on the Hortonworks page I see that their `yarn logs` already works for running apps as well: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_yarn-resource-management/content/ch_yarn_cli_view_running_applications.html Is there any plan/way to make it work for Cloudera as well? Thanks!
Labels:
- Apache YARN
02-09-2015 09:37 AM

Good read! "Administrators are sometimes surprised that modifying /etc/hadoop/conf and then restarting HDFS has no effect." Oh yes. OK, I think I understand now where the server-side configuration comes from. I can find it in CM, although I still have a bit of a problem finding it on the filesystem. When I go to my NameNode I see this:

root@node9:/var/run/cloudera-scm-agent/process# ls -qlrt | grep NAME
drwxr-x--x 3 hdfs hdfs 420 Sep 28 12:40 4035-hdfs-NAMENODE
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:17 4091-hdfs-NAMENODE-refresh
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:18 4092-hdfs-NAMENODE-monitor-decommissioning
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:24 4097-hdfs-NAMENODE-refresh
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:25 4098-hdfs-NAMENODE-monitor-decommissioning
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:25 4100-hdfs-NAMENODE-refresh
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:30 4149-hdfs-NAMENODE-refresh
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:30 4150-hdfs-NAMENODE-monitor-decommissioning
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:30 4152-hdfs-NAMENODE-refresh
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:46 4167-hdfs-NAMENODE-createdir
drwxr-x--x 3 hdfs hdfs 420 Sep 28 13:50 4185-hdfs-NAMENODE
drwxr-x--x 3 hdfs hdfs 420 Jan 26 16:19 4785-hdfs-NAMENODE-refresh
drwxr-x--x 3 hdfs hdfs 420 Jan 26 16:19 4787-hdfs-NAMENODE-monitor-decommissioning

So, which of these contains the hdfs-site.xml of my currently running NameNode?
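In case it helps anyone later, here is a rough sketch of how one might pick out the most recent plain NAMENODE process directory. This is my own heuristic, not an official CM mechanism, and `latest_nn_dir` is just a hypothetical helper name; the exact layout can vary by CM version:

```shell
# Sketch: CM creates a fresh numbered directory each time the role (re)starts,
# so the highest-numbered plain *-hdfs-NAMENODE directory (i.e. without a
# -refresh or -monitor-decommissioning suffix) should hold the live config.
latest_nn_dir() {
  ls -d "$1"/*-hdfs-NAMENODE 2>/dev/null | sort -V | tail -n 1
}

# e.g. latest_nn_dir /var/run/cloudera-scm-agent/process
```

The glob deliberately ends in `-hdfs-NAMENODE`, so the `-refresh` and `-monitor-decommissioning` directories are never matched.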
02-06-2015 01:40 PM

Got it. We will go this way. Ironically, it turned out that due to some regulatory requirements, downloading raw data from our system shouldn't be too easy anyway, so we are going with the good old "it's not a bug, it's a feature" 😉 FYI, I also tried this:

beeline -u jdbc:hive2://hname:10000 -n bla -p bla -f query.q > results.txt

but it didn't do much, it just hung. Maybe hive2 (or beeline?) isn't powerful enough either. Thanks for all the clarifications!
02-06-2015 01:08 PM

While I am here: you could also bold the subjects of unread messages in the inbox (or mark them somehow). It said I had "2 unread messages", but I had no idea which ones they were...
02-06-2015 01:06 PM

Well, this is embarrassing. I just saw in my cluster's Cloudera Manager that security (dfs.permissions) is false... That explains everything. HOWEVER: the reason I was confused is that I couldn't see this property set in any of the conf files (grep dfs.permissions /etc/hadoop/conf/*.xml), and according to the documentation the default value is true. Could anyone please let me know where this property gets overridden?
01-28-2015 11:23 AM

I see. Maybe then there should also be some option like "execute and save to HDFS", where Hue doesn't dump the results to the browser but puts them in one file in HDFS directly, so the user can get them by other means? I recently managed to store results and then download a 600 MB CSV file from HDFS using Hue, and it kind of worked (9 million lines, a new record). Although a few minutes later the service went down (not sure if it was because of that, or because I had just started presenting Hue to my boss), so I'm not sure this would hold up. I guess we are going to instruct users to always use a LIMIT clause on their queries, telling them that this is to avoid overloading our servers (which is technically true). Thanks for your help!
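For what it's worth, that LIMIT policy could even be enforced mechanically rather than by instruction alone. A minimal sketch; `check_limit` is purely hypothetical, not a Hue or Hive feature:

```shell
# Sketch: refuse to submit a query that lacks a LIMIT clause.
check_limit() {
  case "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')" in
    *LIMIT*) return 0 ;;
    *) echo "refused: add a LIMIT clause to avoid huge result sets" >&2
       return 1 ;;
  esac
}

# e.g. check_limit "SELECT * FROM logs LIMIT 100" && echo "submitting..."
```

A wrapper like this could sit in front of whatever script users run, so the policy is not just a verbal convention.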
01-28-2015 11:13 AM

How? I see there is some "options" and an arrow next to it, but when I click it, it just scrolls me to the top of the page and nothing happens.