Member since: 04-09-2018 · Posts: 6 · Kudos Received: 0 · Solutions: 2

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1131 | 04-10-2018 03:09 PM |
|  | 1922 | 04-09-2018 05:01 PM |
04-10-2018 03:09 PM
Actually, I think I have now found the stdout logs manually: I copied the application ID from the Hadoop management web UI and ran the command yarn logs -applicationId [applicationId] on the command line, and there I found what I was looking for (a better log 🙂). Right after the part I was looking for, the log said: End of LogType:stdout
***********************************************************************
Container: container_e03_1523366722691_0006_01_000002 on ubuntu_45454
LogAggregationType: AGGREGATED
Can I not find this type of log in the web interface, then? Or do I need to check a different web interface? Edit: it looks like the Hadoop web interface only knew about attempt no. 1 of the MapReduce job in the job history (whereas the excerpt above shows container 000002). When I manually change the container ID in the URL to 000002 (e.g. ... container_e03_1523366722691_0006_01_000002/job_1523366722691_0006/admin), I can also see the log in the web interface. Confusing 🙂 Anyway, it looks like I somehow passed the wrong arguments to Sqoop, even though I'm quite sure it worked this way in the console. Problem solved (for now).
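For anyone retracing the steps above: yarn logs prints all aggregated container logs for the application to stdout, so a common trick is to save them to a file and cut out just the section between the LogType markers. This is a minimal sketch; the heredoc mimics the excerpt from this thread (its stdout line is made up for illustration), and on a real cluster you would replace it with the actual yarn logs -applicationId call.

```shell
# On a real cluster, produce app.log with:
#   yarn logs -applicationId application_1523366722691_0006 > app.log
# Here we fake it with a heredoc shaped like the aggregated-log excerpt above.
cat > app.log <<'EOF'
Container: container_e03_1523366722691_0006_01_000002 on ubuntu_45454
LogAggregationType: AGGREGATED
LogType:stdout
sample stdout of the map task
End of LogType:stdout
EOF

# Print only the stdout section of the container log
sed -n '/^LogType:stdout/,/^End of LogType:stdout/p' app.log
```

The same sed range pattern works for any other LogType (stderr, syslog) by swapping the marker names.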
04-09-2018 05:01 PM
I managed to fix it myself 🙂 It looks like the reverse DNS lookup does/did not work correctly in the virtual machine I used for testing. Setting the config property hadoop.proxyuser.oozie.hosts to * (for testing only!) fixed the problem (before, it was set to the hostname). I guess this had to do with the IP address 127.0.0.1 being mapped to localhost rather than to my machine's hostname, which is mapped to 127.0.1.1.
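For reference, the fix above would be a core-site.xml change along these lines; this is a sketch of the workaround described in the post, not a recommended production setting (* allows impersonation requests from any host, which is only acceptable in a throwaway test VM):

```xml
<!-- core-site.xml: allow the oozie user to impersonate from any host.
     Testing-only workaround for the 127.0.0.1 / 127.0.1.1 hostname mismatch;
     in production, list the actual Oozie server hostname(s) instead of *. -->
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
```

The underlying cause is the common Debian/Ubuntu /etc/hosts layout, where localhost resolves to 127.0.0.1 but the machine's own hostname resolves to 127.0.1.1, so a host-restricted proxyuser rule keyed on the hostname never matches requests arriving via 127.0.0.1.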