Expert Contributor

I have resolved this issue. Truth be told, regarding HDFS, YARN, or MapReduce, I know that job submission is restricted for certain users by default, but as you probably know, min.user.id and the allowed-user list are there for exactly that case, so the issue was not about the user or the job.
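
For anyone who wants to double-check the same thing: with the LinuxContainerExecutor these restrictions end up in container-executor.cfg, and a quick way to confirm them is something like the lines below. The /etc/hadoop/conf path is only a typical default; on Cloudera Manager managed nodes the file is generated under the agent's process directories.

# Confirm the user restrictions enforced by the LinuxContainerExecutor.
# The config path is an assumption; adjust it to your distribution.
grep -E 'min\.user\.id|banned\.users|allowed\.system\.users' \
    /etc/hadoop/conf/container-executor.cfg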

 

I monitored it many times: when only one container starts, it dies automatically after a few seconds, whereas in the normal state the job launches three or four containers in my environment. So I am sure the issue is that the containers cannot run normally.
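
This is roughly how I watched the containers from the command line; the values in angle brackets are placeholders you get from the previous command or from the ResourceManager UI.

# List running applications, then drill down to attempts and their containers.
yarn application -list -appStates RUNNING
yarn applicationattempt -list <application_id>
yarn container -list <application_attempt_id>

# Once the job has died, pull the aggregated container logs
# (requires yarn.log-aggregation-enable=true).
yarn logs -applicationId <application_id>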

 

But why? As I said, only one container ever started normally, so I could check that container's log, but I found nothing useful; the errors were the same as the ones I showed above. I also noticed that when Sqoop runs normally it creates a directory under the usercache directory, but when the Sqoop job fails it does not. So I guessed that this directory might be the problem, though of course I did not know the exact reason.
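
To be clear about which directory I mean: it is the per-user localization cache under the NodeManager local dirs. A rough way to check it, assuming /yarn/nm is what yarn.nodemanager.local-dirs points to (both paths below may differ in your cluster):

# Look up the configured local dirs, then check for the per-user directory
# that a healthy run creates (and a failed run, in my case, did not).
grep -A1 'yarn.nodemanager.local-dirs' /etc/hadoop/conf/yarn-site.xml
ls -l /yarn/nm/usercache/<submitting_user>/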

 

Then I removed NameNode HA, leaving just one NameNode and one Secondary NameNode as in the default setup, and ran Sqoop again. It failed too, but this time the log was more readable and showed an error along the lines of "NOT INITIALIZE CONTAINER". That made me more confident that the job really could not launch its containers.
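
If you want to dig the same error out yourself, the failing container ID can be grepped from the NodeManager log on the host that tried to run it; the log location below is just the usual packaged-install default, not necessarily yours.

# Search the NodeManager log for the failing container; the path is an
# assumption and differs between distributions.
grep '<container_id>' /var/log/hadoop-yarn/*nodemanager*.log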

 

At last, I stopped the whole cluster, deleted /yarn/* on the DataNode and NameNode hosts, and then started the cluster again. It works fine now.
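
In case it helps someone else, the cleanup was roughly the following, run on every host that has a /yarn directory. It assumes /yarn holds only the NodeManager local and log dirs (yarn.nodemanager.local-dirs / yarn.nodemanager.log-dirs), and it must only be done while the cluster is fully stopped; YARN recreates the layout on restart.

# Only while all HDFS/YARN roles are stopped (e.g. via Cloudera Manager).
# This clears the NodeManager localization and log state; it is rebuilt on startup.
rm -rf /yarn/*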

 

Currently, I still don't know why HDFS or YARN could not launch the containers, but the problem has been resolved.