
Could not locate executable null\bin\winutils.exe


Hi,

I followed the tutorial but I get this error:

ERROR Shell:397 - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

How can I fix it?

Thanks

6 REPLIES

Super Collaborator

Which tutorial were you following? Which step?
If possible, please provide some screenshots.

Master Mentor

@Moustapha MOUSSA SALEY

It seems your Windows machine is missing winutils.exe. Can you try this:

1. Download winutils.exe from http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe.
2. Set your HADOOP_HOME environment variable at the OS level to the full path to the bin folder with winutils (a Python sketch follows below).
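
If changing the Windows settings is inconvenient, the same variable can also be set from the PySpark script itself, as long as it happens before the SparkContext is created. A minimal sketch (C:\hadoop is just a placeholder for wherever you keep winutils):

import os

# Placeholder path: point HADOOP_HOME at the folder that contains bin\winutils.exe
os.environ["HADOOP_HOME"] = r"C:\hadoop"
# Putting the bin folder on PATH as well does not hurt
os.environ["PATH"] = r"C:\hadoop\bin;" + os.environ["PATH"]

# This must run before the SparkContext (and its JVM) is started,
# because the launched JVM inherits this process's environment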

New Contributor

 

I'm not able to download the winutils.exe file from http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe and am getting the error below.

<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>7JRY6HNAVEEYDSGE</RequestId>
<HostId>ko6dy9bR6sbvK2P7g5of5qZkab+HH2JHkdwJyr0p1jyfMNdqnelijUbK6lZ/dfqI+7tK2eAsGg8=</HostId>
</Error>


Hi @Winne

Your question went into a thread that was over three years old. You would have a better chance of receiving a prompt and satisfactory resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.

 

 

Bill Brooks, Community Moderator
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.


Hi, I followed this tutorial:
https://fr.hortonworks.com/tutorial/setting-up-a-spark-development-environment-with-python/
I went through these steps:
1. Sandbox
2. Python
3. PyCharm
4. Download and Save Dataset

But when I execute this code:

from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName('MyFirstStandaloneApp')
sc = SparkContext(conf=conf)

text_file = sc.textFile("./shakespeare.txt")
counts = text_file.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)
print("Number of elements: " + str(counts.count()))
counts.saveAsTextFile("./shakespeareWordCount")


I get this error

(screenshot attached: error-pyspark.jpg)

After that, I downloaded winutils.exe, created a folder C:\winutils\bin, and copied it inside.

Second, I edited the environment variables by creating HADOOP_HOME with this path: C:\winutils\bin (see the picture).

(screenshot attached: edit-variable.jpg)

I re-executed the code and I get the same error... :(
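
A quick check of the path Hadoop actually probes (just a sketch, using the HADOOP_HOME value I set above):

import os

# Hadoop looks for winutils at %HADOOP_HOME%\bin\winutils.exe
home = os.environ.get("HADOOP_HOME")                    # here: C:\winutils\bin
probe = os.path.join(home or "", "bin", "winutils.exe")
print(probe)
print(os.path.exists(probe))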

New Contributor

I got the same problem. It looks like Hadoop appends \bin to the HADOOP_HOME path, so you need to set HADOOP_HOME=C:\winutils instead of C:\winutils\bin.
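
With the folders from the post above, that means (a small sketch, to run before the SparkContext is created):

import os

# winutils.exe sits in C:\winutils\bin, so HADOOP_HOME must be the parent folder
os.environ["HADOOP_HOME"] = r"C:\winutils"              # not C:\winutils\bin
os.environ["PATH"] = r"C:\winutils\bin;" + os.environ["PATH"]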