
set compression library path for mapreduce


New Contributor

Hi all,

I am trying to use my own compression library for MapReduce in Hadoop, and have already set the relevant compression properties (… = true, … = …) in mapred-site.xml.
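For reference, the settings look roughly like this (a sketch only, since the exact property names were lost above; the codec shown is the stock zlib-backed DefaultCodec):

```xml
<!-- mapred-site.xml: sketch of the compression settings (assumed property names) -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <!-- DefaultCodec is Hadoop's zlib-backed codec -->
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>
```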

But I want to use my own zlib (a more efficient implementation).

I have already set LD_LIBRARY_PATH to point at my_libz, and Hadoop does see it via "hadoop checknative".

However, MapReduce does not seem to follow LD_LIBRARY_PATH; it still uses the Linux system's zlib. Of course, if I "ln -s -f" my library over the system one, MapReduce has to use my_libz. This is, however, not what I intend, as I want only Hadoop to use it, not other applications. If I remove the system library, the following errors are generated:

	Diagnostics: Exception from container-launch.
	Container id: container_1547233003817_0045_02_000001
	Exit code: 127
	Stack trace: ExitCodeException exitCode=127:
	   at org.apache.hadoop.util.Shell.runCommand(
	   at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
	   at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(
	   at java.util.concurrent.ThreadPoolExecutor.runWorker(
	   at java.util.concurrent.ThreadPoolExecutor$

	Container exited with a non-zero exit code 127. Failing this attempt. Failing the application.
	19/01/11 11:49:59 INFO mapreduce.Job: Counters: 0

I can, of course, recompile Hadoop so that it points to my library directly, but again this is not a solution: there are too many different Linux environments in my Hadoop cluster, and building and distributing a binary for each would be too troublesome.

So a preferable solution would be to change some environment variable of the shell and/or Hadoop through configuration files, without recompilation.
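For example, I imagine something along these lines in mapred-site.xml might push the library path only to MapReduce containers (a sketch I have not verified, and /opt/my_libz is a placeholder for my library's directory):

```xml
<!-- mapred-site.xml: sketch — prepend my library dir to LD_LIBRARY_PATH
     for map tasks, reduce tasks, and the MR ApplicationMaster only.
     /opt/my_libz is a placeholder path. -->
<property>
  <name>mapreduce.map.env</name>
  <value>LD_LIBRARY_PATH=/opt/my_libz:$LD_LIBRARY_PATH</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>LD_LIBRARY_PATH=/opt/my_libz:$LD_LIBRARY_PATH</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>LD_LIBRARY_PATH=/opt/my_libz:$LD_LIBRARY_PATH</value>
</property>
```

Would something like this work, or is there a better-suited property for this?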

Any ideas or suggestions to accomplish this goal?

Many thanks in advance for any help.
