<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Zeppelin integrated with CDH 5.8.3 is failing with class not found exceptions - Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Zeppelin-integrated-with-CDH-5-8-3-is-failing-with-class-not/m-p/52556#M57870</link>
    <description>&lt;P&gt;Hi Cloudera,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have integrated Zeppelin with CDH 5.8.3 and tried running a few Pig examples from the Zeppelin tutorial, and ran into a few issues.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Pig code used:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;%pig&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;bankText = load 'hdfs://nameservice1/user/rbodolla/emp.txt' using PigStorage(';');&lt;BR /&gt;bank = foreach bankText generate $0 as age, $1 as job, $2 as marital, $3 as education, $5 as balance;&lt;BR /&gt;bank = filter bank by age != '"age"';&lt;BR /&gt;bank = foreach bank generate (int)age, job, marital, (int) balance;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;%pig.query&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;bank_data = filter bank by age &amp;lt; ${maxAge=40};&lt;BR /&gt;b = group bank_data by age;&lt;BR /&gt;foreach b generate group, COUNT($1);&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Issue 1:&lt;/STRONG&gt; Zeppelin does not support KMS HA. After removing KMS HA, the issue was resolved.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Issue 2:&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;INFO [2017-03-23 07:21:35,013] ({pool-2-thread-5} JobControlCompiler.java[getJob]:579) - This job cannot be converted run in-process&lt;BR /&gt;INFO [2017-03-23 07:21:35,019] ({pool-2-thread-5} Configuration.java[warnOnceIfDeprecated]:1049) - mapred.submit.replication is deprecated. 
Instead, use mapreduce.client.submit.file.replication&lt;BR /&gt;&lt;U&gt;ERROR [2017-03-23 07:21:35,071] ({pool-2-thread-5} Job.java[run]:188) - Job failed&lt;/U&gt;&lt;BR /&gt;&lt;U&gt;java.lang.NoSuchMethodError:&lt;/U&gt; org.codehaus.jackson.map.ObjectMapper.writerWithDefaultPrettyPrinter()Lorg/codehaus/jackson/map/ObjectWriter;&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.writeJson(KMSClientProvider.java:211)&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:448)&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:439)&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:712)&lt;BR /&gt;at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)&lt;BR /&gt;at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.shipToHDFS(JobControlCompiler.java:1805)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.putJarOnClassPathThroughDistributedCache(JobControlCompiler.java:1686)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:648)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:323)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:196)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:308)&lt;BR /&gt;at org.apache.pig.PigServer.launchPlan(PigServer.java:1474)&lt;BR /&gt;at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1459)&lt;BR /&gt;at org.apache.pig.PigServer.storeEx(PigServer.java:1118)&lt;BR /&gt;at org.apache.pig.PigServer.store(PigServer.java:1081)&lt;BR /&gt;at org.apache.pig.PigServer.openIterator(PigServer.java:994)&lt;BR /&gt;at org.apache.zeppelin.pig.PigQueryInterpreter.interpret(PigQueryInterpreter.java:104)&lt;BR /&gt;at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)&lt;BR /&gt;at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:489)&lt;BR /&gt;at org.apache.zeppelin.scheduler.Job.run(Job.java:175)&lt;BR /&gt;at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;BR /&gt;at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)&lt;BR /&gt;at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:745)&lt;BR /&gt;INFO [2017-03-23 07:21:35,079] ({pool-2-thread-5} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1490253694068 finished by scheduler org.apache.zeppelin.pig.PigInterpreter363704768&lt;/P&gt;</description>
    <pubDate>Fri, 16 Sep 2022 11:19:07 GMT</pubDate>
    <dc:creator>rbodolla</dc:creator>
    <dc:date>2022-09-16T11:19:07Z</dc:date>
    <item>
      <title>Zeppelin integrated with CDH 5.8.3 is failing with class not found exceptions</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Zeppelin-integrated-with-CDH-5-8-3-is-failing-with-class-not/m-p/52556#M57870</link>
      <description>&lt;P&gt;Hi Cloudera,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have integrated Zeppelin with CDH 5.8.3 and tried running a few Pig examples from the Zeppelin tutorial, and ran into a few issues.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Pig code used:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;%pig&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;bankText = load 'hdfs://nameservice1/user/rbodolla/emp.txt' using PigStorage(';');&lt;BR /&gt;bank = foreach bankText generate $0 as age, $1 as job, $2 as marital, $3 as education, $5 as balance;&lt;BR /&gt;bank = filter bank by age != '"age"';&lt;BR /&gt;bank = foreach bank generate (int)age, job, marital, (int) balance;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;%pig.query&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;bank_data = filter bank by age &amp;lt; ${maxAge=40};&lt;BR /&gt;b = group bank_data by age;&lt;BR /&gt;foreach b generate group, COUNT($1);&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Issue 1:&lt;/STRONG&gt; Zeppelin does not support KMS HA. After removing KMS HA, the issue was resolved.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Issue 2:&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;INFO [2017-03-23 07:21:35,013] ({pool-2-thread-5} JobControlCompiler.java[getJob]:579) - This job cannot be converted run in-process&lt;BR /&gt;INFO [2017-03-23 07:21:35,019] ({pool-2-thread-5} Configuration.java[warnOnceIfDeprecated]:1049) - mapred.submit.replication is deprecated. 
Instead, use mapreduce.client.submit.file.replication&lt;BR /&gt;&lt;U&gt;ERROR [2017-03-23 07:21:35,071] ({pool-2-thread-5} Job.java[run]:188) - Job failed&lt;/U&gt;&lt;BR /&gt;&lt;U&gt;java.lang.NoSuchMethodError:&lt;/U&gt; org.codehaus.jackson.map.ObjectMapper.writerWithDefaultPrettyPrinter()Lorg/codehaus/jackson/map/ObjectWriter;&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.writeJson(KMSClientProvider.java:211)&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:448)&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:439)&lt;BR /&gt;at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:712)&lt;BR /&gt;at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1358)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1457)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1442)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:400)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)&lt;BR /&gt;at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.shipToHDFS(JobControlCompiler.java:1805)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.putJarOnClassPathThroughDistributedCache(JobControlCompiler.java:1686)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:648)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:323)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:196)&lt;BR /&gt;at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:308)&lt;BR /&gt;at org.apache.pig.PigServer.launchPlan(PigServer.java:1474)&lt;BR /&gt;at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1459)&lt;BR /&gt;at org.apache.pig.PigServer.storeEx(PigServer.java:1118)&lt;BR /&gt;at org.apache.pig.PigServer.store(PigServer.java:1081)&lt;BR /&gt;at org.apache.pig.PigServer.openIterator(PigServer.java:994)&lt;BR /&gt;at org.apache.zeppelin.pig.PigQueryInterpreter.interpret(PigQueryInterpreter.java:104)&lt;BR /&gt;at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)&lt;BR /&gt;at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:489)&lt;BR /&gt;at org.apache.zeppelin.scheduler.Job.run(Job.java:175)&lt;BR /&gt;at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)&lt;BR /&gt;at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:266)&lt;BR /&gt;at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)&lt;BR /&gt;at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:745)&lt;BR /&gt;INFO [2017-03-23 07:21:35,079] ({pool-2-thread-5} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1490253694068 finished by scheduler org.apache.zeppelin.pig.PigInterpreter363704768&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 11:19:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Zeppelin-integrated-with-CDH-5-8-3-is-failing-with-class-not/m-p/52556#M57870</guid>
      <dc:creator>rbodolla</dc:creator>
      <dc:date>2022-09-16T11:19:07Z</dc:date>
    </item>
    <item>
      <title>Re: Zeppelin integrated with CDH 5.8.3 is failing with class not found exceptions</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Zeppelin-integrated-with-CDH-5-8-3-is-failing-with-class-not/m-p/52571#M57871</link>
      <description>&lt;P&gt;&lt;SPAN&gt;This has been resolved after adding the latest&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;jackson-core-2.5.3.jar,&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;jackson-core-asl-1.9.13.jar, and&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;jackson-mapper-asl-1.9.13.jar to the Pig interpreter path. By default, Zeppelin does not copy these newer jars.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Mar 2017 11:28:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Zeppelin-integrated-with-CDH-5-8-3-is-failing-with-class-not/m-p/52571#M57871</guid>
      <dc:creator>rbodolla</dc:creator>
      <dc:date>2017-03-23T11:28:43Z</dc:date>
    </item>
  </channel>
</rss>