Member since: 08-16-2018
Posts: 16
Kudos Received: 0
Solutions: 0
09-14-2018
08:04 PM
@Jay Kumar SenSharma
1. I think the correct SSH port is being used while registering the hosts, as I am able to ssh to my host from the sandbox using port 22.
2. On my sandbox:
ls -l ~/.ssh/
-rw-r--r-- 1 root root  410 Sep 13 22:55 authorized_keys
-rw------- 1 root root 1675 Sep 13 22:48 id_rsa
-rw-r--r-- 1 root root  410 Sep 13 22:48 id_rsa.pub
-rw-r--r-- 1 root root  798 Sep 13 22:53 known_hosts
ls -ld ~/.ssh/
drwx------ 1 root root 4096 Sep 13 22:55 /root/.ssh/
3. On the sandbox:
hostname -f
sandbox.hortonworks.com
cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 sandbox.hortonworks.com
10.99.162.xxx sahil-virtual-machine
10.99.162.xxx sandbox.hortonworks.com
On the host:
hostname -f
sahil-virtual-machine
cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 sahil-virtual-machine
10.99.162.xxx sahil-virtual-machine
10.99.162.xxx sandbox.hortonworks.com
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Also, I am able to ssh to my Ubuntu machine using its hostname (sahil-virtual-machine); however, I am unable to ssh to the sandbox using its hostname (sandbox.hortonworks.com). This is just for learning purposes, as I am new to all this. Your help is very much appreciated.
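One thing worth double-checking from the listing above is permission modes: sshd's StrictModes can reject keys when ~/.ssh or authorized_keys is too permissive on the *target* host. A hedged sketch of a permission check — the modes and function name here are my own, based on a typical sshd setup, not taken from this thread:

```shell
#!/usr/bin/env bash
# Hedged sketch: check the permission modes sshd's StrictModes typically
# expects before an Ambari bootstrap attempt. Run against the .ssh directory
# on the host being registered.
check_ssh_perms() {
  local dir="$1" rc=0
  # The .ssh directory itself should not be group/other accessible.
  if [ "$(stat -c '%a' "$dir")" != "700" ]; then
    echo "WARN: $dir should be mode 700"; rc=1
  fi
  if [ -f "$dir/authorized_keys" ]; then
    local mode
    mode=$(stat -c '%a' "$dir/authorized_keys")
    case "$mode" in
      600|644) : ;;  # both commonly accepted by sshd
      *) echo "WARN: authorized_keys is $mode; use 600"; rc=1 ;;
    esac
  else
    echo "WARN: no authorized_keys in $dir"; rc=1
  fi
  return $rc
}
```

Usage would be `check_ssh_perms /root/.ssh` on the Ubuntu host before retrying the Ambari registration.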
09-14-2018
01:40 AM
I have a running Hortonworks sandbox and I am trying to add a new host (Ubuntu) to it. I have assigned IP addresses to each of the machines, and I am able to ssh into both the sandbox and the Ubuntu machine remotely. I have also added the public key of the sandbox to my host by following https://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/, so I am able to log in to my host (Ubuntu) from the sandbox without a password. However, when I try to add the host using the Ambari UI, I get an error:
==========================
Creating target directory...
==========================
Command start time 2018-09-13 21:45:39
Permission denied (publickey,password).
SSH command execution finished
host=10.99.162.xxx, exitcode=255
Command end time 2018-09-13 21:45:40
ERROR: Bootstrap of host 10.99.162.xxx fails because previous action finished with non-zero exit code (255)
ERROR MESSAGE: Permission denied (publickey,password).
STDOUT:
Permission denied (publickey,password).
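The exit code 255 reported above is ssh's own failure status (authentication or connection problems), distinct from a remote command failing with its own code. A small hedged helper to keep the two apart while debugging — the function name is mine, and the reproduction command in the comment reuses the thread's redacted IP placeholder:

```shell
# Hedged sketch: ssh exits 255 for its own errors (auth, connection);
# any other non-zero status is the remote command's exit code.
explain_ssh_exit() {
  case "$1" in
    0)   echo "success" ;;
    255) echo "ssh failure: authentication or connection problem" ;;
    *)   echo "remote command exited with status $1" ;;
  esac
}
# Reproduce Ambari's non-interactive login attempt (fill in the real IP):
#   ssh -o BatchMode=yes -i /root/.ssh/id_rsa root@10.99.162.xxx true
#   explain_ssh_exit $?
```

If the BatchMode attempt fails as root, check that the target allows root login (PermitRootLogin in sshd_config on Ubuntu) and that the sandbox's public key is in the target's /root/.ssh/authorized_keys.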
Labels:
Apache Ambari
09-04-2018
06:55 PM
@Aditya Sirna I do not wish to set up the HDP cluster; I just wanted to deploy a standalone sandbox on a remote machine. I was able to access all the services through the Ambari UI a couple of days ago; however, I now get this screen whenever I log in to my sandbox. I remember resetting the admin password and restarting the Ambari services before this issue appeared.
09-04-2018
05:41 PM
I have set up the sandbox on a machine and I am able to ssh into it using a jump host. However, when I try to log in through the Ambari UI, I get the following screen with no access to the dashboard or any of the features. I am also unable to log in as any user other than admin (such as maria_dev), although ssh and everything else works for every user. Also, ssh works for all users only when I specify the port with -p 2222; it does not work without that port.
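The port-2222 behaviour is expected: the HDP sandbox runs inside a container, and host port 2222 is forwarded to the container's port 22, so plain port-22 ssh reaches the host VM instead. A hedged convenience entry for ~/.ssh/config so the port need not be typed each time (the alias is mine, and the IP reuses the thread's redacted placeholder):

```
Host sandbox
    HostName 10.99.162.xxx
    Port 2222
    User maria_dev
```

With this in place, `ssh sandbox` picks up the port and user automatically.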
Labels:
Apache Ambari
08-28-2018
01:38 PM
Error from the Oozie web console:
2018-08-23 18:49:03,792 WARN ActionStartXCommand:523 - SERVER[sandbox-hdp.hortonworks.com] USER[maria_dev] GROUP[-] TOKEN[] APP[old-movies] JOB[0000000-180823180623663-oozie-oozi-W] ACTION[0000000-180823180623663-oozie-oozi-W@sqoop-node] Error starting action [sqoop-node]. ErrorType [TRANSIENT], ErrorCode [JA006], Message [JA006: Call From sandbox-hdp.hortonworks.com/172.18.0.2 to sandbox-hdp.hortonworks.com:8050 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused]
org.apache.oozie.action.ActionExecutorException: JA006: Call From sandbox-hdp.hortonworks.com/172.18.0.2 to sandbox-hdp.hortonworks.com:8050 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:457)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:437)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1258)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1440)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:234)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:65)
at org.apache.oozie.command.XCommand.call(XCommand.java:287)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:331)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:260)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Call From sandbox-hdp.hortonworks.com/172.18.0.2 to sandbox-hdp.hortonworks.com:8050 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor74.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy33.getDelegationToken(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getDelegationToken(ApplicationClientProtocolPBClientImpl.java:310)
at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:290)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:202)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:184)
at com.sun.proxy.$Proxy34.getDelegationToken(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getRMDelegationToken(YarnClientImpl.java:550)
at org.apache.hadoop.mapred.ResourceMgrDelegate.getDelegationToken(ResourceMgrDelegate.java:176)
at org.apache.hadoop.mapred.YARNRunner.getDelegationToken(YARNRunner.java:232)
at org.apache.hadoop.mapreduce.Cluster.getDelegationToken(Cluster.java:401)
at org.apache.hadoop.mapred.JobClient$16.run(JobClient.java:1240)
at org.apache.hadoop.mapred.JobClient$16.run(JobClient.java:1237)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.mapred.JobClient.getDelegationToken(JobClient.java:1236)
at org.apache.oozie.service.HadoopAccessorService.addRMDelegationToken(HadoopAccessorService.java:525)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1217)
... 11 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1620)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 34 more
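"Connection refused" at sandbox-hdp.hortonworks.com:8050 usually means nothing is listening on the ResourceManager port, so a quick first step is to confirm the port from inside the sandbox before digging into Oozie. A hedged sketch using bash's /dev/tcp pseudo-device — the function name is mine, and the host/port in the comment come from the error above:

```shell
# Hedged sketch: report whether a TCP port accepts connections, using
# bash's /dev/tcp redirection (no nc/telnet required).
check_port() {
  local host="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
# e.g. check_port sandbox-hdp.hortonworks.com 8050
```

If the port reports closed, check in Ambari that the YARN ResourceManager is actually running and that the workflow's jobTracker address matches yarn.resourcemanager.address, then retry the workflow.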
08-28-2018
03:03 AM
@Jay Kumar SenSharma I am working on deploying the sandbox in a lab, and I have a small range of IP addresses available. What should I do in this case?
08-28-2018
01:57 AM
I have restarted the VM. PFA the output:
ssh maria_dev@10.99.162.xx
ssh: connect to host 10.99.162.xx port 22: No route to host
cat /etc/sysconfig/network-scripts/ifcfg-ens33
# Generated by dracut initrd
NAME="ens33"
DEVICE="ens33"
ONBOOT=yes
NETBOOT=yes
UUID="XXXXXXXXXX"
IPV6INIT=yes
BOOTPROTO=static
TYPE=Ethernet
IPADDR=10.99.162.XX
NETMASK=255.255.254.0
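The file above has no GATEWAY or DNS entries, which can leave the machine unreachable beyond its own subnet (a host firewall rejecting port 22 is another common cause of "No route to host"). A fuller ifcfg sketch — the gateway and DNS values here are illustrative assumptions, to be adjusted to the lab's actual range:

```
NAME="ens33"
DEVICE="ens33"
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.99.162.XX
NETMASK=255.255.254.0
GATEWAY=10.99.162.1
DNS1=8.8.8.8
```

After editing, restart networking with `systemctl restart network` and verify with `ip addr show ens33`.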
08-28-2018
01:42 AM
Hi, I followed those steps; however, I am unable to ssh into it from a remote machine, and ping doesn't work either. FYI, I have an ifcfg-ens33 file instead of ifcfg-enp0s3.
08-28-2018
01:06 AM
I have deployed the sandbox on a remote machine, and I need to assign a static IP address to it for remote access. How can I do that?
08-27-2018
09:15 PM
<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><name>dfs.journalnode.rpc-address</name><value>0.0.0.0:8485</value><source>hdfs-default.xml</source></property>
<property><name>yarn.ipc.rpc.class</name><value>org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.maxtaskfailures.per.tracker</name><value>3</value><source>mapred-default.xml</source></property>
<property><name>yarn.client.max-cached-nodemanagers-proxies</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.speculative.retry-after-speculate</name><value>15000</value><source>mapred-default.xml</source></property>
<property><name>ha.health-monitor.connect-retry-interval.ms</name><value>1000</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.work-preserving-recovery.enabled</name><value>true</value><source>programatically</source></property>
<property><name>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</name><value>0.1</value><source>programatically</source></property>
<property><name>dfs.client.mmap.cache.size</name><value>256</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.read-lock-reporting-threshold-ms</name><value>5000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.markreset.buffer.percent</name><value>0.0</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.data.dir</name><value>/hadoop/hdfs/data</value><source>programatically</source></property>
<property><name>mapreduce.jobhistory.max-age-ms</name><value>604800000</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.lazypersist.file.scrub.interval.sec</name><value>300</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.ubertask.enable</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.delegation.token.renew-interval</name><value>86400000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.log-aggregation.compression-type</name><value>gz</value><source>programatically</source></property>
<property><name>dfs.namenode.replication.considerLoad</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.complete.cancel.delegation.tokens</name><value>false</value><source>programatically</source></property>
<property><name>mapreduce.jobhistory.datestring.cache.size</name><value>200000</value><source>mapred-default.xml</source></property>
<property><name>hadoop.security.kms.client.authentication.retry-count</name><value>1</value><source>core-default.xml</source></property>
<property><name>hadoop.ssl.enabled.protocols</name><value>TLSv1,SSLv2Hello,TLSv1.1,TLSv1.2</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.retrycache.heap.percent</name><value>0.03f</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.top.window.num.buckets</name><value>10</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.scheduler.address</name><value>sandbox-hdp.hortonworks.com:8030</value><source>programatically</source></property>
<property><name>hadoop.http.cross-origin.enabled</name><value>false</value><source>core-default.xml</source></property>
<property><name>ssl.client.keystore.password</name><value>bigdata</value><source>programatically</source></property>
<property><name>dfs.client.file-block-storage-locations.num-threads</name><value>10</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.balance.bandwidthPerSec</name><value>6250000</value><source>programatically</source></property>
<property><name>yarn.resourcemanager.proxy-user-privileges.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.decommission.max.concurrent.tracked.nodes</name><value>100</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.fetch.retry.enabled</name><value>1</value><source>programatically</source></property>
<property><name>io.mapfile.bloom.error.rate</name><value>0.005</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.resourcemanager.minimum.version</name><value>NONE</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.nodemanagers.heartbeat-interval-ms</name><value>1000</value><source>yarn-default.xml</source></property>
<property><name>fs.azure.user.agent.prefix</name><value>User-Agent: APN/1.0 Hortonworks/1.0 HDP/</value><source>programatically</source></property>
<property><name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name><value>${dfs.web.authentication.kerberos.principal}</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.http.cross-origin.allowed-headers</name><value>X-Requested-With,Content-Type,Accept,Origin</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.delete.debug-delay-sec</name><value>0</value><source>programatically</source></property>
<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value><source>programatically</source></property>
<property><name>dfs.namenode.write-lock-reporting-threshold-ms</name><value>1000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.read.shortcircuit.streams.cache.size</name><value>4096</value><source>programatically</source></property>
<property><name>dfs.image.transfer.bandwidthPerSec</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>yarn.scheduler.maximum-allocation-vcores</name><value>8</value><source>programatically</source></property>
<property><name>yarn.resourcemanager.webapp.rest-csrf.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.address</name><value>sandbox-hdp.hortonworks.com:10200</value><source>programatically</source></property>
<property><name>yarn.webapp.xfs-filter.enabled</name><value>true</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name><value>1000</value><source>programatically</source></property>
<property><name>mapreduce.job.hdfs-servers</name><value>${fs.defaultFS}</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.task.profile.reduce.params</name><value>${mapreduce.task.profile.params}</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.fs-limits.min-block-size</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>ftp.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
<property><name>dfs.client.use.legacy.blockreader.local</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.http.cross-origin.allowed-methods</name><value>GET,POST,HEAD</value><source>core-default.xml</source></property>
<property><name>dfs.short.circuit.shared.memory.watcher.interrupt.check.ms</name><value>60000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.directoryscan.threads</name><value>1</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3a.buffer.dir</name><value>${hadoop.tmp.dir}/s3a</value><source>core-default.xml</source></property>
<property><name>yarn.client.application-client-protocol.poll-interval-ms</name><value>200</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.leveldb-timeline-store.path</name><value>/hadoop/yarn/timeline</value><source>programatically</source></property>
<property><name>mapreduce.job.split.metainfo.maxsize</name><value>10000000</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.edits.noeditlogchannelflush</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3a.fast.upload.buffer</name><value>disk</value><source>programatically</source></property>
<property><name>s3native.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
<property><name>yarn.client.failover-retries-on-socket-timeouts</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>hadoop.security.sensitive-config-keys</name><value>
secret$
password$
ssl.keystore.pass$
fs.s3.*[Ss]ecret.?[Kk]ey
fs.s3a.*.server-side-encryption.key
fs.azure.account.key.*
credential$
oauth.*token$
hadoop.security.sensitive-config-keys
</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.startup.delay.block.deletion.sec</name><value>3600</value><source>programatically</source></property>
<property><name>dfs.webhdfs.user.provider.user.pattern</name><value>^[A-Za-z_][A-Za-z0-9._-]*[$]?$</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.webapp.rest-csrf.custom-header</name><value>X-XSRF-Header</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.tasktracker.tasks.sleeptimebeforesigkill</name><value>5000</value><source>mapred-default.xml</source></property>
<property><name>yarn.timeline-service.client.retry-interval-ms</name><value>1000</value><source>programatically</source></property>
<property><name>dfs.encrypt.data.transfer.cipher.key.bitlength</name><value>128</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.entity-group-fs-store.with-user-dir</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>hadoop.http.authentication.type</name><value>simple</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.path.based.cache.refresh.interval.ms</name><value>30000</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name><value>/cgroup</value><source>programatically</source></property>
<property><name>mapreduce.local.clientfactory.class.name</name><value>org.apache.hadoop.mapred.LocalClientFactory</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.max.full.block.report.leases</name><value>6</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.cache.revocation.timeout.ms</name><value>900000</value><source>hdfs-default.xml</source></property>
<property><name>ipc.client.connection.maxidletime</name><value>30000</value><source>programatically</source></property>
<property><name>ipc.server.max.connections</name><value>0</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobhistory.recovery.store.leveldb.path</name><value>/hadoop/mapreduce/jhs</value><source>programatically</source></property>
<property><name>dfs.namenode.safemode.threshold-pct</name><value>1</value><source>programatically</source></property>
<property><name>fs.s3a.multipart.purge.age</name><value>86400</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.num.checkpoints.retained</name><value>2</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.webapp.xfs-filter.xframe-options</name><value>SAMEORIGIN</value><source>mapred-default.xml</source></property>
<property><name>yarn.timeline-service.client.best-effort</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>fs.azure.authorization</name><value>false</value><source>core-default.xml</source></property>
<property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value><source>programatically</source></property>
<property><name>mapreduce.job.ubertask.maxmaps</name><value>9</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.stale.datanode.interval</name><value>30000</value><source>programatically</source></property>
<property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>90</value><source>programatically</source></property>
<property><name>mapreduce.tasktracker.http.address</name><value>0.0.0.0:50060</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.ifile.readahead.bytes</name><value>4194304</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobhistory.webapp.rest-csrf.enabled</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>yarn.sharedcache.uploader.server.thread-count</name><value>50</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobhistory.admin.address</name><value>0.0.0.0:10033</value><source>mapred-default.xml</source></property>
<property><name>s3.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
<property><name>dfs.block.access.token.lifetime</name><value>600</value><source>hdfs-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.resource.cpu-vcores</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.input.lineinputformat.linespermap</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.num.extra.edits.retained</name><value>1000000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.input.buffer.percent</name><value>0.7</value><source>programatically</source></property>
<property><name>hadoop.http.staticuser.user</name><value>dr.who</value><source>core-default.xml</source></property>
<property><name>mapreduce.reduce.maxattempts</name><value>4</value><source>mapred-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.search.filter.user</name><value>(&(objectClass=user)(sAMAccountName={0}))</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobhistory.admin.acl</name><value>*</value><source>mapred-default.xml</source></property>
<property><name>hadoop.workaround.non.threadsafe.getpwuid</name><value>false</value><source>core-default.xml</source></property>
<property><name>dfs.client.context</name><value>default</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.map.maxattempts</name><value>4</value><source>mapred-default.xml</source></property>
<property><name>yarn.timeline-service.entity-group-fs-store.active-dir</name><value>/ats/active/</value><source>programatically</source></property>
<property><name>yarn.resourcemanager.zk-retry-interval-ms</name><value>1000</value><source>programatically</source></property>
<property><name>mapreduce.jobhistory.cleaner.interval-ms</name><value>86400000</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.drop.cache.behind.reads</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>dfs.permissions.superusergroup</name><value>hdfs</value><source>programatically</source></property>
<property><name>yarn.application.classpath</name><value>$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*</value><source>programatically</source></property>
<property><name>mapreduce.jobhistory.bind-host</name><value>0.0.0.0</value><source>programatically</source></property>
<property><name>fs.s3n.block.size</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>hadoop.registry.system.acls</name><value>sasl:yarn@, sasl:mapred@, sasl:hdfs@</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.kill-escape.user</name><value>hive</value><source>programatically</source></property>
<property><name>dfs.namenode.list.cache.pools.num.responses</name><value>100</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.slow.io.warning.threshold.ms</name><value>300</value><source>hdfs-default.xml</source></property>
<property><name>yarn.sharedcache.store.in-memory.check-period-mins</name><value>720</value><source>yarn-default.xml</source></property>
<property><name>fs.s3a.multiobjectdelete.enable</name><value>true</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value><source>programatically</source></property>
<property><name>dfs.namenode.fs-limits.max-blocks-per-file</name><value>1048576</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value><source>programatically</source></property>
<property><name>hadoop.security.authentication</name><value>simple</value><source>programatically</source></property>
<property><name>mapreduce.reduce.cpu.vcores</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>net.topology.node.switch.mapping.impl</name><value>org.apache.hadoop.net.ScriptBasedMapping</value><source>core-default.xml</source></property>
<property><name>fs.s3.sleepTimeSeconds</name><value>10</value><source>core-default.xml</source></property>
<property><name>dfs.datanode.peer.stats.enabled</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.ttl-ms</name><value>2678400000</value><source>programatically</source></property>
<property><name>yarn.sharedcache.root-dir</name><value>/sharedcache</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.keytab</name><value>/etc/krb5.keytab</value><source>yarn-default.xml</source></property>
<property><name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name><value>600000</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobtracker.heartbeats.in.second</name><value>100</value><source>mapred-default.xml</source></property>
<property><name>yarn.node-labels.fs-store.root-dir</name><value>/system/yarn/node-labels</value><source>programatically</source></property>
<property><name>hadoop.security.group.mapping.ldap.posix.attr.gid.name</name><value>gidNumber</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts</name><value>3</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name><value>hadoop-yarn</value><source>programatically</source></property>
<property><name>yarn.resourcemanager.delegation-token.max-conf-size-bytes</name><value>12800</value><source>yarn-default.xml</source></property>
<property><name>s3.bytes-per-checksum</name><value>512</value><source>core-default.xml</source></property>
<property><name>hadoop.ssl.require.client.cert</name><value>false</value><source>core-default.xml</source></property>
<property><name>dfs.journalnode.http-address</name><value>0.0.0.0:8480</value><source>programatically</source></property>
<property><name>mapreduce.output.fileoutputformat.compress</name><value>false</value><source>programatically</source></property>
<property><name>fs.default.name</name><value>hdfs://sandbox-hdp.hortonworks.com:8020</value></property>
<property><name>dfs.ha.automatic-failover.enabled</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.metrics.logger.period.seconds</name><value>600</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name><value>false</value><source>programatically</source></property>
<property><name>mapreduce.shuffle.max.threads</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>mapred.job.tracker</name><value>http://sandbox-hdp.hortonworks.com:8050</value><source>programatically</source></property>
<property><name>mapreduce.jobhistory.webapp.rest-csrf.custom-header</name><value>X-XSRF-Header</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.invalidate.work.pct.per.iteration</name><value>0.32f</value><source>hdfs-default.xml</source></property>
<property><name>s3native.client-write-packet-size</name><value>65536</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.max-lock-hold-to-release-lease-ms</name><value>25</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.block.write.replace-datanode-on-failure.policy</name><value>DEFAULT</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.client.submit.file.replication</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.job.committer.commit-window</name><value>10000</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.audit.log.async</name><value>true</value><source>programatically</source></property>
<property><name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name><value>250</value><source>yarn-default.xml</source></property>
<property><name>ssl.client.truststore.password</name><value>bigdata</value><source>programatically</source></property>
<property><name>yarn.nodemanager.env-whitelist</name><value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.acls.enabled</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.secondary.http-address</name><value>sandbox-hdp.hortonworks.com:50090</value><source>programatically</source></property>
<property><name>mapreduce.map.speculative</name><value>false</value><source>programatically</source></property>
<property><name>mapreduce.job.speculative.slowtaskthreshold</name><value>1.0</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.linux-container-executor.cgroups.mount</name><value>false</value><source>programatically</source></property>
<property><name>mapreduce.tasktracker.http.threads</name><value>40</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.jobtracker.webinterface.trusted</name><value>false</value><source>programatically</source></property>
<property><name>mapreduce.jobhistory.http.policy</name><value>HTTP_ONLY</value><source>programatically</source></property>
<property><name>fs.s3a.paging.maximum</name><value>5000</value><source>core-default.xml</source></property>
<property><name>hadoop.kerberos.min.seconds.before.relogin</name><value>60</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.nodemanager-connect-retries</name><value>10</value><source>yarn-default.xml</source></property>
<property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value><source>core-default.xml</source></property>
<property><name>io.native.lib.available</name><value>true</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.heartbeat.recheck-interval</name><value>300000</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.done-dir</name><value>/mr-history/done</value><source>programatically</source></property>
<property><name>hadoop.registry.zk.retry.interval.ms</name><value>1000</value><source>core-default.xml</source></property>
<property><name>mapreduce.job.reducer.unconditional-preempt.delay.sec</name><value>300</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.avoid.write.stale.datanode</name><value>true</value><source>programatically</source></property>
<property><name>dfs.namenode.checkpoint.txns</name><value>1000000</value><source>programatically</source></property>
<property><name>hadoop.ssl.hostname.verifier</name><value>DEFAULT</value><source>core-default.xml</source></property>
<property><name>mapreduce.task.timeout</name><value>300000</value><source>programatically</source></property>
<property><name>yarn.nodemanager.disk-health-checker.interval-ms</name><value>120000</value><source>yarn-default.xml</source></property>
<property><name>adl.feature.ownerandgroup.enableupn</name><value>false</value><source>core-default.xml</source></property>
<property><name>dfs.journalnode.https-address</name><value>0.0.0.0:8481</value><source>programatically</source></property>
<property><name>hadoop.security.groups.cache.secs</name><value>300</value><source>core-default.xml</source></property>
<property><name>mapreduce.input.fileinputformat.split.minsize</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.sync.behind.writes</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.fail-fast</name><value>${yarn.fail-fast}</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.full.block.report.lease.length.ms</name><value>300000</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.proxyuser.hue.groups</name><value>*</value><source>programatically</source></property>
<property><name>ipc.server.tcpnodelay</name><value>true</value><source>programatically</source></property>
<property><name>mapreduce.shuffle.port</name><value>13562</value><source>programatically</source></property>
<property><name>hadoop.rpc.protection</name><value>authentication</value><source>core-default.xml</source></property>
<property><name>dfs.client.https.keystore.resource</name><value>ssl-client.xml</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.list.encryption.zones.num.responses</name><value>100</value><source>hdfs-default.xml</source></property>
<property><name>yarn.client.failover-proxy-provider</name><value>org.apache.hadoop.yarn.client.RequestHedgingRMFailoverProxyProvider</value><source>programatically</source></property>
<property><name>yarn.timeline-service.recovery.enabled</name><value>true</value><source>programatically</source></property>
<property><name>mapreduce.jobtracker.retiredjobs.cache.size</name><value>1000</value><source>mapred-default.xml</source></property>
<property><name>dfs.ha.tail-edits.period</name><value>60</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.drop.cache.behind.writes</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>fs.s3.maxRetries</name><value>4</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobtracker.address</name><value>http://sandbox-hdp.hortonworks.com:8050</value><source>programatically</source></property>
<property><name>hadoop.http.authentication.kerberos.principal</name><value>HTTP/_HOST@LOCALHOST</value><source>core-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.posix.attr.uid.name</name><value>uidNumber</value><source>core-default.xml</source></property>
<property><name>nfs.server.port</name><value>2049</value><source>hdfs-default.xml</source></property>
<property><name>yarn.resourcemanager.webapp.address</name><value>sandbox-hdp.hortonworks.com:8088</value><source>programatically</source></property>
<property><name>mapreduce.task.profile.reduces</name><value>0-2</value><source>mapred-default.xml</source></property>
<property><name>yarn.timeline-service.client.max-retries</name><value>30</value><source>programatically</source></property>
<property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value><source>programatically</source></property>
<property><name>ssl.client.truststore.type</name><value>jks</value><source>programatically</source></property>
<property><name>nfs.dump.dir</name><value>/tmp/.hdfs-nfs</value><source>hdfs-default.xml</source></property>
<property><name>mapred.job.name</name><value>oozie:action:T=sqoop:W=old-movies:A=sqoop-node:ID=0000000-180823180623663-oozie-oozi-W</value></property>
<property><name>dfs.bytes-per-checksum</name><value>512</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.end-notification.max.retry.interval</name><value>5000</value><source>mapred-default.xml</source></property>
<property><name>ipc.client.connect.retry.interval</name><value>1000</value><source>core-default.xml</source></property>
<property><name>fs.s3a.multipart.size</name><value>67108864</value><source>programatically</source></property>
<property><name>yarn.app.mapreduce.am.command-opts</name><value>-Xmx200m -Djava.io.tmpdir=./tmp</value><source>programatically</source></property>
<property><name>yarn.nodemanager.process-kill-wait.ms</name><value>2000</value><source>yarn-default.xml</source></property>
<property><name>yarn.timeline-service.state-store-class</name><value>org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore</value><source>programatically</source></property>
<property><name>yarn.nodemanager.container.stderr.tail.bytes</name><value>4096</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.safemode.min.datanodes</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>yarn.timeline-service.client.fd-clean-interval-secs</name><value>60</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.speculative.minimum-allowed-tasks</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.write.stale.datanode.ratio</name><value>1.0f</value><source>programatically</source></property>
<property><name>hadoop.jetty.logs.serve.aliases</name><value>true</value><source>core-default.xml</source></property>
<property><name>oozie.sqoop.args.size</name><value>13</value><source>programatically</source></property>
<property><name>yarn.resourcemanager.webapp.proxyuser.hcat.groups</name><value>*</value><source>programatically</source></property>
<property><name>mapreduce.reduce.shuffle.fetch.retry.timeout-ms</name><value>30000</value><source>programatically</source></property>
<property><name>fs.du.interval</name><value>600000</value><source>core-default.xml</source></property>
<property><name>yarn.resourcemanager.webapp.proxyuser.hcat.hosts</name><value>*</value><source>programatically</source></property>
<property><name>mapreduce.tasktracker.dns.nameserver</name><value>default</value><source>mapred-default.xml</source></property>
<property><name>yarn.sharedcache.admin.address</name><value>0.0.0.0:8047</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.admin.reduce.child.java.opts</name><value>-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.6.5.0-292</value><source>programatically</source></property>
<property><name>hadoop.custom-extensions.root</name><value>/hdp/ext/2.6/hadoop</value><source>programatically</source></property>
<property><name>mapred.job.reduce.memory.mb</name><value>250</value><source>programatically</source></property>
<property><name>hadoop.security.random.device.file.path</name><value>/dev/urandom</value><source>core-default.xml</source></property>
<property><name>mapreduce.task.merge.progress.records</name><value>10000</value><source>mapred-default.xml</source></property>
<property><name>dfs.webhdfs.enabled</name><value>true</value><source>programatically</source></property>
<property><name>hadoop.registry.secure</name><value>false</value><source>core-default.xml</source></property>
<property><name>hadoop.ssl.client.conf</name><value>ssl-client.xml</value><source>core-default.xml</source></property>
<property><name>mapreduce.job.counters.max</name><value>130</value><source>programatically</source></property>
<property><name>yarn.nodemanager.localizer.fetch.thread-count</name><value>4</value><source>yarn-default.xml</source></property>
<property><name>io.mapfile.bloom.size</name><value>1048576</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.localizer.client.thread-count</name><value>5</value><source>yarn-default.xml</source></property>
<property><name>fs.automatic.close</name><value>true</value><source>core-default.xml</source></property>
<property><name>mapreduce.task.profile</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.recovery.compaction-interval-secs</name><value>3600</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.edit.log.autoroll.multiplier.threshold</name><value>2.0</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.task.combine.progress.records</name><value>10000</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.shuffle.ssl.file.buffer.size</name><value>65536</value><source>mapred-default.xml</source></property>
<property><name>yarn.app.mapreduce.client.job.max-retries</name><value>30</value><source>programatically</source></property>
<property><name>fs.swift.impl</name><value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.container.log.backups</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction</name><value>0.75f</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.backup.address</name><value>0.0.0.0:50100</value><source>hdfs-default.xml</source></property>
<property><name>dfs.client.https.need-auth</name><value>false</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.app-submission.cross-platform</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.job.name</name><value>oozie:action:T=sqoop:W=old-movies:A=sqoop-node:ID=0000000-180823180623663-oozie-oozi-W</value><source>because mapred.job.name is deprecated</source></property>
<property><name>yarn.timeline-service.ttl-enable</name><value>true</value><source>programatically</source></property>
<property><name>hadoop.security.group.mapping.ldap.conversion.rule</name><value>none</value><source>core-default.xml</source></property>
<property><name>dfs.user.home.dir.prefix</name><value>/user</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.keytab</name><value>/etc/krb5.keytab</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name><value>true</value><source>programatically</source></property>
<property><name>fs.azure.authorization.caching.enable</name><value>true</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.xattrs.enabled</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.admin-command-opts</name><value>-Dhdp.version=2.6.5.0-292</value><source>programatically</source></property>
<property><name>nfs.file.dump.dir</name><value>/tmp/.hdfs-nfs</value><source>programatically</source></property>
<property><name>dfs.client.write.exclude.nodes.cache.expiry.interval.millis</name><value>600000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.datanode.fileio.profiling.sampling.percentage</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>yarn.sharedcache.client-server.address</name><value>0.0.0.0:8045</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.jobtracker.restart.recover</name><value>false</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.map.skip.proc.count.autoincr</name><value>true</value><source>mapred-default.xml</source></property>
<property><name>dfs.namenode.datanode.registration.ip-hostname-check</name><value>true</value><source>hdfs-default.xml</source></property>
<property><name>dfs.image.transfer.chunksize</name><value>65536</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.webapp.cross-origin.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.runtime.linux.docker.privileged-containers.allowed</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>hadoop.security.instrumentation.requires.admin</name><value>false</value><source>core-default.xml</source></property>
<property><name>io.compression.codec.bzip2.library</name><value>system-native</value><source>core-default.xml</source></property>
<property><name>yarn.nodemanager.webapp.rest-csrf.methods-to-ignore</name><value>GET,OPTIONS,HEAD</value><source>yarn-default.xml</source></property>
<property><name>dfs.namenode.name.dir.restore</name><value>true</value><source>programatically</source></property>
<property><name>dfs.datanode.outliers.report.interval</name><value>1800000</value><source>hdfs-default.xml</source></property>
<property><name>dfs.namenode.resource.checked.volumes.minimum</name><value>1</value><source>hdfs-default.xml</source></property>
<property><name>hadoop.ssl.keystores.factory.class</name><value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value><source>core-default.xml</source></property>
<property><name>dfs.namenode.list.cache.directives.num.responses</name><value>100</value><source>hdfs-default.xml</source></property>
<property><name>fs.ftp.host</name><value>0.0.0.0</value><source>core-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size</name><value>10</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.log-aggregation.debug-enabled</name><value>false</value><source>programatically</source></property>
<property><name>s3.blocksize</name><value>67108864</value><source>core-default.xml</source></property>
<property><name>s3native.stream-buffer-size</name><value>4096</value><source>core-default.xml</source></property>
<property><name>mapreduce.jobtracker.taskscheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.dns.nameserver</name><value>default</value><source>hdfs-default.xml</source></property>
<property><name>yarn.nodemanager.resource.memory-mb</name><value>3000</value><source>programatically</source></property>
<property><name>yarn.log.server.web-service.url</name><value>http://sandbox-hdp.hortonworks.com:8188/ws/v1/applicationhistory</value><source>programatically</source></property>
<property><name>mapreduce.task.userlog.limit.kb</name><value>0</value><source>mapred-default.xml</source></property>
<property><name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name><value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,org.apache.hadoop.crypto.JceAesCtrCryptoCodec</value><source>core-default.xml</source></property>
<property><name>mapreduce.reduce.speculative</name><value>false</value><source>programatically</source></property>
<property><name>yarn.nodemanager.container-monitor.interval-ms</name><value>3000</value><source>programatically</source></property>
<property><name>yarn.node-labels.fs-store.impl.class</name><value>org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore</value><source>yarn-default.xml</source></property>
<property><name>net.topology.script.file.name</name><value>/etc/hadoop/conf/topology_script.py</value><source>programatically</source></property>
<property><name>yarn.nodemanager.kill-escape.launch-command-line</name><value>slider-agent,LLAP</value><source>programatically</source></property>
<property><name>dfs.replication.max</name><value>50</value><source>programatically</source></property>
<property><name>dfs.replication</name><value>1</value><source>programatically</source></property>
<property><name>yarn.client.failover-retries</name><value>0</value><source>yarn-default.xml</source></property>
<property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value><source>programatically</source></property>
<property><name>mapreduce.jobhistory.recovery.enable</name><value>true</value><source>programatically</source></property>
<property><name>mapreduce.job.classpath.files</name><value>hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/hadoop-azure-datalake-2.7.3.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/snappy-java-1.1.1.3.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/aws-java-sdk-core-1.10.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/avro-1.8.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/aws-java-sdk-kms-1.10.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/sqoop-1.4.6.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/azure-storage-5.4.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/jsr305-3.0.2.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/commons-io-2.4.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/jackson-annotations-2.4.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/jackson-core-2.4.4.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/mysql-connector-java.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/aws-java-sdk-s3-1.10.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/hadoop-azure-2.7.3.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/azure-keyvault-core-0.8.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/jackson-databind-2.4.4.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/okhttp-2.7.5.jar,hdfs://sandbox-hdp.hortonworks.com:80
20/user/oozie/share/lib/lib_20180618160835/sqoop/gcs-connector-1.8.1.2.6.5.0-292-shaded.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/hadoop-aws-2.7.3.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/guava-11.0.2.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/okio-1.6.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/xz-1.5.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/commons-compress-1.8.1.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/paranamer-2.7.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/azure-data-lake-store-sdk-2.2.5.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/hsqldb-1.8.0.7.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/commons-lang3-3.4.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/joda-time-2.9.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/sqoop/oozie-sharelib-sqoop-4.2.0.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/aws-java-sdk-core-1.10.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/aws-java-sdk-kms-1.10.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/aws-java-sdk-s3-1.10.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/azure-data-lake-store-sdk-2.2.5.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/azure-keyvault-core-0.8.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/azure-storage-5.4.0.jar,hdfs://sand
box-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/commons-lang3-3.4.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/gcs-connector-1.8.1.2.6.5.0-292-shaded.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/guava-11.0.2.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/hadoop-aws-2.7.3.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/hadoop-azure-2.7.3.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/hadoop-azure-datalake-2.7.3.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/jackson-annotations-2.4.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/jackson-core-2.4.4.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/jackson-databind-2.4.4.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/joda-time-2.9.6.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/json-simple-1.1.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/jsr305-3.0.2.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/okhttp-2.7.5.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/okio-1.6.0.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/oozie-hadoop-utils-hadoop-2-4.2.0.2.6.5.0-292.jar,hdfs://sandbox-hdp.hortonworks.com:8020/user/oozie/share/lib/lib_20180618160835/oozie/oozie-sharelib-oozie-4.2.0.2.6.5.0-292.jar</value><source>programatically</source></property>
<property><name>nfs.exports.allowed.hosts</name><value>* rw</value><source>programatically</source></property>
<property><name>yarn.sharedcache.checksum.algo.impl</name><value>org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.reduce.shuffle.memory.limit.percent</name><value>0.25</value><source>mapred-default.xml</source></property>
<property><name>file.replication</name><value>1</value><source>core-default.xml</source></property>
<property><name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name><value>org.apache.hadoop.mapreduce.task.reduce.Shuffle</value><source>mapred-default.xml</source></property>
<property><name>yarn.app.mapreduce.am.log.level</name><value>INFO</value><source>programatically</source></property>
<property><name>yarn.nodemanager.webapp.rest-csrf.enabled</name><value>false</value><source>yarn-default.xml</source></property>
<property><name>mapreduce.job.jvm.numtasks</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>dfs.datanode.fsdatasetcache.max.threads.per.volume</name><value>4</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.am.max-attempts</name><value>2</value><source>programatically</source></property>
<property><name>mapreduce.shuffle.connection-keep-alive.timeout</name><value>5</value><source>mapred-default.xml</source></property>
<property><name>yarn.timeline-service.plugin.enabled</name><value>true</value><source>programatically</source></property>
<property><name>hadoop.fuse.timer.period</name><value>5</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.job.reduces</name><value>1</value><source>mapred-default.xml</source></property>
<property><name>hadoop.security.group.mapping.ldap.connection.timeout.ms</name><value>60000</value><source>core-default.xml</source></property>
<property><name>job.end.notification.url</name><value>http://sandbox-hdp.hortonworks.com:11000/oozie/callback?id=0000000-180823180623663-oozie-oozi-W@sqoop-node&status=$jobStatus</value></property>
<property><name>yarn.app.mapreduce.am.job.task.listener.thread-count</name><value>30</value><source>mapred-default.xml</source></property>
<property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value><source>programatically</source></property>
<property><name>dfs.client.retry.policy.enabled</name><value>false</value><source>programatically</source></property>
<property><name>s3native.replication</name><value>3</value><source>core-default.xml</source></property>
<property><name>mapreduce.tasktracker.reduce.tasks.maximum</name><value>2</value><source>mapred-default.xml</source></property>
<property><name>fs.permissions.umask-mode</name><value>022</value><source>programatically</source></property>
<property><name>mapreduce.cluster.local.dir</name><value>${hadoop.tmp.dir}/mapred/local</value><source>mapred-default.xml</source></property>
<property><name>mapreduce.client.output.filter</name><value>FAILED</value><source>mapred-default.xml</source></property>
<property><name>yarn.nodemanager.pmem-check-enabled</name><value>false</value><source>programatically</source></property>
<property><name>dfs.client.failover.connection.retries.on.timeouts</name><value>0</value><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobtracker.instrumentation</name><value>org.apache.hadoop.mapred.JobTrackerMetricsInst</value><source>mapred-default.xml</source></property>
<property><name>ftp.replication</name><value>3</value><source>core-default.xml</source></property>
<property><name>yarn.timeline-service.webapp.rest-csrf.methods-to-ignore</name><value>GET,OPTIONS,HEAD</va
Labels: Apache Oozie
08-26-2018
08:47 PM
How do I enable the web console in Oozie?
Labels: Apache Oozie
08-26-2018
08:15 PM
Where can I find oozie-site.xml in the Ambari UI?
08-17-2018
12:09 AM
Hive works now, but Flume, Ranger, and Zeppelin Notebook still don't start. Could you help me with these as well? Thanks for your help!
08-16-2018
11:29 PM
Thanks for this. The only config change I made was resetting the MySQL password, following the best answer here: https://community.hortonworks.com/questions/203206/mysql-default-password-first-time-sandbox-login.html. However, instead of the command given there, I used: update user set authentication_string=PASSWORD('hadoop') where User='root';
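In case the exact syntax helps anyone, the full sequence looks roughly like this. It is only a sketch, assuming the MySQL 5.6/5.7 that ships with the sandbox (where the password column is authentication_string); the USE mysql and FLUSH PRIVILEGES steps are my additions of what is normally needed around that update:

```sql
-- Sketch of the password reset, assuming MySQL 5.6/5.7 on the sandbox.
-- Note the straight single quotes; curly quotes pasted from a web page
-- are not parsed as string delimiters by the mysql client.
USE mysql;
UPDATE user SET authentication_string = PASSWORD('hadoop') WHERE User = 'root';
FLUSH PRIVILEGES;
```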
08-16-2018
11:10 PM
I am new to this, so any help is much appreciated. PFA error logs.
stderr: /var/lib/ambari-agent/data/errors-1423.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 203, in <module>
HiveMetastore().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 978, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 56, in start
create_metastore_schema()
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 417, in create_metastore_schema
user = params.hive_user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName root -passWord [PROTECTED] -verbose' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/hive2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.5.0-292/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://sandbox-hdp.hortonworks.com/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: root
Thu Aug 16 22:25:41 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: java.sql.SQLException : Access denied for user 'root'@'sandbox-hdp.hortonworks.com' (using password: YES)
SQL Error code: 1045
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:80)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:133)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:187)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:291)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:277)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:526)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.sql.SQLException: Access denied for user 'root'@'sandbox-hdp.hortonworks.com' (using password: YES)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3973)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3909)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:873)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1710)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1226)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2188)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:776)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:386)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:76)
... 11 more
*** schemaTool failed ***
stdout: /var/lib/ambari-agent/data/output-1423.txt
2018-08-16 22:25:32,158 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-08-16 22:25:32,173 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-08-16 22:25:32,361 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-08-16 22:25:32,365 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-08-16 22:25:32,366 - Group['livy'] {}
2018-08-16 22:25:32,368 - Group['spark'] {}
2018-08-16 22:25:32,369 - Group['ranger'] {}
2018-08-16 22:25:32,369 - Group['hdfs'] {}
2018-08-16 22:25:32,370 - Group['zeppelin'] {}
2018-08-16 22:25:32,371 - Group['hadoop'] {}
2018-08-16 22:25:32,372 - Group['users'] {}
2018-08-16 22:25:32,372 - Group['knox'] {}
2018-08-16 22:25:32,372 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,373 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,375 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,376 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,377 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,378 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-08-16 22:25:32,379 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,380 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-08-16 22:25:32,380 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger'], 'uid': None}
2018-08-16 22:25:32,381 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-08-16 22:25:32,382 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2018-08-16 22:25:32,383 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,384 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,385 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,387 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-08-16 22:25:32,388 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,390 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,392 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-08-16 22:25:32,394 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,395 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,396 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,396 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,397 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,398 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-08-16 22:25:32,399 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-16 22:25:32,401 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-08-16 22:25:32,408 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-08-16 22:25:32,408 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-08-16 22:25:32,410 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-16 22:25:32,412 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-08-16 22:25:32,414 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-08-16 22:25:32,424 - call returned (0, '1014')
2018-08-16 22:25:32,424 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1014'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-08-16 22:25:32,430 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1014'] due to not_if
2018-08-16 22:25:32,430 - Group['hdfs'] {}
2018-08-16 22:25:32,431 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-08-16 22:25:32,432 - FS Type:
2018-08-16 22:25:32,432 - Directory['/etc/hadoop'] {'mode': 0755}
2018-08-16 22:25:32,446 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-08-16 22:25:32,447 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-08-16 22:25:32,467 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2018-08-16 22:25:32,474 - Skipping Execute[('setenforce', '0')] due to not_if
2018-08-16 22:25:32,475 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2018-08-16 22:25:32,477 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2018-08-16 22:25:32,478 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2018-08-16 22:25:32,482 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2018-08-16 22:25:32,484 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2018-08-16 22:25:32,492 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:32,500 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-08-16 22:25:32,501 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2018-08-16 22:25:32,502 - File['/usr/hdp/2.6.5.0-292/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2018-08-16 22:25:32,508 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:32,512 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2018-08-16 22:25:32,882 - MariaDB RedHat Support: false
2018-08-16 22:25:32,885 - Using hadoop conf dir: /usr/hdp/2.6.5.0-292/hadoop/conf
2018-08-16 22:25:32,917 - call['ambari-python-wrap /usr/bin/hdp-select status hive-server2'] {'timeout': 20}
2018-08-16 22:25:32,940 - call returned (0, 'hive-server2 - 2.6.5.0-292')
2018-08-16 22:25:32,941 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-08-16 22:25:32,966 - File['/var/lib/ambari-agent/cred/lib/CredentialUtil.jar'] {'content': DownloadSource('http://sandbox-hdp.hortonworks.com:8080/resources/CredentialUtil.jar'), 'mode': 0755}
2018-08-16 22:25:32,968 - Not downloading the file from http://sandbox-hdp.hortonworks.com:8080/resources/CredentialUtil.jar, because /var/lib/ambari-agent/tmp/CredentialUtil.jar already exists
2018-08-16 22:25:32,968 - checked_call[('/usr/lib/jvm/java/bin/java', '-cp', u'/var/lib/ambari-agent/cred/lib/*', 'org.apache.ambari.server.credentialapi.CredentialUtil', 'get', 'javax.jdo.option.ConnectionPassword', '-provider', u'jceks://file/var/lib/ambari-agent/cred/conf/hive_metastore/hive-site.jceks')] {}
2018-08-16 22:25:33,613 - checked_call returned (0, 'SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".\nSLF4J: Defaulting to no-operation (NOP) logger implementation\nSLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.\nAug 16, 2018 10:25:33 PM org.apache.hadoop.util.NativeCodeLoader <clinit>\nWARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\nhortonworks1')
2018-08-16 22:25:33,625 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmpZwhGjW 2>/tmp/tmpyGZY3f''] {'quiet': False}
2018-08-16 22:25:33,676 - call returned (1, '')
2018-08-16 22:25:33,676 - Execution of 'cat /var/run/hive/hive.pid 1>/tmp/tmpZwhGjW 2>/tmp/tmpyGZY3f' returned 1. cat: /var/run/hive/hive.pid: No such file or directory
2018-08-16 22:25:33,677 - Execute['ambari-sudo.sh kill '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1)'}
2018-08-16 22:25:33,683 - Skipping Execute['ambari-sudo.sh kill '] due to not_if
2018-08-16 22:25:33,684 - Execute['ambari-sudo.sh kill -9 '] {'not_if': '! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1) || ( sleep 5 && ! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1) )', 'ignore_failures': True}
2018-08-16 22:25:33,689 - Skipping Execute['ambari-sudo.sh kill -9 '] due to not_if
2018-08-16 22:25:33,690 - Execute['! (ls /var/run/hive/hive.pid >/dev/null 2>&1 && ps -p >/dev/null 2>&1)'] {'tries': 20, 'try_sleep': 3}
2018-08-16 22:25:33,695 - File['/var/run/hive/hive.pid'] {'action': ['delete']}
2018-08-16 22:25:33,696 - Pid file /var/run/hive/hive.pid is empty or does not exist
2018-08-16 22:25:33,700 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.0-292 -> 2.6.5.0-292
2018-08-16 22:25:33,703 - Directory['/etc/hive'] {'mode': 0755}
2018-08-16 22:25:33,703 - Directories to fill with configs: [u'/usr/hdp/current/hive-metastore/conf', u'/usr/hdp/current/hive-metastore/conf/conf.server']
2018-08-16 22:25:33,704 - Directory['/etc/hive/2.6.5.0-292/0'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2018-08-16 22:25:33,705 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/2.6.5.0-292/0', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2018-08-16 22:25:33,714 - Generating config: /etc/hive/2.6.5.0-292/0/mapred-site.xml
2018-08-16 22:25:33,714 - File['/etc/hive/2.6.5.0-292/0/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-08-16 22:25:33,764 - File['/etc/hive/2.6.5.0-292/0/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:33,765 - File['/etc/hive/2.6.5.0-292/0/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:33,767 - File['/etc/hive/2.6.5.0-292/0/hive-exec-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:33,770 - File['/etc/hive/2.6.5.0-292/0/hive-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:33,784 - File['/etc/hive/2.6.5.0-292/0/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:33,785 - Directory['/etc/hive/2.6.5.0-292/0/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'create_parents': True, 'mode': 0700}
2018-08-16 22:25:33,786 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/etc/hive/2.6.5.0-292/0/conf.server', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2018-08-16 22:25:33,804 - Generating config: /etc/hive/2.6.5.0-292/0/conf.server/mapred-site.xml
2018-08-16 22:25:33,805 - File['/etc/hive/2.6.5.0-292/0/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2018-08-16 22:25:33,868 - File['/etc/hive/2.6.5.0-292/0/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2018-08-16 22:25:33,869 - File['/etc/hive/2.6.5.0-292/0/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2018-08-16 22:25:33,872 - File['/etc/hive/2.6.5.0-292/0/conf.server/hive-exec-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2018-08-16 22:25:33,875 - File['/etc/hive/2.6.5.0-292/0/conf.server/hive-log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2018-08-16 22:25:33,876 - File['/etc/hive/2.6.5.0-292/0/conf.server/parquet-logging.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2018-08-16 22:25:33,877 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.jceks'] {'content': StaticFile('/var/lib/ambari-agent/cred/conf/hive_metastore/hive-site.jceks'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0640}
2018-08-16 22:25:33,878 - Writing File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.jceks'] because contents don't match
2018-08-16 22:25:33,879 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0600, 'configuration_attributes': {u'hidden': {u'javax.jdo.option.ConnectionPassword': u'HIVE_CLIENT,WEBHCAT_SERVER,HCAT,CONFIG_DOWNLOAD'}}, 'owner': 'hive', 'configurations': ...}
2018-08-16 22:25:33,890 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2018-08-16 22:25:33,890 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2018-08-16 22:25:34,044 - Generating Atlas Hook config file /usr/hdp/current/hive-metastore/conf/conf.server/atlas-application.properties
2018-08-16 22:25:34,044 - PropertiesFile['/usr/hdp/current/hive-metastore/conf/conf.server/atlas-application.properties'] {'owner': 'hive', 'group': 'hadoop', 'mode': 0644, 'properties': ...}
2018-08-16 22:25:34,047 - Generating properties file: /usr/hdp/current/hive-metastore/conf/conf.server/atlas-application.properties
2018-08-16 22:25:34,048 - File['/usr/hdp/current/hive-metastore/conf/conf.server/atlas-application.properties'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644}
2018-08-16 22:25:34,064 - Writing File['/usr/hdp/current/hive-metastore/conf/conf.server/atlas-application.properties'] because contents don't match
2018-08-16 22:25:34,065 - XmlConfig['hivemetastore-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0600, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': {u'hive.service.metrics.hadoop2.component': u'hivemetastore', u'hive.metastore.metrics.enabled': u'true', u'hive.service.metrics.file.location': u'/var/log/hive/hivemetastore-report.json', u'hive.service.metrics.reporter': u'JSON_FILE, JMX, HADOOP2'}}
2018-08-16 22:25:34,073 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hivemetastore-site.xml
2018-08-16 22:25:34,073 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hivemetastore-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0600, 'encoding': 'UTF-8'}
2018-08-16 22:25:34,082 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2018-08-16 22:25:34,083 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2018-08-16 22:25:34,086 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2018-08-16 22:25:34,087 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://sandbox-hdp.hortonworks.com:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2018-08-16 22:25:34,088 - Not downloading the file from http://sandbox-hdp.hortonworks.com:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2018-08-16 22:25:34,091 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hadoop-metrics2-hivemetastore.properties'] {'content': Template('hadoop-metrics2-hivemetastore.properties.j2'), 'owner': 'hive', 'group': 'hadoop', 'mode': 0600}
2018-08-16 22:25:34,092 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2018-08-16 22:25:34,093 - Directory['/tmp/hive'] {'owner': 'hive', 'create_parents': True, 'mode': 0777}
2018-08-16 22:25:34,094 - Directory['/var/run/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2018-08-16 22:25:34,095 - Directory['/var/log/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2018-08-16 22:25:34,095 - Directory['/var/lib/hive'] {'owner': 'hive', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2018-08-16 22:25:34,097 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -initSchema -dbType mysql -userName root -passWord [PROTECTED] -verbose'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-server2-hive2/bin/schematool -info -dbType mysql -userName root -passWord [PROTECTED] -verbose'", 'user': 'hive'}
Command failed after 1 tries
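The failure in the log above is MySQL error 1045: schematool connects as 'root' from sandbox-hdp.hortonworks.com and MySQL rejects the password. A minimal sketch of how to verify the credentials and, if needed, grant access from the metastore host — treat this as a config-style fragment: the password placeholder is hypothetical, so substitute whatever is configured in Ambari under Hive > Database, and run the grant in a MySQL shell on the sandbox as a privileged user:

```shell
# Verify the credentials schematool is using (prompts for the password
# configured in Ambari -> Hive -> Database; '<db-password>' below is a
# placeholder, not the real value):
mysql -u root -p -h sandbox-hdp.hortonworks.com -e 'SELECT 1;'

# If access is denied, grant root access on the hive database from the
# metastore host (GRANT ... IDENTIFIED BY works on the MySQL 5.x that
# ships with the HDP sandbox; newer MySQL needs a separate CREATE USER):
mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'sandbox-hdp.hortonworks.com' IDENTIFIED BY '<db-password>';
FLUSH PRIVILEGES;
SQL
```

After the grant succeeds, retrying the failed operation from Ambari should let schematool get past the "Failed to get schema version" step.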
Tags: Data Processing, Hive
Labels: Apache Hive
08-16-2018 10:38 PM
My Hive services won't start after running these commands.
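If the metastore schema was never initialized because of the failure above, Hive will keep failing to start until the database access is fixed. A hedged sketch of how to check and restart, reusing the paths shown in the log (the admin/admin credentials and the cluster name "Sandbox" are sandbox defaults and assumptions here; '<db-password>' is a placeholder):

```shell
# Re-run schematool in read-only -info mode, with the same conf dir the
# Ambari log shows, to confirm the metastore database is now reachable:
export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server
/usr/hdp/current/hive-server2-hive2/bin/schematool -info -dbType mysql \
  -userName root -passWord '<db-password>' -verbose

# Once that succeeds, start Hive via the Ambari REST API (or use the
# Ambari UI: Services -> Hive -> Service Actions -> Start):
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start Hive"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://sandbox-hdp.hortonworks.com:8080/api/v1/clusters/Sandbox/services/HIVE
```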