Member since: 04-02-2019
Posts: 36
Kudos Received: 0
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9708 | 05-21-2019 10:54 PM |
| | 13222 | 05-15-2019 10:50 PM |
06-22-2019 02:05 AM
I am also searching for it on the internet. I've found a couple of people who are having the same issue and tried to follow their solutions, but unfortunately they didn't work... one of them is: http://morecoder.com/article/1097655.html

I have tried many things and changed a couple of configurations, so I am not sure if we are on the same page or not... anyway, this is the stderr I am getting now:

Log Type: stderr
Log Upload Time: Sat Jun 22 11:57:38 +0400 2019
Log Length: 937
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/yarn/nm/filecache/159/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/yarn/nm/filecache/23/3.0.0-cdh6.2.0-mr-framework.tar.gz/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
Try --help for usage instructions.
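The final "Try --help for usage instructions." line is Sqoop rejecting an argument it doesn't recognize. Oozie splits the `<command>` string on whitespace only, with no shell interpretation, so each line-continuation backslash arrives at Sqoop as its own literal argument (the `oozie.sqoop.args.N` entries in the job-configuration dump further down show exactly that). A minimal sketch of this splitting behaviour, using a shortened stand-in for the real command:

```shell
# Illustration only: Oozie word-splits <command> on whitespace with no
# shell escaping, so a trailing "\" becomes a literal argument of its own.
cmd='import \
--connect jdbc:sqlserver://11.11.11.11;database=SQL_Training'

set -- $cmd            # plain word-splitting, as the Oozie launcher does
printf '%s\n' "$@"     # the lone "\" shows up as its own argument
```

This is why the same command succeeds in a terminal (where the shell consumes the backslashes and quotes) but fails under Oozie.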
06-20-2019 12:41 AM
OK... this is another try, on a different table:

<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>masternode:8032</job-tracker>
<name-node>hdfs://NameServiceOne</name-node>
<command>import \
--connect 'jdbc:sqlserver://11.11.11.11;database=SQL_Training' \
--username SQL_Training_user --password SQL_Training_user \
--table BigDataTest -m 1 --check-column lastmodified \
--merge-key id \
--incremental lastmodified \
--compression-codec=snappy \
--target-dir /user/hive/warehouse/dwh_db_atlas_jrtf.db/BigDataTest \
--hive-table BigDataTest \
--map-column-hive lastmodified=timestamp \
--fields-terminated-by '\001' --fields-terminated-by '\n'</command>
<configuration />
</sqoop>

but same error!
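Since Oozie's `<command>` element gets split on whitespace with no shell processing, the line-continuation backslashes and single quotes above are handed to Sqoop as literal arguments. The sqoop-action schema also accepts a sequence of `<arg>` elements, one per token, which sidesteps all quoting and continuation issues. A hedged rewrite of the same action in that form (also assuming the second `--fields-terminated-by` was meant to be `--lines-terminated-by`) could look like:

```xml
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
    <job-tracker>masternode:8032</job-tracker>
    <name-node>hdfs://NameServiceOne</name-node>
    <arg>import</arg>
    <arg>--connect</arg>
    <arg>jdbc:sqlserver://11.11.11.11;database=SQL_Training</arg>
    <arg>--username</arg>
    <arg>SQL_Training_user</arg>
    <arg>--password</arg>
    <arg>SQL_Training_user</arg>
    <arg>--table</arg>
    <arg>BigDataTest</arg>
    <arg>-m</arg>
    <arg>1</arg>
    <arg>--check-column</arg>
    <arg>lastmodified</arg>
    <arg>--merge-key</arg>
    <arg>id</arg>
    <arg>--incremental</arg>
    <arg>lastmodified</arg>
    <arg>--compression-codec=snappy</arg>
    <arg>--target-dir</arg>
    <arg>/user/hive/warehouse/dwh_db_atlas_jrtf.db/BigDataTest</arg>
    <arg>--hive-table</arg>
    <arg>BigDataTest</arg>
    <arg>--map-column-hive</arg>
    <arg>lastmodified=timestamp</arg>
    <arg>--fields-terminated-by</arg>
    <arg>\001</arg>
    <arg>--lines-terminated-by</arg>
    <arg>\n</arg>
    <configuration />
</sqoop>
```

Note each value (like the JDBC URL) is its own `<arg>`, unquoted: no shell ever sees these strings, so no quoting is needed.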
06-19-2019 11:31 PM
I hope this is what you want for the workflow:

<workflow-app name="Batch job for query-sqoop1" xmlns="uri:oozie:workflow:0.5">
<start to="sqoop-fde5"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="sqoop-fde5">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>import \
--connect 'jdbc:sqlserver://11.11.11.11;database=DBXYZ' \
--username theUser --password thePassword \
--table category -m 1 --check-column LastEditOn \
--merge-key 'Reference ID' \
--incremental lastmodified \
--compression-codec=snappy \
--target-dir /user/hive/warehouse/dwh_db_atlas_jrtf.db/category \
--hive-table category \
--map-column-hive LastEditOn=timestamp,CreatedOn=timestamp \
--fields-terminated-by '\001' --fields-terminated-by '\n'</command>
</sqoop>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>

For the job configuration, I am really not sure where to find it. The one I can reach requires endless scrolling and can't be captured in a screenshot in any way... so would you please give me the path to the job configuration?
06-19-2019 01:03 AM
I am sorry, but there is nothing wrong with the syntax: if I run the same command on the terminal it completes successfully. I do have a doubt regarding security, because of the following lines in the log:

Caused by: java.lang.SecurityException: Intercepted System.exit(1)
at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57)

I would also like to note that this is my first attempt to run a Sqoop script from Hue after a fresh installation of CDH 6.2... so I am afraid there is something I've missed in the configuration, but I really can't find it 😞
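For what it's worth, `Intercepted System.exit(1)` is usually not the root cause: the Oozie launcher installs a SecurityManager that traps the `exit(1)` Sqoop calls after it has already failed, so the real error appears earlier in the container logs. A sketch for pulling the full aggregated launcher logs (the application ID here is a placeholder; substitute the failed launcher job's ID from Hue's job browser or the YARN ResourceManager UI):

```shell
# Placeholder application ID -- substitute the failed Oozie launcher
# job's ID as shown in Hue or the ResourceManager UI.
APP_ID=application_1560674439000_0001

# Aggregated container logs for the launcher, stderr included.
# Guarded so the snippet is harmless outside a cluster node.
if command -v yarn >/dev/null 2>&1; then
    yarn logs -applicationId "$APP_ID"
else
    echo "run on a cluster node: yarn logs -applicationId $APP_ID"
fi
```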
06-16-2019 01:56 AM
That is a very long log... This forum does not allow more than 50K characters, so the following is the last 50K characters of the log generated:

dfs.namenode.checkpoint.dir : file://${hadoop.tmp.dir}/dfs/namesecondary
dfs.webhdfs.rest-csrf.browser-useragents-regex : ^Mozilla.*,^Opera.*
dfs.namenode.top.windows.minutes : 1,5,25
dfs.client.use.legacy.blockreader.local : false
mapreduce.job.maxtaskfailures.per.tracker : 3
mapreduce.shuffle.max.connections : 0
net.topology.node.switch.mapping.impl : org.apache.hadoop.net.ScriptBasedMapping
hadoop.kerberos.keytab.login.autorenewal.enabled : false
yarn.client.application-client-protocol.poll-interval-ms : 200
mapreduce.fileoutputcommitter.marksuccessfuljobs : true
yarn.nodemanager.localizer.address : ${yarn.nodemanager.hostname}:8040
dfs.namenode.list.cache.pools.num.responses : 100
nfs.server.port : 2049
dfs.namenode.https-address.NameServiceOne.namenode417 : masternode:9871
hadoop.proxyuser.HTTP.hosts : *
dfs.checksum.type : CRC32C
fs.s3a.readahead.range : 64K
dfs.client.read.short.circuit.replica.stale.threshold.ms : 1800000
dfs.ha.namenodes.NameServiceOne : namenode417,namenode434
ha.zookeeper.parent-znode : /hadoop-ha
yarn.sharedcache.admin.thread-count : 1
yarn.nodemanager.resource.cpu-vcores : -1
mapreduce.jobhistory.http.policy : HTTP_ONLY
fs.s3a.attempts.maximum : 20
dfs.datanode.lazywriter.interval.sec : 60
yarn.log-aggregation.retain-check-interval-seconds : -1
yarn.resourcemanager.node-ip-cache.expiry-interval-secs : -1
yarn.timeline-service.client.fd-clean-interval-secs : 60
fs.wasbs.impl : org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure
dfs.federation.router.reader.count : 1
hadoop.ssl.keystores.factory.class : org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.zk.num-retries : 1000
mapreduce.job.split.metainfo.maxsize : 10000000
hadoop.security.random.device.file.path : /dev/urandom
yarn.client.nodemanager-connect.max-wait-ms : 180000
yarn.app.mapreduce.client-am.ipc.max-retries : 3
dfs.namenode.snapshotdiff.allow.snap-root-descendant : true
yarn.nodemanager.container-diagnostics-maximum-size : 10000
yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage : false
dfs.namenode.ec.system.default.policy : RS-6-3-1024k
dfs.replication.max : 512
dfs.datanode.https.address : 0.0.0.0:9865
dfs.ha.standby.checkpoints : true
ipc.client.kill.max : 10
mapreduce.job.committer.setup.cleanup.needed : true
dfs.client.domain.socket.data.traffic : false
yarn.nodemanager.localizer.cache.target-size-mb : 10240
yarn.resourcemanager.admin.client.thread-count : 1
hadoop.security.group.mapping.ldap.connection.timeout.ms : 60000
yarn.timeline-service.store-class : org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
yarn.resourcemanager.nm-container-queuing.queue-limit-stdev : 1.0f
yarn.resourcemanager.zk-appid-node.split-index : 0
hadoop.tmp.dir : /tmp/hadoop-${user.name}
dfs.domain.socket.disable.interval.seconds : 1
fs.s3a.etag.checksum.enabled : false
hadoop.security.kms.client.failover.sleep.base.millis : 100
yarn.node-labels.configuration-type : centralized
fs.s3a.retry.interval : 500ms
dfs.datanode.http.internal-proxy.port : 0
yarn.timeline-service.ttl-ms : 604800000
mapreduce.task.exit.timeout.check-interval-ms : 20000
oozie.sqoop.args.7 : \
--table
oozie.sqoop.args.8 : category
mapreduce.map.speculative : false
oozie.sqoop.args.5 : --password
oozie.sqoop.args.6 : myUsername
yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms : 1000
yarn.timeline-service.recovery.enabled : false
oozie.sqoop.args.9 : -m
yarn.nodemanager.recovery.dir : ${hadoop.tmp.dir}/yarn-nm-recovery
mapreduce.job.counters.max : 120
dfs.namenode.name.cache.threshold : 10
oozie.sqoop.args.0 : import
dfs.namenode.caching.enabled : true
dfs.namenode.max.full.block.report.leases : 6
oozie.sqoop.args.3 : \
--username
yarn.nodemanager.linux-container-executor.cgroups.delete-delay-ms : 20
dfs.namenode.max.extra.edits.segments.retained : 10000
oozie.sqoop.args.4 : myUsername
dfs.webhdfs.user.provider.user.pattern : ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
yarn.webapp.ui2.enable : false
oozie.sqoop.args.1 : \
--connect
oozie.sqoop.args.2 : 'jdbc:sqlserver://myServer;database=myDB'
dfs.client.mmap.enabled : true
mapreduce.map.log.level : INFO
dfs.datanode.ec.reconstruction.threads : 8
hadoop.fuse.timer.period : 5
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms : 1000
hadoop.zk.timeout-ms : 10000
ha.health-monitor.check-interval.ms : 1000
dfs.client.hedged.read.threshold.millis : 500
yarn.resourcemanager.fs.state-store.retry-interval-ms : 1000
mapreduce.output.fileoutputformat.compress : false
yarn.sharedcache.store.in-memory.staleness-period-mins : 10080
dfs.client.write.byte-array-manager.count-limit : 2048
mapreduce.application.framework.path : hdfs://NameServiceOne//user/yarn/mapreduce/mr-framework/3.0.0-cdh6.2.0-mr-framework.tar.gz#mr-framework
hadoop.security.group.mapping.providers.combined : true
fs.AbstractFileSystem.har.impl : org.apache.hadoop.fs.HarFs
mapreduce.job.running.map.limit : 0
yarn.nodemanager.webapp.address : ${yarn.nodemanager.hostname}:8042
mapreduce.reduce.input.buffer.percent : 0.0
mapreduce.job.cache.files : hdfs://NameServiceOne/user/hue/oozie/deployments/_admin_-oozie-312-1560674439.04/lib/hive-site.xml#hive-site.xml,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-exec-core.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-security-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/slider-core-0.90.2-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/aopalliance-repackaged-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jcodings-1.0.18.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/fst-2.50.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ST4-4.0.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-shaded-protobuf.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims-0.23.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ecj-4.4.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/libthrift-0.9.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-crypto-1.0.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-server-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-schemas-3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/transaction-api-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-webapp-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/joda-time-2.9.9.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/metrics-core-3.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-hcatalog-core.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-tez.jar,hdfs://NameServiceOne/u
ser/oozie/share/lib/lib_20190521153117/sqoop/asm-commons-6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-container-servlet-core-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/HikariCP-java7-2.4.12.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-data-core.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hk2-utils-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-data-hive.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/stringtemplate-3.2.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/calcite-core-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/bonecp-0.8.0.RELEASE.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-core-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/htrace-core4-4.1.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/tephra-api-0.6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/curator-client-2.7.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-encoding.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/avro-ipc.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-cli.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/fastutil-7.2.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/guava-11.0.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-api-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-avro.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/oro-2.0.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190
521153117/sqoop/commons-lang-2.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.servlet.jsp-api-2.3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/asm-tree-6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/avro.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/bcpkix-jdk15on-1.60.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/snappy-0.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-pool-1.5.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jpam-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-servlet-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-client.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/osgi-resource-locator-1.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-annotations-2.9.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/httpcore-4.4.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-archives.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/slf4j-api-1.7.25.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/tephra-hbase-compat-1.0-0.6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hsqldb-1.8.0.10.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-data-mapreduce.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-resourcemanager.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-jaas-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/calcite-linq4j-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share
/lib/lib_20190521153117/sqoop/commons-codec-1.9.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/gson-2.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-protocol-shaded.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-hadoop.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hk2-api-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jta-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-dbcp-1.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-common-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/aggdesigner-algorithm-6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/metrics-json-3.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/opencsv-2.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.inject-1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-column.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-core-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.ws.rs-api-2.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hk2-locator-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javolution-5.5.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/zookeeper.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-web-proxy.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-ant.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/groovy-all-2.4.11.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_2019052115
3117/sqoop/parquet-hadoop-bundle.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-protocol.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/aopalliance-1.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-http-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-api-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/mssql-jdbc-6.2.1.jre7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-common-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/joni-2.1.11.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/antlr-2.7.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/apache-jstl-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-zookeeper.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-jndi-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/libfb303-0.9.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/snappy-java-1.1.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.el-3.0.1-b11.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-client-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-plus-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-httpclient-3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-shaded-netty.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/re2j-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/json-io-2.5.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javassist-3.20.0-GA.jar,hdfs://Na
meServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-media-jaxb-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/apache-curator-2.12.0.pom#apache-curator-2.12.0.pom,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-classification.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-databind-2.9.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-replication.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-storage-api.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.servlet-api-3.1.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-client-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.annotation-api-1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/curator-framework-2.7.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims-scheduler.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-server.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-client.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-metrics.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/bcprov-jdk15on-1.60.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-procedure.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-rewrite-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-serde.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/taglibs-standard-spec-1.2.5.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/guice-3.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/calcite-druid-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib
_20190521153117/sqoop/sqoop.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.jdo-3.2.0-m3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/datanucleus-core-4.1.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/metrics-jvm-3.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/logredactor-2.0.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-web-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/apache-jsp-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/java-util-1.9.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-metrics-api.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jcommander-1.30.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ant-1.9.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-metastore.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-runner-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/HikariCP-2.6.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-compress-1.9.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/guice-assistedinject-3.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-slf4j-impl-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ehcache-3.3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-hadoop2-compat.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/httpclient-4.5.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoo
p/hbase-mapreduce.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.inject-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/oozie-sharelib-sqoop.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/objenesis-1.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-common-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-server-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/velocity-1.5.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/janino-2.7.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-http.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/stax-api-1.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-lang3-3.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/avatica-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/antlr-runtime-3.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-registry.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-server.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/leveldbjni-all-1.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-util-ajax-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jdo-api-3.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jline-2.12.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/taglibs-standard-impl-1.2.5.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-core-asl-1.9.13.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-
io-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ant-launcher-1.9.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/audience-annotations-0.5.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-hadoop-compatibility.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-server-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.servlet.jsp-2.3.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/derby-10.14.1.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/findbugs-annotations-1.3.9-1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-discovery-core-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/xz-1.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/geronimo-jcache_1.0_spec-1.0-alpha-1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-discovery-api-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-hadoop-compat.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/datanucleus-api-jdo-4.2.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/netty-3.10.6.Final.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-xml-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ivy-2.4.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/paranamer-2.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-compiler-2.7.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-client-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-shaded-miscellaneous.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib
_20190521153117/sqoop/avro-mapred-hadoop2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-applicationhistoryservice.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-servlet-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/datanucleus-rdbms-4.1.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/tephra-core-0.6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-service-rpc.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jsr305-3.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-mapper-asl-1.9.13-cloudera.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/json-20090211.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/validation-api-1.1.0.Final.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-service.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-1.2-api-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-guava-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-zookeeper-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-core-2.9.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-format.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-1.2.17.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/oozie-sharelib-sqoop-5.1.0-cdh6.2.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-annotations-9.3.25.v20180904.jar,hdfs://NameSe
rviceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-io-2.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-jackson.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-api-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/disruptor-3.3.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-orc.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/oozie/oozie-sharelib-oozie-5.1.0-cdh6.2.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/oozie/oozie-sharelib-oozie.jar
dfs.client.slow.io.warning.threshold.ms : 30000
fs.s3a.multipart.size : 100M
yarn.app.mapreduce.am.job.committer.commit-window : 10000
dfs.qjournal.new-epoch.timeout.ms : 120000
yarn.timeline-service.webapp.rest-csrf.enabled : false
hadoop.proxyuser.flume.hosts : *
dfs.namenode.edits.asynclogging : true
yarn.timeline-service.reader.class : org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl
yarn.app.mapreduce.am.staging-dir.erasurecoding.enabled : false
dfs.blockreport.incremental.intervalMsec : 0
dfs.datanode.network.counts.cache.max.size : 2147483647
dfs.namenode.https-address.NameServiceOne.namenode434 : node3:9871
yarn.timeline-service.writer.class : org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl
mapreduce.ifile.readahead : true
dfs.qjournal.get-journal-state.timeout.ms : 120000
yarn.timeline-service.entity-group-fs-store.summary-store : org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
dfs.client.socketcache.capacity : 16
fs.s3a.s3guard.ddb.table.create : false
dfs.client.retry.policy.spec : 10000,6,60000,10
mapreduce.output.fileoutputformat.compress.codec : org.apache.hadoop.io.compress.DefaultCodec
fs.s3a.socket.recv.buffer : 8192
dfs.datanode.fsdatasetcache.max.threads.per.volume : 4
dfs.namenode.reencrypt.batch.size : 1000
yarn.sharedcache.store.in-memory.initial-delay-mins : 10
mapreduce.jobhistory.webapp.address : masternode:19888
fs.adl.impl : org.apache.hadoop.fs.adl.AdlFileSystem
fs.AbstractFileSystem.gs.impl : com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS
mapreduce.task.userlog.limit.kb : 0
fs.s3a.connection.ssl.enabled : true
yarn.router.rmadmin.interceptor-class.pipeline : org.apache.hadoop.yarn.server.router.rmadmin.DefaultRMAdminRequestInterceptor
yarn.sharedcache.webapp.address : 0.0.0.0:8788
hadoop.fuse.connection.timeout : 300
dfs.http.client.retry.policy.spec : 10000,6,60000,10
yarn.resourcemanager.rm.container-allocation.expiry-interval-ms : 600000
ipc.server.max.connections : 0
yarn.app.mapreduce.am.resource.mb : 3072
hadoop.security.groups.cache.secs : 300
dfs.datanode.peer.stats.enabled : false
dfs.replication : 3
mapreduce.shuffle.transfer.buffer.size : 131072
dfs.namenode.audit.log.async : false
hadoop.security.group.mapping.ldap.directory.search.timeout : 10000
dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold : 10737418240
dfs.datanode.disk.check.timeout : 10m
yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts : 3
fs.s3a.committer.threads : 8
dfs.checksum.combine.mode : MD5MD5CRC
yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs : 3600
yarn.scheduler.maximum-allocation-vcores : 6
yarn.nodemanager.sleep-delay-before-sigkill.ms : 250
fs.AbstractFileSystem.abfs.impl : org.apache.hadoop.fs.azurebfs.Abfs
mapreduce.job.acl-modify-job :
fs.automatic.close : true
fs.azure.sas.expiry.period : 90d
dfs.qjm.operations.timeout : 60s
hadoop.proxyuser.httpfs.hosts : *
dfs.namenode.stale.datanode.minimum.interval : 3
dfs.namenode.reencrypt.edek.threads : 10
dfs.federation.router.store.membership.expiration : 300000
hadoop.security.groups.cache.background.reload.threads : 3
mapreduce.input.fileinputformat.list-status.num-threads : 1
hadoop.security.group.mapping.ldap.posix.attr.gid.name : gidNumber
dfs.namenode.acls.enabled : false
dfs.client.short.circuit.replica.stale.threshold.ms : 1800000
dfs.namenode.resource.du.reserved : 104857600
dfs.federation.router.connection.clean.ms : 10000
dfs.client.server-defaults.validity.period.ms : 3600000
dfs.federation.router.metrics.class : org.apache.hadoop.hdfs.server.federation.metrics.FederationRPCPerformanceMonitor
mapreduce.shuffle.listen.queue.size : 128
mapreduce.jobhistory.intermediate-done-dir : ${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate
mapreduce.client.libjars.wildcard : true
dfs.federation.router.cache.ttl : 60000
yarn.nodemanager.recovery.compaction-interval-secs : 3600
dfs.namenode.edits.noeditlogchannelflush : false
mapreduce.reduce.shuffle.input.buffer.percent : 0.70
yarn.http.policy : HTTP_ONLY
mapreduce.map.maxattempts : 4
dfs.namenode.audit.loggers : default
io.serializations : org.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
hadoop.security.groups.cache.warn.after.ms : 5000
dfs.client.write.byte-array-manager.count-reset-time-period-ms : 10000
yarn.nodemanager.webapp.rest-csrf.custom-header : X-XSRF-Header
yarn.app.mapreduce.am.admin.user.env : LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH
dfs.namenode.snapshot.capture.openfiles : true
yarn.node-labels.fs-store.impl.class : org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore
hadoop.http.cross-origin.allowed-methods : GET,POST,HEAD
dfs.qjournal.queued-edits.limit.mb : 10
mapreduce.jobhistory.webapp.rest-csrf.enabled : false
dfs.http.policy : HTTP_ONLY
dfs.balancer.max-size-to-move : 10737418240
dfs.datanode.sync.behind.writes.in.background : false
hadoop.zk.acl : world:anyone:rwcda
yarn.nodemanager.container.stderr.pattern : {*stderr*,*STDERR*}
dfs.namenode.reencrypt.throttle.limit.updater.ratio : 1.0
mapreduce.cluster.local.dir : ${hadoop.tmp.dir}/mapred/local
hadoop.kerberos.kinit.command : kinit
dfs.namenode.secondary.https-address : 0.0.0.0:9869
dfs.namenode.metrics.logger.period.seconds : 600
dfs.block.access.token.lifetime : 600
dfs.ha.automatic-failover.enabled.NameServiceOne : true
dfs.namenode.delegation.token.max-lifetime : 604800000
dfs.datanode.drop.cache.behind.writes : false
dfs.mover.address : 0.0.0.0:0
dfs.block.placement.ec.classname : org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant
dfs.namenode.num.extra.edits.retained : 1000000
ipc.client.connect.max.retries.on.timeouts : 45
fs.viewfs.rename.strategy : SAME_MOUNTPOINT
fs.client.resolve.topology.enabled : false
hadoop.proxyuser.hive.hosts : *
yarn.resourcemanager.node-labels.provider.fetch-interval-ms : 1800000
yarn.nodemanager.container-metrics.enable : true
mapreduce.job.map.output.collector.class : org.apache.hadoop.mapred.MapTask$MapOutputBuffer
fs.s3a.fast.upload.buffer : disk
ha.health-monitor.connect-retry-interval.ms : 1000
dfs.namenode.edekcacheloader.initial.delay.ms : 3000
dfs.edit.log.transfer.bandwidthPerSec : 0
dfs.ha.tail-edits.in-progress : false
dfs.federation.router.heartbeat.interval : 5000
ssl.client.truststore.reload.interval : 10000
dfs.client.datanode-restart.timeout : 30s
io.mapfile.bloom.size : 1048576
hadoop.security.kms.client.authentication.retry-count : 1
dfs.client-write-packet-size : 65536
fs.ftp.data.connection.mode : ACTIVE_LOCAL_DATA_CONNECTION_MODE
fs.swift.impl : org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
yarn.resourcemanager.webapp.rest-csrf.methods-to-ignore : GET,OPTIONS,HEAD
mapreduce.job.max.map : -1
yarn.app.mapreduce.shuffle.log.backups : 0
ftp.blocksize : 67108864
dfs.namenode.kerberos.principal.pattern : *
yarn.resourcemanager.scheduler.monitor.enable : false
dfs.webhdfs.socket.connect-timeout : 60s
dfs.namenode.replication.max-streams : 2
nfs.allow.insecure.ports : true
yarn.sharedcache.nm.uploader.thread-count : 20
dfs.federation.router.admin.enable : true
yarn.app.mapreduce.client.job.retry-interval : 2000
yarn.scheduler.configuration.store.max-logs : 1000
hadoop.security.authorization : false
yarn.timeline-service.version : 1.0f
yarn.am.liveness-monitor.expiry-interval-ms : 600000
fs.har.impl.disable.cache : true
hadoop.proxyuser.hdfs.hosts : *
mapreduce.job.reduce.slowstart.completedmaps : 0.8
yarn.timeline-service.leveldb-timeline-store.path : ${hadoop.tmp.dir}/yarn/timeline
dfs.namenode.upgrade.domain.factor : ${dfs.replication}
mapreduce.jobhistory.minicluster.fixed.ports : false
mapreduce.application.classpath : $HADOOP_CLIENT_CONF_DIR,$PWD/mr-framework/*,$MR2_CLASSPATH
yarn.resourcemanager.delegation.token.max-lifetime : 604800000
yarn.resourcemanager.ha.automatic-failover.enabled : true
mapreduce.reduce.java.opts : -Djava.net.preferIPv4Stack=true
dfs.datanode.socket.write.timeout : 480000
dfs.namenode.accesstime.precision : 3600000
dfs.namenode.redundancy.considerLoad.factor : 2.0
yarn.resourcemanager.store.class : org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
io.mapfile.bloom.error.rate : 0.005
yarn.nodemanager.webapp.rest-csrf.enabled : false
yarn.timeline-service.leveldb-state-store.path : ${hadoop.tmp.dir}/yarn/timeline
hadoop.proxyuser.hive.groups : *
dfs.federation.router.rpc-address : 0.0.0.0:8888
fs.s3a.committer.staging.unique-filenames : true
dfs.namenode.support.allow.format : true
yarn.scheduler.configuration.zk-store.parent-path : /confstore
dfs.content-summary.limit : 5000
yarn.timeline-service.writer.flush-interval-seconds : 60
yarn.nodemanager.container-executor.class : org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
dfs.namenode.posix.acl.inheritance.enabled : true
dfs.datanode.outliers.report.interval : 30m
hadoop.security.kms.client.encrypted.key.cache.low-watermark : 0.3f
dfs.namenode.top.enabled : true
yarn.app.mapreduce.shuffle.log.separate : true
hadoop.user.group.static.mapping.overrides : dr.who=;
dfs.federation.router.http-address : 0.0.0.0:50071
fs.s3a.retry.throttle.interval : 1000ms
yarn.nodemanager.amrmproxy.address : 0.0.0.0:8049
mapreduce.jobhistory.webapp.rest-csrf.custom-header : X-XSRF-Header
yarn.webapp.xfs-filter.enabled : true
dfs.client.cached.conn.retry : 3
dfs.client.key.provider.cache.expiry : 864000000
dfs.namenode.path.based.cache.refresh.interval.ms : 30000
yarn.nodemanager.collector-service.thread-count : 5
dfs.block.replicator.classname : org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
dfs.namenode.fs-limits.max-directory-items : 1048576
dfs.ha.log-roll.period : 120s
yarn.nodemanager.runtime.linux.docker.capabilities : CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE
yarn.nodemanager.distributed-scheduling.enabled : false
ipc.client.fallback-to-simple-auth-allowed : false
yarn.minicluster.fixed.ports : false
yarn.nodemanager.remote-app-log-dir : /tmp/logs
yarn.timeline-service.entity-group-fs-store.scan-interval-seconds : 60
dfs.xframe.enabled : true
yarn.nodemanager.resource.percentage-physical-cpu-limit : 100
mapreduce.job.tags : oozie-564a124254f1fd53cb03553181f7e603
dfs.namenode.fs-limits.max-xattr-size : 16384
dfs.datanode.http.address : 0.0.0.0:9864
dfs.namenode.blocks.per.postponedblocks.rescan : 10000
fs.s3a.s3guard.cli.prune.age : 86400000
dfs.web.authentication.filter : org.apache.hadoop.hdfs.web.AuthFilter
dfs.namenode.maintenance.replication.min : 1
hadoop.jetty.logs.serve.aliases : true
dfs.webhdfs.ugi.expire.after.access : 600000
dfs.namenode.max.op.size : 52428800
mapreduce.jobhistory.admin.acl : *
mapreduce.job.reducer.unconditional-preempt.delay.sec : 300
yarn.app.mapreduce.am.hard-kill-timeout-ms : 10000
yarn.resourcemanager.display.per-user-apps : false
yarn.resourcemanager.node-removal-untracked.timeout-ms : 60000
yarn.resourcemanager.webapp.address : masternode:8088
mapreduce.jobhistory.recovery.enable : false
yarn.sharedcache.store.in-memory.check-period-mins : 720
dfs.client.test.drop.namenode.response.number : 0
fs.df.interval : 60000
fs.s3a.assumed.role.session.duration : 30m
mapreduce.job.cache.limit.max-single-resource-mb : 0
yarn.timeline-service.enabled : false
dfs.disk.balancer.block.tolerance.percent : 10
dfs.webhdfs.netty.high.watermark : 65535
mapreduce.task.profile : false
hadoop.http.cross-origin.allowed-headers : X-Requested-With,Content-Type,Accept,Origin
yarn.router.webapp.address : 0.0.0.0:8089
dfs.datanode.balance.max.concurrent.moves : 50
yarn.nodemanager.hostname : 0.0.0.0
mapreduce.task.exit.timeout : 60000
yarn.resourcemanager.nm-container-queuing.max-queue-length : 15
mapreduce.job.token.tracking.ids.enabled : false
yarn.scheduler.increment-allocation-mb : 512
fs.s3a.assumed.role.credentials.provider : org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
fs.azure.authorization.caching.enable : true
hadoop.security.kms.client.failover.sleep.max.millis : 2000
dfs.client.mmap.retry.timeout.ms : 300000
yarn.resourcemanager.webapp.rest-csrf.custom-header : X-XSRF-Header
yarn.resourcemanager.nm-container-queuing.max-queue-wait-time-ms : 100
mapreduce.jobhistory.move.thread-count : 3
dfs.permissions.enabled : true
fs.AbstractFileSystem.hdfs.impl : org.apache.hadoop.fs.Hdfs
yarn.nodemanager.container-localizer.log.level : INFO
hadoop.http.filter.initializers : org.apache.hadoop.http.lib.StaticUserWebFilter
yarn.timeline-service.http-authentication.simple.anonymous.allowed : true
yarn.nodemanager.runtime.linux.docker.allowed-container-networks : host,none,bridge
dfs.qjournal.accept-recovery.timeout.ms : 120000
yarn.sharedcache.client-server.thread-count : 50
fs.s3a.s3guard.ddb.max.retries : 9
fs.s3a.committer.magic.enabled : false
yarn.resourcemanager.resource-tracker.address : masternode:8031
mapreduce.jobhistory.jobname.limit : 50
dfs.domain.socket.path : /var/run/hdfs-sockets/dn
dfs.namenode.decommission.blocks.per.interval : 500000
dfs.qjournal.write-txns.timeout.ms : 20000
rpc.metrics.quantile.enable : false
yarn.federation.subcluster-resolver.class : org.apache.hadoop.yarn.server.federation.resolver.DefaultSubClusterResolverImpl
dfs.namenode.read-lock-reporting-threshold-ms : 5000
mapreduce.task.timeout : 600000
yarn.nodemanager.resource.memory-mb : -1
dfs.datanode.failed.volumes.tolerated : 0
yarn.nodemanager.disk-health-checker.min-healthy-disks : 0.25
mapreduce.framework.name : yarn
mapreduce.fileoutputcommitter.algorithm.version : 2
yarn.router.clientrm.interceptor-class.pipeline : org.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptor
yarn.sharedcache.nested-level : 3
fs.s3a.connection.timeout : 200000
hadoop.caller.context.signature.max.size : 40
hadoop.security.dns.log-slow-lookups.enabled : false
mapreduce.jobhistory.webapp.https.address : masternode:19890
file.client-write-packet-size : 65536
fs.s3a.s3guard.ddb.table.capacity.read : 500
ipc.client.ping : true
hadoop.proxyuser.oozie.hosts : *
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms : 30000
dfs.client.failover.max.attempts : 15
dfs.balancer.max-no-move-interval : 60000
yarn.nodemanager.opportunistic-containers-use-pause-for-preemption : false
yarn.nodemanager.webapp.cross-origin.enabled : false
mapreduce.job.encrypted-intermediate-data : false
dfs.client.read.shortcircuit.streams.cache.expiry.ms : 300000
yarn.minicluster.control-resource-monitoring : false
dfs.disk.balancer.enabled : false
dfs.webhdfs.oauth2.enabled : false
yarn.nodemanager.health-checker.script.timeout-ms : 1200000
yarn.resourcemanager.fs.state-store.num-retries : 0
hadoop.ssl.require.client.cert : false
mapreduce.jobhistory.keytab : /etc/security/keytab/jhs.service.keytab
hadoop.security.uid.cache.secs : 14400
yarn.resourcemanager.ha.automatic-failover.zk-base-path : /yarn-leader-election
yarn.intermediate-data-encryption.enable : false
mapreduce.job.speculative.speculative-cap-running-tasks : 0.1
dfs.datanode.block.id.layout.upgrade.threads : 12
dfs.client.context : default
yarn.system-metrics-publisher.enabled : false
dfs.namenode.delegation.token.renew-interval : 86400000
yarn.timeline-service.entity-group-fs-store.app-cache-size : 10
fs.AbstractFileSystem.s3a.impl : org.apache.hadoop.fs.s3a.S3A
mapreduce.job.redacted-properties : fs.s3a.access.key,fs.s3a.secret.key,fs.adl.oauth2.credential,dfs.adls.oauth2.credential,fs.azure.account.oauth2.client.secret
yarn.client.load.resource-types.from-server : false
ipc.client.tcpnodelay : true
hadoop.proxyuser.httpfs.groups : *
yarn.resourcemanager.metrics.runtime.buckets : 60,300,1440
dfs.blockreport.intervalMsec : 21600000
dfs.datanode.oob.timeout-ms : 1500,0,0,0
yarn.client.application-client-protocol.poll-timeout-ms : -1
zlib.compress.level : DEFAULT_COMPRESSION
mapreduce.job.sharedcache.mode : disabled
io.map.index.skip : 0
mapreduce.job.hdfs-servers : ${fs.defaultFS}
mapreduce.map.output.compress : true
hadoop.security.kms.client.encrypted.key.cache.num.refill.threads : 2
dfs.namenode.edekcacheloader.interval.ms : 1000
mapreduce.task.merge.progress.records : 10000
yarn.nodemanager.aux-services.mapreduce_shuffle.class : org.apache.hadoop.mapred.ShuffleHandler
dfs.namenode.missing.checkpoint.periods.before.shutdown : 3
tfile.fs.output.buffer.size : 262144
dfs.client.failover.connection.retries : 0
fs.du.interval : 600000
dfs.edit.log.transfer.timeout : 30000
dfs.namenode.top.window.num.buckets : 10
dfs.data.transfer.server.tcpnodelay : true
hadoop.zk.retry-interval-ms : 1000
yarn.sharedcache.uploader.server.address : 0.0.0.0:8046
dfs.http.client.failover.max.attempts : 15
fs.s3a.socket.send.buffer : 8192
dfs.client.block.write.locateFollowingBlock.retries : 7
hadoop.registry.zk.quorum : localhost:2181
mapreduce.jvm.system-properties-to-log : os.name,os.version,java.home,java.runtime.version,java.vendor,java.version,java.vm.name,java.class.path,java.io.tmpdir,user.dir,user.name
hadoop.http.cross-origin.allowed-origins : *
dfs.namenode.enable.retrycache : true
dfs.datanode.du.reserved : 0
hadoop.registry.system.acls : sasl:yarn@, sasl:mapred@, sasl:hdfs@
yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidia-docker-v1.endpoint : http://localhost:3476/v1.0/docker/cli
mapreduce.job.encrypted-intermediate-data.buffer.kb : 128
dfs.data.transfer.client.tcpnodelay : true
yarn.resourcemanager.webapp.xfs-filter.xframe-options : SAMEORIGIN
mapreduce.admin.user.env : LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH
mapreduce.task.profile.reduce.params : ${mapreduce.task.profile.params}
mapreduce.reduce.memory.mb : 0
hadoop.caller.context.enabled : false
hadoop.http.authentication.kerberos.principal : HTTP/_HOST@LOCALHOST
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb : 0
dfs.qjournal.prepare-recovery.timeout.ms : 120000
dfs.datanode.transferTo.allowed : true
oozie.action.rootlogger.log.level : INFO
hadoop.security.sensitive-config-keys :
secret$
password$
ssl.keystore.pass$
fs.s3.*[Ss]ecret.?[Kk]ey
fs.s3a.*.server-side-encryption.key
fs.azure.account.key.*
credential$
oauth.*secret
oauth.*password
oauth.*token
hadoop.security.sensitive-config-keys
mapreduce.client.completion.pollinterval : 5000
dfs.namenode.name.dir.restore : false
dfs.namenode.full.block.report.lease.length.ms : 300000
dfs.namenode.secondary.http-address : 0.0.0.0:9868
hadoop.http.logs.enabled : true
hadoop.security.group.mapping.ldap.read.timeout.ms : 60000
yarn.resourcemanager.max-log-aggregation-diagnostics-in-memory : 10
dfs.namenode.delegation.token.always-use : false
yarn.resourcemanager.webapp.https.address : masternode:8090
fs.s3a.retry.throttle.limit : ${fs.s3a.attempts.maximum}
dfs.client.read.striped.threadpool.size : 18
mapreduce.job.cache.limit.max-resources : 0
hadoop.proxyuser.HTTP.groups : *
--------------------
Setting up log4j2
log4j2 configuration file created at /yarn/nm/usercache/admin/appcache/application_1560674082717_0001/container_1560674082717_0001_01_000001/sqoop-log4j2.xml
Sqoop command arguments :
import
\
--connect
'jdbc:sqlserver://myServer;database=myDB'
\
--username
myUsername
--password
********
\
--table
category
-m
1
--check-column
LastEditOn
\
--merge-key
'Reference
ID'
\
--incremental
lastmodified
\
--compression-codec=snappy
\
--target-dir
/user/hive/warehouse/myDB.db/category
\
--hive-table
category
\
--map-column-hive
LastEditOn=timestamp,CreatedOn=timestamp
\
--fields-terminated-by
'\001'
--fields-terminated-by
'\n'
Fetching child yarn jobs
tag id : oozie-564a124254f1fd53cb03553181f7e603
No child applications found
=================================================================
>>> Invoking Sqoop command line now >>>
<<< Invocation of Sqoop command completed <<<
No child hadoop job is executed.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Caused by: java.lang.SecurityException: Intercepted System.exit(1)
at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57)
at java.lang.Runtime.exit(Runtime.java:107)
at java.lang.System.exit(System.java:971)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:214)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:199)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:104)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:51)
... 16 more
Intercepting System.exit(1)
Failing Oozie Launcher, Main Class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Oozie Launcher, uploading action data to HDFS sequence file: hdfs://NameServiceOne/user/admin/oozie-oozi/0000000-190616123600049-oozie-oozi-W/sqoop-c9e7--sqoop/action-data.seq
12:41:09.783 [main] INFO org.apache.hadoop.io.compress.CodecPool - Got brand-new compressor [.deflate]
Stopping AM
12:41:09.983 [main] INFO org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl - Waiting for application to be successfully unregistered.
Callback notification attempts left 0
Callback notification trying http://masternode:11000/oozie/callback?id=0000000-190616123600049-oozie-oozi-W@sqoop-c9e7&status=FAILED
Callback notification to http://masternode:11000/oozie/callback?id=0000000-190616123600049-oozie-oozi-W@sqoop-c9e7&status=FAILED succeeded
Callback notification succeeded
06-12-2019
01:39 AM
Hello Eric ... is there any solution for this? I have CDH 6.2, and I am trying to run it from Hue.
05-29-2019
03:11 AM
Sorry, but where can I find the workflow.xml and the job.properties files? The following is the Sqoop import command I am trying to execute: sqoop import \
--connect 'jdbc:sqlserver://myURL;database=myDB' \
--username user --password pass \
--table BigDataTest -m 1 --check-column lastmodified \
--merge-key id \
--incremental lastmodified \
--compression-codec=snappy \
--target-dir /user/hive/warehouse/dwh_db.db/bigdatatest \
--hive-table bigDataTest \
--map-column-java lastmodified=String \
--class-name BigDataTest \
--fields-terminated-by '\001' --fields-terminated-by '\n'
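On the first question: Hue generates the workflow.xml and job.properties itself and stores them in the workflow's HDFS workspace. A sketch of where to look (the paths assume Hue's default workspace location and may differ on your install):

```shell
# list Hue's generated Oozie workspaces (default location, may differ)
hdfs dfs -ls /user/hue/oozie/workspaces
# print the generated workflow definition from one of the workspace dirs
hdfs dfs -cat /user/hue/oozie/workspaces/hue-oozie-*/workflow.xml
```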
05-27-2019
04:03 AM
Hello, when I try to run a Sqoop import command through Hue, the job always gets KILLED, and I get the following errors in the log: Caused by: java.lang.SecurityException: Intercepted System.exit(1)
at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57) then after that: Failing Oozie Launcher, Main Class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1] Is there any solution, and an explanation of why this is happening?
Labels: Apache Sqoop, Cloudera Hue
05-27-2019
03:36 AM
Hello, when I try to run a sqoop import command I am getting the following error: Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=butmah, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x How can I solve this? Thanks.
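The error above means the user butmah has no home directory under /user, and /user itself is only writable by hdfs. A commonly suggested fix is to create the home directory as the HDFS superuser (a sketch — the group name is an assumption and may differ on your cluster):

```shell
# create the user's HDFS home directory as the hdfs superuser
sudo -u hdfs hdfs dfs -mkdir -p /user/butmah
# hand ownership to the user so jobs run as butmah can write there
sudo -u hdfs hdfs dfs -chown butmah:butmah /user/butmah
```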
Labels: Apache Sqoop, HDFS
05-21-2019
10:54 PM
@denloe thank you. First: to answer your question, yes, the "cloudera-manager.list" is there in "/etc/apt/sources.list.d". Second: I tried to install the Cloudera Manager Agent manually, but I got this error: sudo apt-get install cloudera-manager-agent cloudera-manager-daemons
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package cloudera-manager-agent
E: Unable to locate package cloudera-manager-daemons
So I decided to configure the repository on the host manually, from scratch. First, I deleted the existing "cloudera-manager.list":
sudo rm /etc/apt/sources.list.d/cloudera-manager.list
Then I copied over the one that I have on my Cloudera Manager server, and followed what is in "Step 1: Configure a Repository":
wget https://archive.cloudera.com/cm6/6.2.0/ubuntu1604/apt/archive.key
sudo apt-key add archive.key
sudo apt-get update
After that, "Manually Install Cloudera Manager Agent Packages":
sudo apt-get install cloudera-manager-agent cloudera-manager-daemons
Then I added/modified the server_host name in "/etc/cloudera-scm-agent/config.ini" on the new host, and finally started the agent:
sudo systemctl start cloudera-scm-agent
And guess what! It is working now, and I could add the host to the cluster! I think there is something wrong with the 6.2 installer, and not just with installing the Cloudera Manager Agent, but also with installing the Oracle JDK, which fails with an error message that the Oracle JDK package does not exist! This Oracle JDK installation issue forced me to manually install OpenJDK on the new hosts, and that caused another problem: now I have Oracle JDK 1.8 on my Cloudera Manager server master node, but "openjdk version 1.8.0_212" on the other nodes, and whenever I add a new host I get a warning that the Java versions are inconsistent and that this will cause failures! So my question now is: how can I switch my Cloudera Manager server master node to "openjdk version 1.8.0_212"? Is it just a matter of manually installing OpenJDK, so that it takes the place of the existing Oracle JDK 1.8? Or do I have to do cleanup before that, and more configuration after that?
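On that last question, one commonly suggested approach is to install the same OpenJDK build on the Cloudera Manager server and point the system java alternative at it. A sketch for Ubuntu 16.04 (untested on this cluster, and Cloudera Manager's own JAVA_HOME setting may also need updating afterwards):

```shell
# install the same OpenJDK build the other nodes are running
sudo apt-get install openjdk-8-jdk
# interactively select the OpenJDK binary as the system default
sudo update-alternatives --config java
# verify the version now matches the other hosts
java -version
```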