Member since: 04-02-2019
Posts: 36
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6867 | 05-21-2019 10:54 PM
 | 10635 | 05-15-2019 10:50 PM
06-22-2019
02:05 AM
I am also searching the internet for this. I found a couple of people with the same issue and tried to follow their solutions, but unfortunately they didn't work. One of them is: http://morecoder.com/article/1097655.html
I have tried many things and changed a couple of configurations, so I am not sure whether we are on the same page or not. Anyway, this is the stderr I am getting now:
Log Type: stderr
Log Upload Time: Sat Jun 22 11:57:38 +0400 2019
Log Length: 937
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/yarn/nm/filecache/159/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/yarn/nm/filecache/23/3.0.0-cdh6.2.0-mr-framework.tar.gz/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
Try --help for usage instructions.
06-20-2019
12:41 AM
OK ... this is another try, on a different table:
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>masternode:8032</job-tracker>
<name-node>hdfs://NameServiceOne</name-node>
<command>import \
--connect 'jdbc:sqlserver://11.11.11.11;database=SQL_Training' \
--username SQL_Training_user --password SQL_Training_user \
--table BigDataTest -m 1 --check-column lastmodified \
--merge-key id \
--incremental lastmodified \
--compression-codec=snappy \
--target-dir /user/hive/warehouse/dwh_db_atlas_jrtf.db/BigDataTest \
--hive-table BigDataTest \
--map-column-hive lastmodified=timestamp \
--fields-terminated-by '\001' --fields-terminated-by '\n'</command>
<configuration />
</sqoop>
But I get the same error!
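One thing worth checking: the Oozie Sqoop action splits the <command> string on whitespace only, with no shell parsing, so the trailing backslashes and the quote characters are handed to Sqoop as literal arguments (the oozie.sqoop.args.* entries in the launcher log later in this thread show exactly that), and Sqoop then aborts with "Try --help for usage instructions." A sketch of the same action using one <arg> element per token instead, which sidesteps the quoting problem (untested here; note the second --fields-terminated-by was presumably meant to be --lines-terminated-by):
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>masternode:8032</job-tracker>
<name-node>hdfs://NameServiceOne</name-node>
<!-- one token per <arg>; no backslashes or shell quotes needed -->
<arg>import</arg>
<arg>--connect</arg>
<arg>jdbc:sqlserver://11.11.11.11;database=SQL_Training</arg>
<arg>--username</arg>
<arg>SQL_Training_user</arg>
<arg>--password</arg>
<arg>SQL_Training_user</arg>
<arg>--table</arg>
<arg>BigDataTest</arg>
<arg>-m</arg>
<arg>1</arg>
<arg>--check-column</arg>
<arg>lastmodified</arg>
<arg>--merge-key</arg>
<arg>id</arg>
<arg>--incremental</arg>
<arg>lastmodified</arg>
<arg>--compression-codec=snappy</arg>
<arg>--target-dir</arg>
<arg>/user/hive/warehouse/dwh_db_atlas_jrtf.db/BigDataTest</arg>
<arg>--hive-table</arg>
<arg>BigDataTest</arg>
<arg>--map-column-hive</arg>
<arg>lastmodified=timestamp</arg>
<arg>--fields-terminated-by</arg>
<arg>\001</arg>
<arg>--lines-terminated-by</arg>
<arg>\n</arg>
</sqoop>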
06-19-2019
11:31 PM
I hope this is what you want for the workflow:
<workflow-app name="Batch job for query-sqoop1" xmlns="uri:oozie:workflow:0.5">
<start to="sqoop-fde5"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="sqoop-fde5">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>import \
--connect 'jdbc:sqlserver://11.11.11.11;database=DBXYZ' \
--username theUser --password thePassword \
--table category -m 1 --check-column LastEditOn \
--merge-key 'Reference ID' \
--incremental lastmodified \
--compression-codec=snappy \
--target-dir /user/hive/warehouse/dwh_db_atlas_jrtf.db/category \
--hive-table category \
--map-column-hive LastEditOn=timestamp,CreatedOn=timestamp \
--fields-terminated-by '\001' --fields-terminated-by '\n'</command>
</sqoop>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>
For the job configuration, I am really not sure where to find it. The one I can reach takes a lot of scrolling and can't be captured in a screenshot anyway... so would you please give me the path to the job configuration?
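For reference, the ${jobTracker} and ${nameNode} parameters in the workflow normally come from the job.properties file submitted along with it. A minimal sketch, with values assumed from the hostnames appearing elsewhere in this thread:
nameNode=hdfs://NameServiceOne
jobTracker=masternode:8032
oozie.use.system.libpath=true
# HDFS directory that holds workflow.xml (placeholder, adjust to your deployment)
oozie.wf.application.path=${nameNode}/user/hue/oozie/deployments/<your-workflow-dir>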
06-19-2019
01:03 AM
I am sorry, but there is nothing wrong with the syntax; if I run it in the terminal it completes successfully. I suspect a security issue, because of the following lines in the log:
Caused by: java.lang.SecurityException: Intercepted System.exit(1)
at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57)
I would also like to note that this is the first attempt to run a Sqoop script in Hue after a fresh installation of CDH 6.2... so I am afraid I have missed something in the configuration, but I really can't find it 😞
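For what it's worth, that SecurityException is not itself a security failure: the Oozie launcher installs a SecurityManager precisely so it can intercept Sqoop calling System.exit(1) and record the exit code, instead of letting the container die silently. A minimal sketch of the idea (illustrative only, not Oozie's actual source):
// A SecurityManager that turns System.exit() into a catchable exception,
// similar in spirit to Oozie's LauncherSecurityManager.
public class ExitInterceptor extends SecurityManager {
    @Override
    public void checkExit(int status) {
        // Turn the exit attempt into an exception the launcher can catch.
        throw new SecurityException("Intercepted System.exit(" + status + ")");
    }
    @Override
    public void checkPermission(java.security.Permission perm) {
        // Permit everything else.
    }
}
So the interception is only the messenger; the real question is why Sqoop exits with status 1 in the first place.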
06-16-2019
01:56 AM
That is a very long log... This forum does not allow more than 50K characters, so the following is the last 50K characters of the generated log:
dfs.namenode.checkpoint.dir : file://${hadoop.tmp.dir}/dfs/namesecondary
dfs.webhdfs.rest-csrf.browser-useragents-regex : ^Mozilla.*,^Opera.*
dfs.namenode.top.windows.minutes : 1,5,25
dfs.client.use.legacy.blockreader.local : false
mapreduce.job.maxtaskfailures.per.tracker : 3
mapreduce.shuffle.max.connections : 0
net.topology.node.switch.mapping.impl : org.apache.hadoop.net.ScriptBasedMapping
hadoop.kerberos.keytab.login.autorenewal.enabled : false
yarn.client.application-client-protocol.poll-interval-ms : 200
mapreduce.fileoutputcommitter.marksuccessfuljobs : true
yarn.nodemanager.localizer.address : ${yarn.nodemanager.hostname}:8040
dfs.namenode.list.cache.pools.num.responses : 100
nfs.server.port : 2049
dfs.namenode.https-address.NameServiceOne.namenode417 : masternode:9871
hadoop.proxyuser.HTTP.hosts : *
dfs.checksum.type : CRC32C
fs.s3a.readahead.range : 64K
dfs.client.read.short.circuit.replica.stale.threshold.ms : 1800000
dfs.ha.namenodes.NameServiceOne : namenode417,namenode434
ha.zookeeper.parent-znode : /hadoop-ha
yarn.sharedcache.admin.thread-count : 1
yarn.nodemanager.resource.cpu-vcores : -1
mapreduce.jobhistory.http.policy : HTTP_ONLY
fs.s3a.attempts.maximum : 20
dfs.datanode.lazywriter.interval.sec : 60
yarn.log-aggregation.retain-check-interval-seconds : -1
yarn.resourcemanager.node-ip-cache.expiry-interval-secs : -1
yarn.timeline-service.client.fd-clean-interval-secs : 60
fs.wasbs.impl : org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure
dfs.federation.router.reader.count : 1
hadoop.ssl.keystores.factory.class : org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
hadoop.zk.num-retries : 1000
mapreduce.job.split.metainfo.maxsize : 10000000
hadoop.security.random.device.file.path : /dev/urandom
yarn.client.nodemanager-connect.max-wait-ms : 180000
yarn.app.mapreduce.client-am.ipc.max-retries : 3
dfs.namenode.snapshotdiff.allow.snap-root-descendant : true
yarn.nodemanager.container-diagnostics-maximum-size : 10000
yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage : false
dfs.namenode.ec.system.default.policy : RS-6-3-1024k
dfs.replication.max : 512
dfs.datanode.https.address : 0.0.0.0:9865
dfs.ha.standby.checkpoints : true
ipc.client.kill.max : 10
mapreduce.job.committer.setup.cleanup.needed : true
dfs.client.domain.socket.data.traffic : false
yarn.nodemanager.localizer.cache.target-size-mb : 10240
yarn.resourcemanager.admin.client.thread-count : 1
hadoop.security.group.mapping.ldap.connection.timeout.ms : 60000
yarn.timeline-service.store-class : org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
yarn.resourcemanager.nm-container-queuing.queue-limit-stdev : 1.0f
yarn.resourcemanager.zk-appid-node.split-index : 0
hadoop.tmp.dir : /tmp/hadoop-${user.name}
dfs.domain.socket.disable.interval.seconds : 1
fs.s3a.etag.checksum.enabled : false
hadoop.security.kms.client.failover.sleep.base.millis : 100
yarn.node-labels.configuration-type : centralized
fs.s3a.retry.interval : 500ms
dfs.datanode.http.internal-proxy.port : 0
yarn.timeline-service.ttl-ms : 604800000
mapreduce.task.exit.timeout.check-interval-ms : 20000
oozie.sqoop.args.7 : \
--table
oozie.sqoop.args.8 : category
mapreduce.map.speculative : false
oozie.sqoop.args.5 : --password
oozie.sqoop.args.6 : myUsername
yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms : 1000
yarn.timeline-service.recovery.enabled : false
oozie.sqoop.args.9 : -m
yarn.nodemanager.recovery.dir : ${hadoop.tmp.dir}/yarn-nm-recovery
mapreduce.job.counters.max : 120
dfs.namenode.name.cache.threshold : 10
oozie.sqoop.args.0 : import
dfs.namenode.caching.enabled : true
dfs.namenode.max.full.block.report.leases : 6
oozie.sqoop.args.3 : \
--username
yarn.nodemanager.linux-container-executor.cgroups.delete-delay-ms : 20
dfs.namenode.max.extra.edits.segments.retained : 10000
oozie.sqoop.args.4 : myUsername
dfs.webhdfs.user.provider.user.pattern : ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
yarn.webapp.ui2.enable : false
oozie.sqoop.args.1 : \
--connect
oozie.sqoop.args.2 : 'jdbc:sqlserver://myServer;database=myDB'
dfs.client.mmap.enabled : true
mapreduce.map.log.level : INFO
dfs.datanode.ec.reconstruction.threads : 8
hadoop.fuse.timer.period : 5
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms : 1000
hadoop.zk.timeout-ms : 10000
ha.health-monitor.check-interval.ms : 1000
dfs.client.hedged.read.threshold.millis : 500
yarn.resourcemanager.fs.state-store.retry-interval-ms : 1000
mapreduce.output.fileoutputformat.compress : false
yarn.sharedcache.store.in-memory.staleness-period-mins : 10080
dfs.client.write.byte-array-manager.count-limit : 2048
mapreduce.application.framework.path : hdfs://NameServiceOne//user/yarn/mapreduce/mr-framework/3.0.0-cdh6.2.0-mr-framework.tar.gz#mr-framework
hadoop.security.group.mapping.providers.combined : true
fs.AbstractFileSystem.har.impl : org.apache.hadoop.fs.HarFs
mapreduce.job.running.map.limit : 0
yarn.nodemanager.webapp.address : ${yarn.nodemanager.hostname}:8042
mapreduce.reduce.input.buffer.percent : 0.0
mapreduce.job.cache.files : hdfs://NameServiceOne/user/hue/oozie/deployments/_admin_-oozie-312-1560674439.04/lib/hive-site.xml#hive-site.xml,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-exec-core.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-security-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/slider-core-0.90.2-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/aopalliance-repackaged-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jcodings-1.0.18.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/fst-2.50.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ST4-4.0.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-shaded-protobuf.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims-0.23.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ecj-4.4.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/libthrift-0.9.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-crypto-1.0.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-server-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-schemas-3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/transaction-api-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-webapp-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/joda-time-2.9.9.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/metrics-core-3.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-hcatalog-core.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-tez.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/asm-commons-6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-container-servlet-core-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/HikariCP-java7-2.4.12.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-data-core.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hk2-utils-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-data-hive.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/stringtemplate-3.2.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/calcite-core-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/bonecp-0.8.0.RELEASE.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-core-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/htrace-core4-4.1.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/tephra-api-0.6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/curator-client-2.7.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-encoding.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/avro-ipc.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-cli.jar,hdfs://NameServiceOne/user/oozie/s
hare/lib/lib_20190521153117/sqoop/fastutil-7.2.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/guava-11.0.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-api-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-avro.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/oro-2.0.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-lang-2.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.servlet.jsp-api-2.3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/asm-tree-6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/avro.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/bcpkix-jdk15on-1.60.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/snappy-0.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-pool-1.5.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jpam-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-servlet-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-client.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/osgi-resource-locator-1.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-annotations-2.9.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/httpcore-4.4.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-archives.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/slf4j-api-1.7.25.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/tephra-hbase-compat-1.0-0.6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hsqldb-1.8.0.10.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-data-mapreduce.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-resourcemanager.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-jaas-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/calcite-linq4j-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-codec-1.9.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/gson-2.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-protocol-shaded.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-hadoop.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hk2-api-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jta-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-dbcp-1.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-common-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/aggdesigner-algorithm-6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/metrics-json-3.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/opencsv-2.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/java
x.inject-1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-column.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-core-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.ws.rs-api-2.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hk2-locator-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javolution-5.5.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/zookeeper.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-web-proxy.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-ant.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/groovy-all-2.4.11.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-hadoop-bundle.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-protocol.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/aopalliance-1.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-http-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-api-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/mssql-jdbc-6.2.1.jre7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-common-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/joni-2.1.11.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/antlr-2.7.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/apache-jstl-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-zookeeper.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-jndi-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/libfb303-0.9.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/snappy-java-1.1.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.el-3.0.1-b11.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-client-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-plus-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-httpclient-3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-shaded-netty.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/re2j-1.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/json-io-2.5.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javassist-3.20.0-GA.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-media-jaxb-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/apache-curator-2.12.0.pom#apache-curator-2.12.0.pom,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-classification.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-databind-2.9.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-replication.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-storage-api.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.servlet-api-3.1.0
.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-client-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.annotation-api-1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/curator-framework-2.7.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-shims-scheduler.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-server.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-client.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-metrics.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/bcprov-jdk15on-1.60.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-procedure.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-rewrite-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-serde.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/taglibs-standard-spec-1.2.5.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/guice-3.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/calcite-druid-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/sqoop.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.jdo-3.2.0-m3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/datanucleus-core-4.1.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/metrics-jvm-3.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/logredactor-2.0.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-web-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/apache-jsp-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/java-util-1.9.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-metrics-api.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jcommander-1.30.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ant-1.9.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-metastore.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-runner-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/HikariCP-2.6.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-compress-1.9.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/guice-assistedinject-3.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-slf4j-impl-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ehcache-3.3.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-hadoop2-compat.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/httpclient-4.5.3.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-mapreduce.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.inject-2.5.0-b32.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/oozie-sharelib-sqoop.jar,hdfs://Nam
eServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/objenesis-1.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-common-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-server-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/velocity-1.5.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/janino-2.7.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-http.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/stax-api-1.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-lang3-3.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/avatica-1.12.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/antlr-runtime-3.4.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-registry.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-server.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/leveldbjni-all-1.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-util-ajax-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jdo-api-3.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jline-2.12.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/taglibs-standard-impl-1.2.5.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-core-asl-1.9.13.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-io-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ant-launcher-1.9.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/audience-annotations-0.5.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/kite-hadoop-compatibility.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-server-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/javax.servlet.jsp-2.3.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/derby-10.14.1.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/findbugs-annotations-1.3.9-1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-discovery-core-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/xz-1.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/geronimo-jcache_1.0_spec-1.0-alpha-1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-discovery-api-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-hadoop-compat.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/datanucleus-api-jdo-4.2.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/netty-3.10.6.Final.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-xml-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/ivy-2.4.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/paranamer-2.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-compiler-2.7.6.jar,hdfs://NameServiceOne/
user/oozie/share/lib/lib_20190521153117/sqoop/jetty-client-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hbase-shaded-miscellaneous.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/avro-mapred-hadoop2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hadoop-yarn-server-applicationhistoryservice.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/websocket-servlet-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/datanucleus-rdbms-4.1.7.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/tephra-core-0.6.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-service-rpc.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jsr305-3.0.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-llap-common.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-mapper-asl-1.9.13-cloudera.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/json-20090211.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/validation-api-1.1.0.Final.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-service.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-1.2-api-2.8.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jersey-guava-2.25.1.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-zookeeper-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jackson-core-2.9.8.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-format.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/log4j-1.2.17.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/oozie-sharelib-sqoop-5.1.0-cdh6.2.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/jetty-annotations-9.3.25.v20180904.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/commons-io-2.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/parquet-jackson.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/twill-api-0.6.0-incubating.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/disruptor-3.3.6.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/sqoop/hive-orc.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/oozie/oozie-sharelib-oozie-5.1.0-cdh6.2.0.jar,hdfs://NameServiceOne/user/oozie/share/lib/lib_20190521153117/oozie/oozie-sharelib-oozie.jar
dfs.client.slow.io.warning.threshold.ms : 30000
fs.s3a.multipart.size : 100M
yarn.app.mapreduce.am.job.committer.commit-window : 10000
dfs.qjournal.new-epoch.timeout.ms : 120000
yarn.timeline-service.webapp.rest-csrf.enabled : false
hadoop.proxyuser.flume.hosts : *
dfs.namenode.edits.asynclogging : true
yarn.timeline-service.reader.class : org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl
yarn.app.mapreduce.am.staging-dir.erasurecoding.enabled : false
dfs.blockreport.incremental.intervalMsec : 0
dfs.datanode.network.counts.cache.max.size : 2147483647
dfs.namenode.https-address.NameServiceOne.namenode434 : node3:9871
yarn.timeline-service.writer.class : org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl
mapreduce.ifile.readahead : true
dfs.qjournal.get-journal-state.timeout.ms : 120000
yarn.timeline-service.entity-group-fs-store.summary-store : org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
dfs.client.socketcache.capacity : 16
fs.s3a.s3guard.ddb.table.create : false
dfs.client.retry.policy.spec : 10000,6,60000,10
mapreduce.output.fileoutputformat.compress.codec : org.apache.hadoop.io.compress.DefaultCodec
fs.s3a.socket.recv.buffer : 8192
dfs.datanode.fsdatasetcache.max.threads.per.volume : 4
dfs.namenode.reencrypt.batch.size : 1000
yarn.sharedcache.store.in-memory.initial-delay-mins : 10
mapreduce.jobhistory.webapp.address : masternode:19888
fs.adl.impl : org.apache.hadoop.fs.adl.AdlFileSystem
fs.AbstractFileSystem.gs.impl : com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS
mapreduce.task.userlog.limit.kb : 0
fs.s3a.connection.ssl.enabled : true
yarn.router.rmadmin.interceptor-class.pipeline : org.apache.hadoop.yarn.server.router.rmadmin.DefaultRMAdminRequestInterceptor
yarn.sharedcache.webapp.address : 0.0.0.0:8788
hadoop.fuse.connection.timeout : 300
dfs.http.client.retry.policy.spec : 10000,6,60000,10
yarn.resourcemanager.rm.container-allocation.expiry-interval-ms : 600000
ipc.server.max.connections : 0
yarn.app.mapreduce.am.resource.mb : 3072
hadoop.security.groups.cache.secs : 300
dfs.datanode.peer.stats.enabled : false
dfs.replication : 3
mapreduce.shuffle.transfer.buffer.size : 131072
dfs.namenode.audit.log.async : false
hadoop.security.group.mapping.ldap.directory.search.timeout : 10000
dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold : 10737418240
dfs.datanode.disk.check.timeout : 10m
yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts : 3
fs.s3a.committer.threads : 8
dfs.checksum.combine.mode : MD5MD5CRC
yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs : 3600
yarn.scheduler.maximum-allocation-vcores : 6
yarn.nodemanager.sleep-delay-before-sigkill.ms : 250
fs.AbstractFileSystem.abfs.impl : org.apache.hadoop.fs.azurebfs.Abfs
mapreduce.job.acl-modify-job :
fs.automatic.close : true
fs.azure.sas.expiry.period : 90d
dfs.qjm.operations.timeout : 60s
hadoop.proxyuser.httpfs.hosts : *
dfs.namenode.stale.datanode.minimum.interval : 3
dfs.namenode.reencrypt.edek.threads : 10
dfs.federation.router.store.membership.expiration : 300000
hadoop.security.groups.cache.background.reload.threads : 3
mapreduce.input.fileinputformat.list-status.num-threads : 1
hadoop.security.group.mapping.ldap.posix.attr.gid.name : gidNumber
dfs.namenode.acls.enabled : false
dfs.client.short.circuit.replica.stale.threshold.ms : 1800000
dfs.namenode.resource.du.reserved : 104857600
dfs.federation.router.connection.clean.ms : 10000
dfs.client.server-defaults.validity.period.ms : 3600000
dfs.federation.router.metrics.class : org.apache.hadoop.hdfs.server.federation.metrics.FederationRPCPerformanceMonitor
mapreduce.shuffle.listen.queue.size : 128
mapreduce.jobhistory.intermediate-done-dir : ${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate
mapreduce.client.libjars.wildcard : true
dfs.federation.router.cache.ttl : 60000
yarn.nodemanager.recovery.compaction-interval-secs : 3600
dfs.namenode.edits.noeditlogchannelflush : false
mapreduce.reduce.shuffle.input.buffer.percent : 0.70
yarn.http.policy : HTTP_ONLY
mapreduce.map.maxattempts : 4
dfs.namenode.audit.loggers : default
io.serializations : org.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
hadoop.security.groups.cache.warn.after.ms : 5000
dfs.client.write.byte-array-manager.count-reset-time-period-ms : 10000
yarn.nodemanager.webapp.rest-csrf.custom-header : X-XSRF-Header
yarn.app.mapreduce.am.admin.user.env : LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH
dfs.namenode.snapshot.capture.openfiles : true
yarn.node-labels.fs-store.impl.class : org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore
hadoop.http.cross-origin.allowed-methods : GET,POST,HEAD
dfs.qjournal.queued-edits.limit.mb : 10
mapreduce.jobhistory.webapp.rest-csrf.enabled : false
dfs.http.policy : HTTP_ONLY
dfs.balancer.max-size-to-move : 10737418240
dfs.datanode.sync.behind.writes.in.background : false
hadoop.zk.acl : world:anyone:rwcda
yarn.nodemanager.container.stderr.pattern : {*stderr*,*STDERR*}
dfs.namenode.reencrypt.throttle.limit.updater.ratio : 1.0
mapreduce.cluster.local.dir : ${hadoop.tmp.dir}/mapred/local
hadoop.kerberos.kinit.command : kinit
dfs.namenode.secondary.https-address : 0.0.0.0:9869
dfs.namenode.metrics.logger.period.seconds : 600
dfs.block.access.token.lifetime : 600
dfs.ha.automatic-failover.enabled.NameServiceOne : true
dfs.namenode.delegation.token.max-lifetime : 604800000
dfs.datanode.drop.cache.behind.writes : false
dfs.mover.address : 0.0.0.0:0
dfs.block.placement.ec.classname : org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant
dfs.namenode.num.extra.edits.retained : 1000000
ipc.client.connect.max.retries.on.timeouts : 45
fs.viewfs.rename.strategy : SAME_MOUNTPOINT
fs.client.resolve.topology.enabled : false
hadoop.proxyuser.hive.hosts : *
yarn.resourcemanager.node-labels.provider.fetch-interval-ms : 1800000
yarn.nodemanager.container-metrics.enable : true
mapreduce.job.map.output.collector.class : org.apache.hadoop.mapred.MapTask$MapOutputBuffer
fs.s3a.fast.upload.buffer : disk
ha.health-monitor.connect-retry-interval.ms : 1000
dfs.namenode.edekcacheloader.initial.delay.ms : 3000
dfs.edit.log.transfer.bandwidthPerSec : 0
dfs.ha.tail-edits.in-progress : false
dfs.federation.router.heartbeat.interval : 5000
ssl.client.truststore.reload.interval : 10000
dfs.client.datanode-restart.timeout : 30s
io.mapfile.bloom.size : 1048576
hadoop.security.kms.client.authentication.retry-count : 1
dfs.client-write-packet-size : 65536
fs.ftp.data.connection.mode : ACTIVE_LOCAL_DATA_CONNECTION_MODE
fs.swift.impl : org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem
yarn.resourcemanager.webapp.rest-csrf.methods-to-ignore : GET,OPTIONS,HEAD
mapreduce.job.max.map : -1
yarn.app.mapreduce.shuffle.log.backups : 0
ftp.blocksize : 67108864
dfs.namenode.kerberos.principal.pattern : *
yarn.resourcemanager.scheduler.monitor.enable : false
dfs.webhdfs.socket.connect-timeout : 60s
dfs.namenode.replication.max-streams : 2
nfs.allow.insecure.ports : true
yarn.sharedcache.nm.uploader.thread-count : 20
dfs.federation.router.admin.enable : true
yarn.app.mapreduce.client.job.retry-interval : 2000
yarn.scheduler.configuration.store.max-logs : 1000
hadoop.security.authorization : false
yarn.timeline-service.version : 1.0f
yarn.am.liveness-monitor.expiry-interval-ms : 600000
fs.har.impl.disable.cache : true
hadoop.proxyuser.hdfs.hosts : *
mapreduce.job.reduce.slowstart.completedmaps : 0.8
yarn.timeline-service.leveldb-timeline-store.path : ${hadoop.tmp.dir}/yarn/timeline
dfs.namenode.upgrade.domain.factor : ${dfs.replication}
mapreduce.jobhistory.minicluster.fixed.ports : false
mapreduce.application.classpath : $HADOOP_CLIENT_CONF_DIR,$PWD/mr-framework/*,$MR2_CLASSPATH
yarn.resourcemanager.delegation.token.max-lifetime : 604800000
yarn.resourcemanager.ha.automatic-failover.enabled : true
mapreduce.reduce.java.opts : -Djava.net.preferIPv4Stack=true
dfs.datanode.socket.write.timeout : 480000
dfs.namenode.accesstime.precision : 3600000
dfs.namenode.redundancy.considerLoad.factor : 2.0
yarn.resourcemanager.store.class : org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
io.mapfile.bloom.error.rate : 0.005
yarn.nodemanager.webapp.rest-csrf.enabled : false
yarn.timeline-service.leveldb-state-store.path : ${hadoop.tmp.dir}/yarn/timeline
hadoop.proxyuser.hive.groups : *
dfs.federation.router.rpc-address : 0.0.0.0:8888
fs.s3a.committer.staging.unique-filenames : true
dfs.namenode.support.allow.format : true
yarn.scheduler.configuration.zk-store.parent-path : /confstore
dfs.content-summary.limit : 5000
yarn.timeline-service.writer.flush-interval-seconds : 60
yarn.nodemanager.container-executor.class : org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
dfs.namenode.posix.acl.inheritance.enabled : true
dfs.datanode.outliers.report.interval : 30m
hadoop.security.kms.client.encrypted.key.cache.low-watermark : 0.3f
dfs.namenode.top.enabled : true
yarn.app.mapreduce.shuffle.log.separate : true
hadoop.user.group.static.mapping.overrides : dr.who=;
dfs.federation.router.http-address : 0.0.0.0:50071
fs.s3a.retry.throttle.interval : 1000ms
yarn.nodemanager.amrmproxy.address : 0.0.0.0:8049
mapreduce.jobhistory.webapp.rest-csrf.custom-header : X-XSRF-Header
yarn.webapp.xfs-filter.enabled : true
dfs.client.cached.conn.retry : 3
dfs.client.key.provider.cache.expiry : 864000000
dfs.namenode.path.based.cache.refresh.interval.ms : 30000
yarn.nodemanager.collector-service.thread-count : 5
dfs.block.replicator.classname : org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
dfs.namenode.fs-limits.max-directory-items : 1048576
dfs.ha.log-roll.period : 120s
yarn.nodemanager.runtime.linux.docker.capabilities : CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE
yarn.nodemanager.distributed-scheduling.enabled : false
ipc.client.fallback-to-simple-auth-allowed : false
yarn.minicluster.fixed.ports : false
yarn.nodemanager.remote-app-log-dir : /tmp/logs
yarn.timeline-service.entity-group-fs-store.scan-interval-seconds : 60
dfs.xframe.enabled : true
yarn.nodemanager.resource.percentage-physical-cpu-limit : 100
mapreduce.job.tags : oozie-564a124254f1fd53cb03553181f7e603
dfs.namenode.fs-limits.max-xattr-size : 16384
dfs.datanode.http.address : 0.0.0.0:9864
dfs.namenode.blocks.per.postponedblocks.rescan : 10000
fs.s3a.s3guard.cli.prune.age : 86400000
dfs.web.authentication.filter : org.apache.hadoop.hdfs.web.AuthFilter
dfs.namenode.maintenance.replication.min : 1
hadoop.jetty.logs.serve.aliases : true
dfs.webhdfs.ugi.expire.after.access : 600000
dfs.namenode.max.op.size : 52428800
mapreduce.jobhistory.admin.acl : *
mapreduce.job.reducer.unconditional-preempt.delay.sec : 300
yarn.app.mapreduce.am.hard-kill-timeout-ms : 10000
yarn.resourcemanager.display.per-user-apps : false
yarn.resourcemanager.node-removal-untracked.timeout-ms : 60000
yarn.resourcemanager.webapp.address : masternode:8088
mapreduce.jobhistory.recovery.enable : false
yarn.sharedcache.store.in-memory.check-period-mins : 720
dfs.client.test.drop.namenode.response.number : 0
fs.df.interval : 60000
fs.s3a.assumed.role.session.duration : 30m
mapreduce.job.cache.limit.max-single-resource-mb : 0
yarn.timeline-service.enabled : false
dfs.disk.balancer.block.tolerance.percent : 10
dfs.webhdfs.netty.high.watermark : 65535
mapreduce.task.profile : false
hadoop.http.cross-origin.allowed-headers : X-Requested-With,Content-Type,Accept,Origin
yarn.router.webapp.address : 0.0.0.0:8089
dfs.datanode.balance.max.concurrent.moves : 50
yarn.nodemanager.hostname : 0.0.0.0
mapreduce.task.exit.timeout : 60000
yarn.resourcemanager.nm-container-queuing.max-queue-length : 15
mapreduce.job.token.tracking.ids.enabled : false
yarn.scheduler.increment-allocation-mb : 512
fs.s3a.assumed.role.credentials.provider : org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
fs.azure.authorization.caching.enable : true
hadoop.security.kms.client.failover.sleep.max.millis : 2000
dfs.client.mmap.retry.timeout.ms : 300000
yarn.resourcemanager.webapp.rest-csrf.custom-header : X-XSRF-Header
yarn.resourcemanager.nm-container-queuing.max-queue-wait-time-ms : 100
mapreduce.jobhistory.move.thread-count : 3
dfs.permissions.enabled : true
fs.AbstractFileSystem.hdfs.impl : org.apache.hadoop.fs.Hdfs
yarn.nodemanager.container-localizer.log.level : INFO
hadoop.http.filter.initializers : org.apache.hadoop.http.lib.StaticUserWebFilter
yarn.timeline-service.http-authentication.simple.anonymous.allowed : true
yarn.nodemanager.runtime.linux.docker.allowed-container-networks : host,none,bridge
dfs.qjournal.accept-recovery.timeout.ms : 120000
yarn.sharedcache.client-server.thread-count : 50
fs.s3a.s3guard.ddb.max.retries : 9
fs.s3a.committer.magic.enabled : false
yarn.resourcemanager.resource-tracker.address : masternode:8031
mapreduce.jobhistory.jobname.limit : 50
dfs.domain.socket.path : /var/run/hdfs-sockets/dn
dfs.namenode.decommission.blocks.per.interval : 500000
dfs.qjournal.write-txns.timeout.ms : 20000
rpc.metrics.quantile.enable : false
yarn.federation.subcluster-resolver.class : org.apache.hadoop.yarn.server.federation.resolver.DefaultSubClusterResolverImpl
dfs.namenode.read-lock-reporting-threshold-ms : 5000
mapreduce.task.timeout : 600000
yarn.nodemanager.resource.memory-mb : -1
dfs.datanode.failed.volumes.tolerated : 0
yarn.nodemanager.disk-health-checker.min-healthy-disks : 0.25
mapreduce.framework.name : yarn
mapreduce.fileoutputcommitter.algorithm.version : 2
yarn.router.clientrm.interceptor-class.pipeline : org.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptor
yarn.sharedcache.nested-level : 3
fs.s3a.connection.timeout : 200000
hadoop.caller.context.signature.max.size : 40
hadoop.security.dns.log-slow-lookups.enabled : false
mapreduce.jobhistory.webapp.https.address : masternode:19890
file.client-write-packet-size : 65536
fs.s3a.s3guard.ddb.table.capacity.read : 500
ipc.client.ping : true
hadoop.proxyuser.oozie.hosts : *
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms : 30000
dfs.client.failover.max.attempts : 15
dfs.balancer.max-no-move-interval : 60000
yarn.nodemanager.opportunistic-containers-use-pause-for-preemption : false
yarn.nodemanager.webapp.cross-origin.enabled : false
mapreduce.job.encrypted-intermediate-data : false
dfs.client.read.shortcircuit.streams.cache.expiry.ms : 300000
yarn.minicluster.control-resource-monitoring : false
dfs.disk.balancer.enabled : false
dfs.webhdfs.oauth2.enabled : false
yarn.nodemanager.health-checker.script.timeout-ms : 1200000
yarn.resourcemanager.fs.state-store.num-retries : 0
hadoop.ssl.require.client.cert : false
mapreduce.jobhistory.keytab : /etc/security/keytab/jhs.service.keytab
hadoop.security.uid.cache.secs : 14400
yarn.resourcemanager.ha.automatic-failover.zk-base-path : /yarn-leader-election
yarn.intermediate-data-encryption.enable : false
mapreduce.job.speculative.speculative-cap-running-tasks : 0.1
dfs.datanode.block.id.layout.upgrade.threads : 12
dfs.client.context : default
yarn.system-metrics-publisher.enabled : false
dfs.namenode.delegation.token.renew-interval : 86400000
yarn.timeline-service.entity-group-fs-store.app-cache-size : 10
fs.AbstractFileSystem.s3a.impl : org.apache.hadoop.fs.s3a.S3A
mapreduce.job.redacted-properties : fs.s3a.access.key,fs.s3a.secret.key,fs.adl.oauth2.credential,dfs.adls.oauth2.credential,fs.azure.account.oauth2.client.secret
yarn.client.load.resource-types.from-server : false
ipc.client.tcpnodelay : true
hadoop.proxyuser.httpfs.groups : *
yarn.resourcemanager.metrics.runtime.buckets : 60,300,1440
dfs.blockreport.intervalMsec : 21600000
dfs.datanode.oob.timeout-ms : 1500,0,0,0
yarn.client.application-client-protocol.poll-timeout-ms : -1
zlib.compress.level : DEFAULT_COMPRESSION
mapreduce.job.sharedcache.mode : disabled
io.map.index.skip : 0
mapreduce.job.hdfs-servers : ${fs.defaultFS}
mapreduce.map.output.compress : true
hadoop.security.kms.client.encrypted.key.cache.num.refill.threads : 2
dfs.namenode.edekcacheloader.interval.ms : 1000
mapreduce.task.merge.progress.records : 10000
yarn.nodemanager.aux-services.mapreduce_shuffle.class : org.apache.hadoop.mapred.ShuffleHandler
dfs.namenode.missing.checkpoint.periods.before.shutdown : 3
tfile.fs.output.buffer.size : 262144
dfs.client.failover.connection.retries : 0
fs.du.interval : 600000
dfs.edit.log.transfer.timeout : 30000
dfs.namenode.top.window.num.buckets : 10
dfs.data.transfer.server.tcpnodelay : true
hadoop.zk.retry-interval-ms : 1000
yarn.sharedcache.uploader.server.address : 0.0.0.0:8046
dfs.http.client.failover.max.attempts : 15
fs.s3a.socket.send.buffer : 8192
dfs.client.block.write.locateFollowingBlock.retries : 7
hadoop.registry.zk.quorum : localhost:2181
mapreduce.jvm.system-properties-to-log : os.name,os.version,java.home,java.runtime.version,java.vendor,java.version,java.vm.name,java.class.path,java.io.tmpdir,user.dir,user.name
hadoop.http.cross-origin.allowed-origins : *
dfs.namenode.enable.retrycache : true
dfs.datanode.du.reserved : 0
hadoop.registry.system.acls : sasl:yarn@, sasl:mapred@, sasl:hdfs@
yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidia-docker-v1.endpoint : http://localhost:3476/v1.0/docker/cli
mapreduce.job.encrypted-intermediate-data.buffer.kb : 128
dfs.data.transfer.client.tcpnodelay : true
yarn.resourcemanager.webapp.xfs-filter.xframe-options : SAMEORIGIN
mapreduce.admin.user.env : LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH
mapreduce.task.profile.reduce.params : ${mapreduce.task.profile.params}
mapreduce.reduce.memory.mb : 0
hadoop.caller.context.enabled : false
hadoop.http.authentication.kerberos.principal : HTTP/_HOST@LOCALHOST
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb : 0
dfs.qjournal.prepare-recovery.timeout.ms : 120000
dfs.datanode.transferTo.allowed : true
oozie.action.rootlogger.log.level : INFO
hadoop.security.sensitive-config-keys :
secret$
password$
ssl.keystore.pass$
fs.s3.*[Ss]ecret.?[Kk]ey
fs.s3a.*.server-side-encryption.key
fs.azure.account.key.*
credential$
oauth.*secret
oauth.*password
oauth.*token
hadoop.security.sensitive-config-keys
mapreduce.client.completion.pollinterval : 5000
dfs.namenode.name.dir.restore : false
dfs.namenode.full.block.report.lease.length.ms : 300000
dfs.namenode.secondary.http-address : 0.0.0.0:9868
hadoop.http.logs.enabled : true
hadoop.security.group.mapping.ldap.read.timeout.ms : 60000
yarn.resourcemanager.max-log-aggregation-diagnostics-in-memory : 10
dfs.namenode.delegation.token.always-use : false
yarn.resourcemanager.webapp.https.address : masternode:8090
fs.s3a.retry.throttle.limit : ${fs.s3a.attempts.maximum}
dfs.client.read.striped.threadpool.size : 18
mapreduce.job.cache.limit.max-resources : 0
hadoop.proxyuser.HTTP.groups : *
--------------------
Setting up log4j2
log4j2 configuration file created at /yarn/nm/usercache/admin/appcache/application_1560674082717_0001/container_1560674082717_0001_01_000001/sqoop-log4j2.xml
Sqoop command arguments :
import
\
--connect
'jdbc:sqlserver://myServer;database=myDB'
\
--username
myUsername
--password
********
\
--table
category
-m
1
--check-column
LastEditOn
\
--merge-key
'Reference
ID'
\
--incremental
lastmodified
\
--compression-codec=snappy
\
--target-dir
/user/hive/warehouse/myDB.db/category
\
--hive-table
category
\
--map-column-hive
LastEditOn=timestamp,CreatedOn=timestamp
\
--fields-terminated-by
'\001'
--fields-terminated-by
'\n'
Fetching child yarn jobs
tag id : oozie-564a124254f1fd53cb03553181f7e603
No child applications found
=================================================================
>>> Invoking Sqoop command line now >>>
<<< Invocation of Sqoop command completed <<<
No child hadoop job is executed.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410)
at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55)
at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Caused by: java.lang.SecurityException: Intercepted System.exit(1)
at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57)
at java.lang.Runtime.exit(Runtime.java:107)
at java.lang.System.exit(System.java:971)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:214)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:199)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:104)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:51)
... 16 more
Intercepting System.exit(1)
Failing Oozie Launcher, Main Class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Oozie Launcher, uploading action data to HDFS sequence file: hdfs://NameServiceOne/user/admin/oozie-oozi/0000000-190616123600049-oozie-oozi-W/sqoop-c9e7--sqoop/action-data.seq
12:41:09.783 [main] INFO org.apache.hadoop.io.compress.CodecPool - Got brand-new compressor [.deflate]
Stopping AM
12:41:09.983 [main] INFO org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl - Waiting for application to be successfully unregistered.
Callback notification attempts left 0
Callback notification trying http://masternode:11000/oozie/callback?id=0000000-190616123600049-oozie-oozi-W@sqoop-c9e7&status=FAILED
Callback notification to http://masternode:11000/oozie/callback?id=0000000-190616123600049-oozie-oozi-W@sqoop-c9e7&status=FAILED succeeded
Callback notification succeeded
06-12-2019
01:39 AM
Hello Eric... any solution for this? I have CDH 6.2, and I am trying to run it from Hue.
05-29-2019
03:11 AM
Sorry, but where can I find the workflow.xml and the job.properties files? The following is the Sqoop import command I am trying to execute:
sqoop import \
--connect 'jdbc:sqlserver://myURL;database=myDB' \
--username user --password pass \
--table BigDataTest -m 1 --check-column lastmodified \
--merge-key id \
--incremental lastmodified \
--compression-codec=snappy \
--target-dir /user/hive/warehouse/dwh_db.db/bigdatatest \
--hive-table bigDataTest \
--map-column-java lastmodified=String \
--class-name BigDataTest \
--fields-terminated-by '\001' --fields-terminated-by '\n'
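Regarding where the files live: when the workflow is built in Hue, Hue deploys it to HDFS. The launcher log elsewhere in this thread shows a deployment under /user/hue/oozie/deployments/, so something like the following should locate and print the generated workflow.xml (the exact directory name is a placeholder):
hdfs dfs -ls /user/hue/oozie/deployments/
hdfs dfs -cat /user/hue/oozie/deployments/<your-workflow-dir>/workflow.xml
As an aside, the command above passes --fields-terminated-by twice; the second occurrence was presumably meant to be --lines-terminated-by '\n'.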
05-27-2019
04:03 AM
Hello, when I try to run a Sqoop import command through Hue, the job is always KILLED, and I get the following errors in the log:
Caused by: java.lang.SecurityException: Intercepted System.exit(1)
at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57)
and then, after that:
Failing Oozie Launcher, Main Class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Any solution, or an explanation of why this is happening?
05-27-2019
03:36 AM
Hello, when I try to run a Sqoop import command I get the following error:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=butmah, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
How can I solve this? Thanks.
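The message says that user butmah has no writable home directory in HDFS: /user is owned by hdfs:supergroup with mode drwxr-xr-x. The usual fix is to create the home directory as the HDFS superuser; a sketch (the group name here is an assumption, adjust as needed):
sudo -u hdfs hdfs dfs -mkdir -p /user/butmah
sudo -u hdfs hdfs dfs -chown butmah:butmah /user/butmah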
05-21-2019
10:54 PM
@denloe thank you. First, to answer your question: yes, the "cloudera-manager.list" is there in "/etc/apt/sources.list.d". Second, I tried to install the Cloudera Manager Agent manually, but I got the following error:
sudo apt-get install cloudera-manager-agent cloudera-manager-daemons
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package cloudera-manager-agent
E: Unable to locate package cloudera-manager-daemons
So I decided to configure the repository on the host manually, from scratch. I deleted the existing "cloudera-manager.list":
sudo rm /etc/apt/sources.list.d/cloudera-manager.list
then copied over the one I have on my Cloudera Manager server, and followed what is in "Step 1: Configure a Repository":
wget https://archive.cloudera.com/cm6/6.2.0/ubuntu1604/apt/archive.key
sudo apt-key add archive.key
sudo apt-get update
After that, "Manually Install Cloudera Manager Agent Packages":
sudo apt-get install cloudera-manager-agent cloudera-manager-daemons
then added/modified the server_host name in "/etc/cloudera-scm-agent/config.ini" on the new host, and finally started the agent:
sudo systemctl start cloudera-scm-agent
And guess what! It is working now, and I could add the host to the cluster!
I think there is something wrong with the 6.2 installer, and not just for the Cloudera Manager Agent: installing the Oracle JDK also fails, with an error message that the Oracle JDK package does not exist! This Oracle JDK issue forced me to manually install OpenJDK on the new hosts, and that caused another problem: I now have Oracle JDK 1.8 on my Cloudera Manager server master node, but "openjdk version 1.8.0_212" on the other nodes, and whenever I add a new host I get a warning that the Java versions are inconsistent and that this will cause failures! So my question is: how can I switch my Cloudera Manager server master node to "openjdk version 1.8.0_212"? Is it enough to manually install OpenJDK so that it takes the place of the existing Oracle JDK 1.8, or do I have to do some cleanup before that and more configuration after?
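A possible sequence for that switch, sketched here and not verified on this cluster (oracle-j2sdk1.8 is the package name the CM 6 installer typically uses, so confirm with dpkg first):
dpkg -l | grep -i jdk                 # confirm which JDK packages are actually installed
sudo apt-get install openjdk-8-jdk    # install OpenJDK 8 on the master node
sudo apt-get remove oracle-j2sdk1.8   # then remove the Oracle JDK package
# afterwards, make sure JAVA_HOME (e.g. in /etc/default/cloudera-scm-server)
# points at the OpenJDK install before restarting Cloudera Manager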
05-21-2019
03:30 AM
Mr. Harsh, would you please have a look at my reply? Thanks.
05-21-2019
03:21 AM
Hello Laszlo Zeke, I think we have a similar case with 6.2. I opened a question about it, but no one has replied: https://community.cloudera.com/t5/Cloudera-Manager-Installation/Failed-to-complete-installation-on-host-XYZ/m-p/90690#M16705 Would you please have a look and add your comment as an expert? Thanks.
... View more
05-21-2019
03:17 AM
Hello GautamG, I think we have a similar issue here with 6.2. I have opened a question about it at the following link: https://community.cloudera.com/t5/Cloudera-Manager-Installation/Failed-to-complete-installation-on-host-XYZ/m-p/90690#M16705 Would you please have a look at it? Thanks,
... View more
05-19-2019
09:10 PM
More information: the following is the "/tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.log" on the failing host:
using SSH_CLIENT to get the SCM hostname: 10.4.34.22 37758 22
opening logging file descriptor
###CLOUDERA_SCM### SCRIPT_START
###CLOUDERA_SCM### TAKE_LOCK
BEGIN flock 4
END (0)
###CLOUDERA_SCM### DETECT_ROOT
effective UID is 1000
BEGIN which pbrun
END (1)
BEGIN sudo -S id
uid=0(root) gid=0(root) groups=0(root)
END (0)
Using 'sudo ' to acquire root privileges
###CLOUDERA_SCM### DETECT_DISTRO
BEGIN grep 'Ubuntu' /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"
END (0)
BEGIN grep DISTRIB_CODENAME /etc/lsb-release
DISTRIB_CODENAME=xenial
END (0)
BEGIN echo DISTRIB_CODENAME=xenial | cut -d = -f 2
xenial
END (0)
###CLOUDERA_SCM### DETECT_SCM
BEGIN host -t PTR 10.4.34.22
Host 22.34.4.10.in-addr.arpa. not found: 3(NXDOMAIN)
END (1)
BEGIN which python
/usr/bin/python
END (0)
BEGIN python -c 'import socket; import sys; s = socket.socket(socket.AF_INET); s.settimeout(5.0); s.connect((sys.argv[1], int(sys.argv[2]))); s.close();' 10.4.34.22 7182
END (0)
BEGIN which wget
/usr/bin/wget
END (0)
BEGIN wget -qO- -T 1 -t 1 http://169.254.169.254/latest/meta-data/public-hostname && /bin/echo
END (4)
###CLOUDERA_SCM### REPO_INSTALL
Checking https://archive.cloudera.com/cm6/6.2.0/ubuntu1604/apt/dists/
Checking https://archive.cloudera.com/cm6/6.2.0/dists/
Using installing repository file /tmp/scm_prepare_node.vQZe0yDf/repos/ubuntu_xenial/cloudera-manager.list
repository file /tmp/scm_prepare_node.vQZe0yDf/repos/ubuntu_xenial/cloudera-manager.list installed
installing apt keys
BEGIN sudo apt-key add /tmp/scm_prepare_node.vQZe0yDf/customGPG
OK
END (0)
installing priority file /tmp/scm_prepare_node.vQZe0yDf/ubuntu_xenial
priority file /tmp/scm_prepare_node.vQZe0yDf/ubuntu_xenial installed
###CLOUDERA_SCM### REFRESH_METADATA
BEGIN sudo apt-get update
Hit:1 http://security.ubuntu.com/ubuntu xenial-security InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease
Reading package lists...
END (0)
BEGIN sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://security.ubuntu.com/ubuntu xenial-security InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease
Reading package lists...
END (0)
###CLOUDERA_SCM### PACKAGE_INSTALL cloudera-manager-agent
BEGIN sudo dpkg -l cloudera-manager-agent | grep -E '^ii[[:space:]]*cloudera-manager-agent[[:space:]]*'
dpkg-query: no packages found matching cloudera-manager-agent
END (1)
BEGIN sudo apt-cache show cloudera-manager-agent
E: No packages found
END (100)
cloudera-manager-agent must have Version=6.2.0 and Build=968826, exiting
closing logging file descriptor
and this is what I have under the "/tmp/scm_prepare_node.vQZe0yDf/" folder:
... View more
05-19-2019
09:07 PM
Mr. Harsh, please find the following as requested:
using SSH_CLIENT to get the SCM hostname: 10.4.34.22 37758 22
opening logging file descriptor
###CLOUDERA_SCM### SCRIPT_START
###CLOUDERA_SCM### TAKE_LOCK
BEGIN flock 4
END (0)
###CLOUDERA_SCM### DETECT_ROOT
effective UID is 1000
BEGIN which pbrun
END (1)
BEGIN sudo -S id
uid=0(root) gid=0(root) groups=0(root)
END (0)
Using 'sudo ' to acquire root privileges
###CLOUDERA_SCM### DETECT_DISTRO
BEGIN grep 'Ubuntu' /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"
END (0)
BEGIN grep DISTRIB_CODENAME /etc/lsb-release
DISTRIB_CODENAME=xenial
END (0)
BEGIN echo DISTRIB_CODENAME=xenial | cut -d = -f 2
xenial
END (0)
###CLOUDERA_SCM### DETECT_SCM
BEGIN host -t PTR 10.4.34.22
Host 22.34.4.10.in-addr.arpa. not found: 3(NXDOMAIN)
END (1)
BEGIN which python
/usr/bin/python
END (0)
BEGIN python -c 'import socket; import sys; s = socket.socket(socket.AF_INET); s.settimeout(5.0); s.connect((sys.argv[1], int(sys.argv[2]))); s.close();' 10.4.34.22 7182
END (0)
BEGIN which wget
/usr/bin/wget
END (0)
BEGIN wget -qO- -T 1 -t 1 http://169.254.169.254/latest/meta-data/public-hostname && /bin/echo
END (4)
###CLOUDERA_SCM### REPO_INSTALL
Checking https://archive.cloudera.com/cm6/6.2.0/ubuntu1604/apt/dists/
Checking https://archive.cloudera.com/cm6/6.2.0/dists/
Using installing repository file /tmp/scm_prepare_node.vQZe0yDf/repos/ubuntu_xenial/cloudera-manager.list
repository file /tmp/scm_prepare_node.vQZe0yDf/repos/ubuntu_xenial/cloudera-manager.list installed
installing apt keys
BEGIN sudo apt-key add /tmp/scm_prepare_node.vQZe0yDf/customGPG
OK
END (0)
installing priority file /tmp/scm_prepare_node.vQZe0yDf/ubuntu_xenial
priority file /tmp/scm_prepare_node.vQZe0yDf/ubuntu_xenial installed
###CLOUDERA_SCM### REFRESH_METADATA
BEGIN sudo apt-get update
Hit:1 http://security.ubuntu.com/ubuntu xenial-security InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease
Reading package lists...
END (0)
BEGIN sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://security.ubuntu.com/ubuntu xenial-security InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease
Reading package lists...
END (0)
###CLOUDERA_SCM### PACKAGE_INSTALL cloudera-manager-agent
BEGIN sudo dpkg -l cloudera-manager-agent | grep -E '^ii[[:space:]]*cloudera-manager-agent[[:space:]]*'
dpkg-query: no packages found matching cloudera-manager-agent
END (1)
BEGIN sudo apt-cache show cloudera-manager-agent
E: No packages found
END (100)
cloudera-manager-agent must have Version=6.2.0 and Build=968826, exiting
closing logging file descriptor
and this is what I have under the "/tmp/scm_prepare_node.vQZe0yDf/" folder:
... View more
05-19-2019
12:33 AM
Hello Harsh J, I am having a similar (or the same) issue. I am trying to add a new host, actually the first host (after the CM) in a new cluster, but I end up with this error: "Command 30(GlobalHostInstall) has completed. finalstate:FINISHED, success:false, msg:Failed to complete installation". The following is the "/var/log/cloudera-scm-server/cloudera-scm-server.log" since I started the process:
10:58:25.348 AM INFO ServiceHandlerRegistry Executing command GlobalHostInstall GlobalHostInstallCommandArgs{sshPort=22, userName=butmah, password=REDACTED, passphrase=REDACTED, privateKey=REDACTED, parallelInstallCount=10, cmRepoUrl=null, gpgKeyCustomUrl=null, gpgKeyOverrideBundle=<none>, unlimitedJCE=false, javaInstallStrategy=NONE, agentUserMode=ROOT, cdhVersion=-1, cdhRelease=NONE, cdhRepoUrl=null, buildCertCommand=, sslCertHostname=null, reqId=25, skipPackageInstall=false, skipCloudConfig=false, proxyProtocol=HTTP, proxyServer=10.4.32.3, proxyPort=8080, proxyUserName=null, proxyPassword=REDACTED, hosts=[node1, node2, node3, node4, node5], existingHosts=[]}.
10:58:25.349 AM INFO CmdStep Executing command work: Execute 1 steps in sequence
10:58:25.349 AM INFO CmdStep Executing command work: Install on 1 hosts.
10:58:25.349 AM INFO CmdStep Executing command work: Install on node5.
10:58:25.350 AM INFO NodeConfiguratorService Adding password-based configurator for node5
10:58:25.350 AM INFO NodeConfiguratorService Submitted configurator for node5 with id 30
10:58:25.357 AM INFO NodeConfiguratorProgress node5: Transitioning from INIT (PT0.008S) to CONNECT
10:58:25.359 AM INFO TransportImpl Client identity string: SSH-2.0-SSHJ_0_14_0
10:58:25.361 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installretry.json, Status:200
10:58:25.368 AM INFO TransportImpl Server identity string: SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.8
10:58:25.422 AM INFO NodeConfiguratorProgress node5: Transitioning from CONNECT (PT0.065S) to AUTHENTICATE
10:58:25.474 AM INFO NodeConfiguratorProgress node5: Transitioning from AUTHENTICATE (PT0.052S) to MAKE_TEMP_DIR
10:58:25.616 AM INFO NodeConfigurator Executing mktemp -d /tmp/scm_prepare_node.XXXXXXXX on node5
10:58:25.620 AM INFO NodeConfiguratorProgress node5: Transitioning from MAKE_TEMP_DIR (PT0.146S) to COPY_FILES
10:58:25.719 AM INFO NodeConfigurator Using key bundle from URL: https://archive.cloudera.com/cm6/6.2.0/allkeys.asc
10:58:26.063 AM INFO NodeConfiguratorProgress node5: Transitioning from COPY_FILES (PT0.443S) to CHMOD
10:58:26.068 AM INFO NodeConfigurator Executing chmod a+x /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.sh on node5
10:58:26.074 AM INFO NodeConfiguratorProgress node5: Transitioning from CHMOD (PT0.011S) to EXECUTE_SCRIPT
10:58:26.142 AM INFO NodeConfigurator Executing bash -c 'bash /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.sh --server_version 6.2.0 --server_build 968826 --packages /tmp/scm_prepare_node.vQZe0yDf/packages.scm --always /tmp/scm_prepare_node.vQZe0yDf/always_install.scm --x86_64 /tmp/scm_prepare_node.vQZe0yDf/x86_64_packages.scm --certtar /tmp/scm_prepare_node.vQZe0yDf/cert.tar --unlimitedJCE false --javaInstallStrategy NONE --agentUserMode ROOT --cm https://archive.cloudera.com/cm6/6.2.0 --skipCloudConfig false | tee /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.log; exit ${PIPESTATUS[0]}' on node5
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from EXECUTE_SCRIPT (PT1.071S) to SCRIPT_START
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from SCRIPT_START (PT0S) to TAKE_LOCK
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from TAKE_LOCK (PT0S) to DETECT_ROOT
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from DETECT_ROOT (PT0S) to DETECT_DISTRO
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from DETECT_DISTRO (PT0S) to DETECT_SCM
10:58:28.146 AM INFO NodeConfiguratorProgress node5: Transitioning from DETECT_SCM (PT1.001S) to REPO_INSTALL
10:58:28.146 AM INFO NodeConfiguratorProgress node5: Transitioning from REPO_INSTALL (PT0S) to REFRESH_METADATA
10:58:30.354 AM INFO JavaMelodyFacade Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json
10:58:30.355 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json, Status:200
10:58:32.079 AM INFO AgentAvroServlet (3 skipped) AgentAvroServlet: heartbeat processing stats: average=2ms, min=1ms, max=34ms.
10:58:32.150 AM INFO NodeConfiguratorProgress node5: Transitioning from REFRESH_METADATA (PT4.004S) to PACKAGE_INSTALL cloudera-manager-agent
10:58:32.160 AM WARN NodeConfigurator Command bash -c 'bash /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.sh --server_version 6.2.0 --server_build 968826 --packages /tmp/scm_prepare_node.vQZe0yDf/packages.scm --always /tmp/scm_prepare_node.vQZe0yDf/always_install.scm --x86_64 /tmp/scm_prepare_node.vQZe0yDf/x86_64_packages.scm --certtar /tmp/scm_prepare_node.vQZe0yDf/cert.tar --unlimitedJCE false --javaInstallStrategy NONE --agentUserMode ROOT --cm https://archive.cloudera.com/cm6/6.2.0 --skipCloudConfig false | tee /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.log; exit ${PIPESTATUS[0]}' on node5 finished with exit status 1
10:58:32.160 AM INFO NodeConfiguratorProgress node5: Setting PACKAGE_INSTALL cloudera-manager-agent as failed and done state
10:58:32.160 AM INFO TransportImpl Disconnected - BY_APPLICATION
10:58:35.363 AM INFO JavaMelodyFacade Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json
10:58:35.364 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json, Status:200
10:58:35.369 AM INFO JavaMelodyFacade Entering HTTP Operation: Method:POST, Path:/express-wizard/updateHostsState
10:58:35.370 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/express-wizard/updateHostsState, Status:200
10:58:35.375 AM ERROR WorkOutputs CMD id: 30 Failed to complete installation on host node5.
10:58:35.375 AM ERROR DbCommand Command 30(GlobalHostInstall) has completed. finalstate:FINISHED, success:false, msg:Failed to complete installation.
... View more
05-19-2019
12:12 AM
I am trying to add a new host, but it fails with this message at the end: "Command 30(GlobalHostInstall) has completed. finalstate:FINISHED, success:false, msg:Failed to complete installation." The following is the "/var/log/cloudera-scm-server/cloudera-scm-server.log":
10:58:25.348 AM INFO ServiceHandlerRegistry Executing command GlobalHostInstall GlobalHostInstallCommandArgs{sshPort=22, userName=butmah, password=REDACTED, passphrase=REDACTED, privateKey=REDACTED, parallelInstallCount=10, cmRepoUrl=null, gpgKeyCustomUrl=null, gpgKeyOverrideBundle=<none>, unlimitedJCE=false, javaInstallStrategy=NONE, agentUserMode=ROOT, cdhVersion=-1, cdhRelease=NONE, cdhRepoUrl=null, buildCertCommand=, sslCertHostname=null, reqId=25, skipPackageInstall=false, skipCloudConfig=false, proxyProtocol=HTTP, proxyServer=10.4.32.3, proxyPort=8080, proxyUserName=null, proxyPassword=REDACTED, hosts=[node1, node2, node3, node4, node5], existingHosts=[]}.
10:58:25.349 AM INFO CmdStep Executing command work: Execute 1 steps in sequence
10:58:25.349 AM INFO CmdStep Executing command work: Install on 1 hosts.
10:58:25.349 AM INFO CmdStep Executing command work: Install on node5.
10:58:25.350 AM INFO NodeConfiguratorService Adding password-based configurator for node5
10:58:25.350 AM INFO NodeConfiguratorService Submitted configurator for node5 with id 30
10:58:25.357 AM INFO NodeConfiguratorProgress node5: Transitioning from INIT (PT0.008S) to CONNECT
10:58:25.359 AM INFO TransportImpl Client identity string: SSH-2.0-SSHJ_0_14_0
10:58:25.361 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installretry.json, Status:200
10:58:25.368 AM INFO TransportImpl Server identity string: SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.8
10:58:25.422 AM INFO NodeConfiguratorProgress node5: Transitioning from CONNECT (PT0.065S) to AUTHENTICATE
10:58:25.474 AM INFO NodeConfiguratorProgress node5: Transitioning from AUTHENTICATE (PT0.052S) to MAKE_TEMP_DIR
10:58:25.616 AM INFO NodeConfigurator Executing mktemp -d /tmp/scm_prepare_node.XXXXXXXX on node5
10:58:25.620 AM INFO NodeConfiguratorProgress node5: Transitioning from MAKE_TEMP_DIR (PT0.146S) to COPY_FILES
10:58:25.719 AM INFO NodeConfigurator Using key bundle from URL: https://archive.cloudera.com/cm6/6.2.0/allkeys.asc
10:58:26.063 AM INFO NodeConfiguratorProgress node5: Transitioning from COPY_FILES (PT0.443S) to CHMOD
10:58:26.068 AM INFO NodeConfigurator Executing chmod a+x /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.sh on node5
10:58:26.074 AM INFO NodeConfiguratorProgress node5: Transitioning from CHMOD (PT0.011S) to EXECUTE_SCRIPT
10:58:26.142 AM INFO NodeConfigurator Executing bash -c 'bash /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.sh --server_version 6.2.0 --server_build 968826 --packages /tmp/scm_prepare_node.vQZe0yDf/packages.scm --always /tmp/scm_prepare_node.vQZe0yDf/always_install.scm --x86_64 /tmp/scm_prepare_node.vQZe0yDf/x86_64_packages.scm --certtar /tmp/scm_prepare_node.vQZe0yDf/cert.tar --unlimitedJCE false --javaInstallStrategy NONE --agentUserMode ROOT --cm https://archive.cloudera.com/cm6/6.2.0 --skipCloudConfig false | tee /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.log; exit ${PIPESTATUS[0]}' on node5
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from EXECUTE_SCRIPT (PT1.071S) to SCRIPT_START
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from SCRIPT_START (PT0S) to TAKE_LOCK
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from TAKE_LOCK (PT0S) to DETECT_ROOT
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from DETECT_ROOT (PT0S) to DETECT_DISTRO
10:58:27.145 AM INFO NodeConfiguratorProgress node5: Transitioning from DETECT_DISTRO (PT0S) to DETECT_SCM
10:58:28.146 AM INFO NodeConfiguratorProgress node5: Transitioning from DETECT_SCM (PT1.001S) to REPO_INSTALL
10:58:28.146 AM INFO NodeConfiguratorProgress node5: Transitioning from REPO_INSTALL (PT0S) to REFRESH_METADATA
10:58:30.354 AM INFO JavaMelodyFacade Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json
10:58:30.355 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json, Status:200
10:58:32.079 AM INFO AgentAvroServlet (3 skipped) AgentAvroServlet: heartbeat processing stats: average=2ms, min=1ms, max=34ms.
10:58:32.150 AM INFO NodeConfiguratorProgress node5: Transitioning from REFRESH_METADATA (PT4.004S) to PACKAGE_INSTALL cloudera-manager-agent
10:58:32.160 AM WARN NodeConfigurator Command bash -c 'bash /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.sh --server_version 6.2.0 --server_build 968826 --packages /tmp/scm_prepare_node.vQZe0yDf/packages.scm --always /tmp/scm_prepare_node.vQZe0yDf/always_install.scm --x86_64 /tmp/scm_prepare_node.vQZe0yDf/x86_64_packages.scm --certtar /tmp/scm_prepare_node.vQZe0yDf/cert.tar --unlimitedJCE false --javaInstallStrategy NONE --agentUserMode ROOT --cm https://archive.cloudera.com/cm6/6.2.0 --skipCloudConfig false | tee /tmp/scm_prepare_node.vQZe0yDf/scm_prepare_node.log; exit ${PIPESTATUS[0]}' on node5 finished with exit status 1
10:58:32.160 AM INFO NodeConfiguratorProgress node5: Setting PACKAGE_INSTALL cloudera-manager-agent as failed and done state
10:58:32.160 AM INFO TransportImpl Disconnected - BY_APPLICATION
10:58:35.363 AM INFO JavaMelodyFacade Entering HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json
10:58:35.364 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/add-hosts-wizard/installprogressdata.json, Status:200
10:58:35.369 AM INFO JavaMelodyFacade Entering HTTP Operation: Method:POST, Path:/express-wizard/updateHostsState
10:58:35.370 AM INFO JavaMelodyFacade Exiting HTTP Operation: Method:POST, Path:/express-wizard/updateHostsState, Status:200
10:58:35.375 AM ERROR WorkOutputs CMD id: 30 Failed to complete installation on host node5.
10:58:35.375 AM ERROR DbCommand Command 30(GlobalHostInstall) has completed. finalstate:FINISHED, success:false, msg:Failed to complete installation.
... View more
05-15-2019
10:50 PM
I have changed this line in my.cnf:
socket=/var/lib/mysql/mysql.sock
to this:
socket=/var/run/mysqld/mysql.sock
and that fixed it! I think Cloudera has to review its installation guide. I have faced many difficulties so far: they tell you to do A and B to get C; you do A and B, but you don't get C!
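Note: the underlying issue is the server and client pointing at different socket paths. A minimal sketch of the relevant my.cnf sections, assuming the path from the error message (the error referenced mysqld.sock rather than mysql.sock, so double-check the exact filename on your system):
[mysqld]
socket=/var/run/mysqld/mysqld.sock

[client]
socket=/var/run/mysqld/mysqld.sock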
... View more
05-15-2019
10:55 AM
First, I would like to note that with the default my.cnf that comes with a fresh installation of MySQL, the login works fine. Now to answer your question, this is what I have pasted into "my.cnf" as recommended by the Cloudera guide, and I cannot see any bind-address option:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
transaction-isolation = READ-COMMITTED
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
symbolic-links = 0

key_buffer_size = 32M
max_allowed_packet = 32M
thread_stack = 256K
thread_cache_size = 64
query_cache_limit = 8M
query_cache_size = 64M
query_cache_type = 1
max_connections = 550
#expire_logs_days = 10
#max_binlog_size = 100M

#log_bin should be on a disk with enough free space.
#Replace '/var/lib/mysql/mysql_binary_log' with an appropriate path for your
#system and chown the specified folder to the mysql user.
log_bin=/var/lib/mysql/mysql_binary_log

#In later versions of MySQL, if you enable the binary log and do not set
#a server_id, MySQL will not start. The server_id must be unique within
#the replicating group.
server_id=1

binlog_format = mixed
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

sql_mode=STRICT_ALL_TABLES
... View more
05-15-2019
05:08 AM
I installed MySQL as per this Cloudera guide, which includes changing "/etc/mysql/my.cnf" to the one they recommend. Now when I try:
sudo mysql -u root -p
I get:
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
To be able to connect to MySQL, I have to do it as follows:
mysql -u root -p -h 127.0.0.1
By the way, I have tried the following to solve it, but with no luck, same error:
sudo mkdir -p /var/run/mysqld/
sudo touch /var/run/mysqld/mysqld.pid
sudo touch /var/run/mysqld/mysqld.sock
sudo chown mysql:mysql -R /var/run/mysqld
sudo service mysql restart
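Note: a quick way to see which socket path the client is actually configured to use, and what actually exists on disk (both directories below are the candidates from this thread):
mysql --print-defaults
sudo ls -l /var/run/mysqld/ /var/lib/mysql/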
... View more
05-14-2019
10:34 AM
For apt list oracle-j2sdk1.8 I got:
tmah@masternode:~$ apt list oracle-j2sdk1.8
Listing... Done
oracle-j2sdk1.8/unknown,now 1.8.0+update181-1 amd64 [installed]
And for dpkg-query -L oracle-j2sdk1.8 I got a lot! The following are the last 20 lines:
....
....
/usr/lib/jvm/java-8-oracle-cloudera/include
/usr/lib/jvm/java-8-oracle-cloudera/include/jdwpTransport.h
/usr/lib/jvm/java-8-oracle-cloudera/include/linux
/usr/lib/jvm/java-8-oracle-cloudera/include/linux/jawt_md.h
/usr/lib/jvm/java-8-oracle-cloudera/include/linux/jni_md.h
/usr/lib/jvm/java-8-oracle-cloudera/include/classfile_constants.h
/usr/lib/jvm/java-8-oracle-cloudera/include/jni.h
/usr/lib/jvm/java-8-oracle-cloudera/include/jvmticmlr.h
/usr/lib/jvm/java-8-oracle-cloudera/include/jawt.h
/usr/lib/jvm/java-8-oracle-cloudera/include/jvmti.h
/usr/lib/jvm/java-8-oracle-cloudera/THIRDPARTYLICENSEREADME-JAVAFX.txt
/usr/lib/jvm/java-8-oracle-cloudera/LICENSE
/usr/lib/jvm/java-8-oracle-cloudera/release
/usr/share
/usr/share/doc
/usr/share/doc/oracle-j2sdk1.8
/usr/share/doc/oracle-j2sdk1.8/changelog.gz
/usr/lib/jvm/java-8-oracle-cloudera/jre/bin/ControlPanel
/usr/lib/jvm/java-8-oracle-cloudera/jre/lib/amd64/server/libjsig.so
/usr/lib/jvm/java-8-oracle-cloudera/bin/ControlPanel
/usr/lib/jvm/java-8-oracle-cloudera/man/ja
Thanks
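Note: the package listing above suggests the JDK actually lives under /usr/lib/jvm/java-8-oracle-cloudera, not /usr/java. A hedged check to confirm before pointing anything at it:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle-cloudera
$JAVA_HOME/bin/java -version
$JAVA_HOME/bin/keytool -help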
... View more
05-14-2019
05:31 AM
OK, I have chosen to install Cloudera 6.2 from scratch. Now when I run:
sudo apt-get install oracle-j2sdk1.8
it completes successfully. However, after it finishes I can't find any Java folder under /usr! And when I move to the next step of enabling auto-TLS and run:
sudo JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera /opt/cloudera/cm-agent/bin/certmanager setup --configure-services
I get the following exception:
Exception: Cannot identify a valid keytool: /usr/java/jdk1.8.0_181-cloudera/bin/keytool
And when I run "java --version" I get:
The program 'java' can be found in the following packages:
* default-jre
* gcj-5-jre-headless
* openjdk-8-jre-headless
* gcj-4.8-jre-headless
* gcj-4.9-jre-headless
* openjdk-9-jre-headless
Try: sudo apt install <selected package>
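Note: the certmanager command above hard-codes /usr/java/jdk1.8.0_181-cloudera, but the Ubuntu package may install the JDK elsewhere. A hedged way to find the real path and retry (the java-8-oracle-cloudera path is an assumption; verify it with the first command):
dpkg -L oracle-j2sdk1.8 | grep 'bin/keytool$'
sudo JAVA_HOME=/usr/lib/jvm/java-8-oracle-cloudera \
  /opt/cloudera/cm-agent/bin/certmanager setup --configure-services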
... View more
05-13-2019
06:55 AM
Great! Now, why am I still seeing Java 1.7 in Cloudera Manager and when I run "java -version"? By the way, I am using the Cloudera quickstart VM, and I suspect there is something that forces Java 1.7 and even resets JAVA_HOME to the 1.7 folder on every restart. Am I right? What should I do next?
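Note: on the CentOS-based quickstart VM, the default java binary is managed by the alternatives system, so a hedged first check is which binary is actually first on the PATH, and switching it if needed:
readlink -f "$(which java)"
sudo alternatives --config java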
... View more
05-13-2019
05:03 AM
"yum install java-1.8.0-openjdk-devel" [root@quickstart cloudera]# yum install java-1.8.0-openjdk-devel
Loaded plugins: fastestmirror, security
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: centos.activecloud.co.il
* epel: epel.scopesky.iq
* extras: centos.interhost.net.il
* updates: centos.activecloud.co.il
Resolving Dependencies
--> Running transaction check
---> Package java-1.8.0-openjdk-devel.x86_64 1:1.8.0.212.b04-0.el6_10 will be installed
--> Processing Dependency: java-1.8.0-openjdk = 1:1.8.0.212.b04-0.el6_10 for package: 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: libawt_xawt.so(SUNWprivate_1.1)(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: libjvm.so()(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: libjava.so()(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: libawt_xawt.so()(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: libawt.so()(64bit) for package: 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64
--> Running transaction check
---> Package java-1.8.0-openjdk.x86_64 1:1.8.0.212.b04-0.el6_10 will be installed
--> Processing Dependency: xorg-x11-fonts-Type1 for package: 1:java-1.8.0-openjdk-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: libgif.so.4()(64bit) for package: 1:java-1.8.0-openjdk-1.8.0.212.b04-0.el6_10.x86_64
---> Package java-1.8.0-openjdk-headless.x86_64 1:1.8.0.212.b04-0.el6_10 will be installed
--> Processing Dependency: tzdata-java >= 2014f-1 for package: 1:java-1.8.0-openjdk-headless-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: pcsc-lite-libs(x86-64) for package: 1:java-1.8.0-openjdk-headless-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: lksctp-tools(x86-64) for package: 1:java-1.8.0-openjdk-headless-1.8.0.212.b04-0.el6_10.x86_64
--> Processing Dependency: jpackage-utils for package: 1:java-1.8.0-openjdk-headless-1.8.0.212.b04-0.el6_10.x86_64
--> Running transaction check
---> Package giflib.x86_64 0:4.1.6-3.1.el6 will be installed
---> Package jpackage-utils.noarch 0:1.7.5-3.16.el6 will be installed
---> Package lksctp-tools.x86_64 0:1.0.10-7.el6 will be installed
---> Package pcsc-lite-libs.x86_64 0:1.5.2-16.el6 will be installed
---> Package tzdata-java.noarch 0:2019a-1.el6 will be installed
---> Package xorg-x11-fonts-Type1.noarch 0:7.2-11.el6 will be installed
--> Processing Dependency: ttmkfdir for package: xorg-x11-fonts-Type1-7.2-11.el6.noarch
--> Processing Dependency: ttmkfdir for package: xorg-x11-fonts-Type1-7.2-11.el6.noarch
--> Running transaction check
---> Package ttmkfdir.x86_64 0:3.0.9-32.1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==================================================================================================================
Package Arch Version Repository Size
==================================================================================================================
Installing:
java-1.8.0-openjdk-devel x86_64 1:1.8.0.212.b04-0.el6_10 updates 10 M
Installing for dependencies:
giflib x86_64 4.1.6-3.1.el6 base 37 k
java-1.8.0-openjdk x86_64 1:1.8.0.212.b04-0.el6_10 updates 228 k
java-1.8.0-openjdk-headless x86_64 1:1.8.0.212.b04-0.el6_10 updates 32 M
jpackage-utils noarch 1.7.5-3.16.el6 base 60 k
lksctp-tools x86_64 1.0.10-7.el6 base 79 k
pcsc-lite-libs x86_64 1.5.2-16.el6 base 28 k
ttmkfdir x86_64 3.0.9-32.1.el6 base 43 k
tzdata-java noarch 2019a-1.el6 updates 188 k
xorg-x11-fonts-Type1 noarch 7.2-11.el6 base 520 k
Transaction Summary
==================================================================================================================
Install 10 Package(s)
Total download size: 43 M
Installed size: 146 M
Is this ok [y/N]: y
Downloading Packages:
http://centos.activecloud.co.il/6.10/os/x86_64/Packages/giflib-4.1.6-3.1.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 502 Bad Gateway"
Trying other mirror.
(1/10): giflib-4.1.6-3.1.el6.x86_64.rpm | 37 kB 00:00
(2/10): java-1.8.0-openjdk-1.8.0.212.b04-0.el6_10.x86_64.rpm | 228 kB 00:00
(3/10): java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64.rpm | 10 MB 00:03
(4/10): java-1.8.0-openjdk-headless-1.8.0.212.b04-0.el6_10.x86_64.rpm | 32 MB 00:11
(5/10): jpackage-utils-1.7.5-3.16.el6.noarch.rpm | 60 kB 00:00
(6/10): lksctp-tools-1.0.10-7.el6.x86_64.rpm | 79 kB 00:00
(7/10): pcsc-lite-libs-1.5.2-16.el6.x86_64.rpm | 28 kB 00:00
(8/10): ttmkfdir-3.0.9-32.1.el6.x86_64.rpm | 43 kB 00:00
(9/10): tzdata-java-2019a-1.el6.noarch.rpm | 188 kB 00:00
(10/10): xorg-x11-fonts-Type1-7.2-11.el6.noarch.rpm | 520 kB 00:00
------------------------------------------------------------------------------------------------------------------
Total 1.9 MB/s | 43 MB 00:22
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : jpackage-utils-1.7.5-3.16.el6.noarch 1/10
Installing : giflib-4.1.6-3.1.el6.x86_64 2/10
Installing : lksctp-tools-1.0.10-7.el6.x86_64 3/10
Installing : pcsc-lite-libs-1.5.2-16.el6.x86_64 4/10
Installing : tzdata-java-2019a-1.el6.noarch 5/10
Installing : 1:java-1.8.0-openjdk-headless-1.8.0.212.b04-0.el6_10.x86_64 6/10
Installing : ttmkfdir-3.0.9-32.1.el6.x86_64 7/10
Installing : xorg-x11-fonts-Type1-7.2-11.el6.noarch 8/10
Installing : 1:java-1.8.0-openjdk-1.8.0.212.b04-0.el6_10.x86_64 9/10
Installing : 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64 10/10
Verifying : ttmkfdir-3.0.9-32.1.el6.x86_64 1/10
Verifying : 1:java-1.8.0-openjdk-headless-1.8.0.212.b04-0.el6_10.x86_64 2/10
Verifying : tzdata-java-2019a-1.el6.noarch 3/10
Verifying : 1:java-1.8.0-openjdk-devel-1.8.0.212.b04-0.el6_10.x86_64 4/10
Verifying : 1:java-1.8.0-openjdk-1.8.0.212.b04-0.el6_10.x86_64 5/10
Verifying : pcsc-lite-libs-1.5.2-16.el6.x86_64 6/10
Verifying : xorg-x11-fonts-Type1-7.2-11.el6.noarch 7/10
Verifying : lksctp-tools-1.0.10-7.el6.x86_64 8/10
Verifying : giflib-4.1.6-3.1.el6.x86_64 9/10
Verifying : jpackage-utils-1.7.5-3.16.el6.noarch 10/10
Installed:
java-1.8.0-openjdk-devel.x86_64 1:1.8.0.212.b04-0.el6_10
Dependency Installed:
giflib.x86_64 0:4.1.6-3.1.el6 java-1.8.0-openjdk.x86_64 1:1.8.0.212.b04-0.el6_10
java-1.8.0-openjdk-headless.x86_64 1:1.8.0.212.b04-0.el6_10 jpackage-utils.noarch 0:1.7.5-3.16.el6
lksctp-tools.x86_64 0:1.0.10-7.el6 pcsc-lite-libs.x86_64 0:1.5.2-16.el6
ttmkfdir.x86_64 0:3.0.9-32.1.el6 tzdata-java.noarch 0:2019a-1.el6
xorg-x11-fonts-Type1.noarch 0:7.2-11.el6
Complete!
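Note: after the install completes, a quick sanity check that the new JDK is visible; whether it becomes the default still depends on the alternatives configuration:
java -version
ls /usr/lib/jvm/ | grep 1.8.0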
... View more
05-13-2019
04:16 AM
By mistake I have deleted "cloudera-cdh5.repo". Did I do a catastrophe?! 😞 Can this be fixed? And I am still getting the same error, even after:
wget https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/cloudera-manager.repo -P /etc/yum.repos.d/
then
rpm --import https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera
then "Install Java Development Kit". And if you ask, when I do "cat /etc/yum.repos.d/cloudera-manager.repo", I get
... View more
05-13-2019
03:12 AM
I want to upgrade from 5.16 to 6.2, and based on this https://www.cloudera.com/documentation/enterprise/upgrade/topics/ug_cm_upgrade_server.html#cm_install_jdk I need to upgrade from 1.7 to oracle-j2sdk1.8 first, before I can start the major upgrade. Now, do I have to change anything, or just follow your instructions: "As first step please add the CM repository file as shown in documentation chapter . Then install Oracle JDK as shown here ." Thanks 🙂
... View more
05-13-2019
02:59 AM
This is the result of cat /etc/yum.repos.d/cloudera-cdh5.repo:
Please help, I need to upgrade to CDH6. Thanks
... View more
05-13-2019
01:16 AM
I have CDH 5.16 and I don't have this '/etc/yum.repos.d/cloudera.repo'. If I run 'ls /etc/yum.repos.d/cloudera-cdh5.repo' I get '/etc/yum.repos.d/cloudera-cdh5.repo'. I am getting the same error, 'No package oracle-j2sdk1.8.x86_64 available.', although I did 'yum clean all'. I can run yum with no connectivity problem, I mean I can upgrade other things on the CentOS box. What to do?
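Note: after restoring the CM repo file, it may help to force yum to rebuild its metadata and confirm the package is visible before installing; a hedged sketch:
sudo yum clean all
sudo yum makecache
yum list available | grep -i oracle-j2sdk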
... View more
05-09-2019
03:07 AM
I run the following sqoop merge import
sudo sqoop import \
--connect 'jdbc:sqlserver://1.1.1.1\test_server;database=Training' \
--username Training_user --password Training_user \
--table BigDataTest -m 1 \
--check-column lastmodified \
--merge-key id \
--incremental lastmodified \
--compression-codec=snappy \
--as-parquetfile \
--target-dir /user/hive/warehouse \
--hive-table bigDataTest \
--last-value '2019-05-06 15:07:49.917'
I get this:
. . . 19/05/09 11:00:50 INFO tool.ImportTool: Final destination exists, will run merge job.
19/05/09 11:00:50 ERROR tool.ImportTool: Import failed: java.io.IOException: Could not load jar /tmp/sqoop-root/compile/e913f7c459cf4e1cdb8a8d5802f1dac2/codegen_BigDataTest.jar into JVM. (Could not find class BigDataTest.)
at org.apache.sqoop.util.ClassLoaderStack.addJarFile(ClassLoaderStack.java:92)
at com.cloudera.sqoop.util.ClassLoaderStack.addJarFile(ClassLoaderStack.java:36)
at org.apache.sqoop.tool.ImportTool.loadJars(ImportTool.java:120)
at org.apache.sqoop.tool.ImportTool.lastModifiedMerge(ImportTool.java:456)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:522)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:621)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: java.lang.ClassNotFoundException: BigDataTest
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.sqoop.util.ClassLoaderStack.addJarFile(ClassLoaderStack.java:88)
... 11 more
Why does this happen, and how can I fix it?
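Note: the merge step reloads the codegen jar and expects to find the table class by name. One workaround that is sometimes suggested (not guaranteed, and the --bindir path below is just an illustrative choice) is to pin the generated class name and codegen output directory explicitly:
sudo sqoop import \
  --connect 'jdbc:sqlserver://1.1.1.1\test_server;database=Training' \
  --username Training_user --password Training_user \
  --table BigDataTest -m 1 \
  --check-column lastmodified \
  --merge-key id \
  --incremental lastmodified \
  --class-name BigDataTest \
  --bindir /tmp/sqoop-bigdatatest \
  --compression-codec=snappy \
  --as-parquetfile \
  --target-dir /user/hive/warehouse \
  --hive-table bigDataTest \
  --last-value '2019-05-06 15:07:49.917'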
... View more
05-05-2019
12:56 AM
I am running the following query on the same data (same tables, and the same number of records in those tables), but it gives a different result on Impala than on SQL Server. The following is on Impala:
and this is on SQL Server:
I am very sure that the data is the same in everything. Actually, I imported the data from SQL Server through Sqoop and afterwards made sure that the number of records is the same in the source and the destination, yet I still don't know why I am getting a different result on each side?!
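Note: when Impala and SQL Server disagree on the same data, the usual suspects are NULL handling, trailing spaces in strings, and floating-point aggregation order. A hedged first check is to compare plain row counts on both sides; the database and table names here are placeholders:
impala-shell -q "SELECT COUNT(*) FROM some_db.some_table"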
... View more
Labels: Impala