Member since: 08-13-2019
Posts: 84
Kudos Received: 233
Solutions: 15
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1973 | 02-28-2018 09:27 PM |
| | 3049 | 01-25-2018 09:44 PM |
| | 5930 | 09-21-2017 08:17 PM |
| | 3439 | 09-11-2017 05:21 PM |
| | 3126 | 07-13-2017 04:56 PM |
04-03-2017
06:47 PM
1 Kudo
@tuxnet I don't know if you have a specific need to use an IDE, but have you tried Zeppelin? The zeppelin-server runs on the cluster itself, and you can access it through a browser. You can submit your Spark jobs through either the %livy or %spark interpreter. %livy also provides additional features such as session timeouts, impersonation, etc.
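For example, a paragraph run through the Livy interpreter could look something like the sketch below (the %livy.pyspark binding name and the job itself are placeholders; adjust them to your interpreter settings):

```
%livy.pyspark
# Sketch of a Zeppelin paragraph submitted via the Livy interpreter.
# `sc` is the SparkContext that the interpreter provides to the paragraph.
counts = sc.parallelize(range(100)).map(lambda x: x % 10).countByValue()
print(counts)
```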
04-03-2017
06:35 PM
1 Kudo
@Bhavin Tandel What exception do you see in the Livy server logs?
03-31-2017
06:13 PM
10 Kudos
@ramya This feature refers to an external URL, not another Zeppelin instance. There are many possible failure points; for example, the other Zeppelin instance might have authentication enabled, or firewalls may be in the way. If you wish to import from another Zeppelin instance, export the notebook there as JSON and then import that JSON into this Zeppelin instance.
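(Side note, not from the original answer: if you prefer to script the copy rather than use the GUI, something along these lines should work, assuming your Zeppelin version exposes notebook REST API export/import endpoints; the hosts and note id below are placeholders.)

```python
# Hypothetical sketch: move a notebook between two Zeppelin instances as JSON
# via the notebook REST API. The endpoint names are an assumption; verify them
# against the REST API docs of your Zeppelin version, and add auth if needed.
import requests

SRC = "http://source-zeppelin:9995"   # placeholder source instance
DST = "http://target-zeppelin:9995"   # placeholder target instance
NOTE_ID = "2CXXXXXXX"                 # placeholder note id

note_json = requests.get(f"{SRC}/api/notebook/export/{NOTE_ID}").json()["body"]
resp = requests.post(f"{DST}/api/notebook/import", data=note_json)
print(resp.status_code, resp.text)
```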
03-30-2017
08:41 PM
1 Kudo
@heta desai This slide deck explains the Spark internals in a very simple way: https://spark-summit.org/2014/wp-content/uploads/2014/07/A-Deeper-Understanding-of-Spark-Internals-Aaron-Davidson.pdf Based on this, my understanding is that when you do an order by, the data within each partition is sorted first, and then a global order is established across partitions (each partition ends up holding a contiguous range of keys). Spark won't accumulate all the data in one place, because that isn't possible when the data is huge; it will try to perform all operations in memory. Corresponding Stack Overflow answer: http://stackoverflow.com/questions/32887595/how-does-spark-achieve-sort-order
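For illustration, here is a minimal PySpark sketch (not from the original post; the data and partition count are made up) showing that orderBy leaves every output partition sorted and range-partitioned, so earlier partitions hold smaller values than later ones:

```python
# Minimal sketch: inspect how rows land in partitions after orderBy.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("sort-order-demo")
         .config("spark.sql.shuffle.partitions", "3")  # small number so the output is readable
         .getOrCreate())

df = spark.createDataFrame([(x,) for x in [5, 3, 9, 1, 7, 2, 8]], ["value"])
sorted_df = df.orderBy("value")

# Each inner list is one partition; values are sorted within and across partitions,
# e.g. [[1, 2], [3, 5], [7, 8, 9]] (exact boundaries depend on the sampled ranges).
print(sorted_df.rdd.map(lambda r: r.value).glom().collect())

spark.stop()
```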
03-30-2017
02:21 PM
5 Kudos
@Predrag Minovic I tried the following settings.

1) Install Python 3.5 on all cluster nodes (I have a CentOS 7 based cluster, and I used these instructions: https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-centos-7):

[root@ctr-XXXX ~]# which python3.5
/usr/bin/python3.5
[root@ctr-XXXX ~]# python3.5 --version
Python 3.5.3
2) In zeppelin-env.sh, I added this property:

export PYSPARK_PYTHON=/usr/bin/python3.5

3) Modified my Zeppelin Spark interpreter from the GUI. After that, if I run the following paragraph, it prints Python 3.5.3 as its current version.
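The paragraph itself is not shown above; a minimal one that prints the interpreter's Python version would look something like this (hypothetical sketch):

```
%pyspark
# Hypothetical Zeppelin paragraph: print the Python version PySpark is using.
import sys
print(sys.version)   # should report 3.5.3 after the PYSPARK_PYTHON change above
```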
03-23-2017
06:19 PM
6 Kudos
I have a secured cluster with HBase, Phoenix, and Zeppelin installed. I have configured my jdbc(phoenix) interpreter as follows:

| property | value |
|---|---|
| phoenix.password | |
| phoenix.url | jdbc:phoenix:host-1,host-2,host-3:/hbase-secure |
| phoenix.user | phoenixuser |
| zeppelin.jdbc.auth.type | KERBEROS |
| zeppelin.jdbc.keytab.location | /path/to/keytab/zeppelin.server.kerberos.keytab |
| zeppelin.jdbc.principal | zeppelinxxxxxx@EXAMPLE.COM |

I am trying to run a Phoenix query as follows:

%jdbc(phoenix)
create table if not exists PRICES (
SYMBOL varchar(10),
DATE varchar(10),
TIME varchar(10),
OPEN varchar(10),
HIGH varchar(10),
LOW varchar(10),
CLOSE varchar(10),
VOLUME varchar(30),
CONSTRAINT pk PRIMARY KEY (SYMBOL, DATE, TIME)
)

I am getting the following exception:

org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=1, exceptions:
Thu Mar 23 18:07:58 UTC 2017, RpcRetryingCaller{globalStartTime=1490292478525, pause=100, retries=1}, org.apache.hadoop.hbase.exceptions.RegionOpeningException: org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region SYSTEM.CATALOG,,1490292472268.e77aa4c58fd9ca2a3c7881bd081c826f. is opening on host-3,16020,1490291571015
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3059)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1007)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1942)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:416)
at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:564)
at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:692)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:489)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What does RegionOpeningException indicate? All my region servers are alive and running, and I am able to create a table using the hbase shell:

hbase(main):001:0> whoami
hbase@EXAMPLE.COM (auth:KERBEROS)
groups: hadoop, hbase
hbase(main):002:0> create 't1','f1', 'f2', 'f3'
0 row(s) in 2.6210 seconds
=> Hbase::Table - t1
hbase(main):003:0>
03-23-2017
06:01 PM
6 Kudos
Yes, it highly depends on your specific use case. But if you want a general overview of the pros and cons of each of these frameworks, here is a good Quora thread: https://www.quora.com/What-is-the-difference-between-Apache-Spark-and-Apache-Hadoop-Map-Reduce And of course there is also the Stack Overflow thread: http://stackoverflow.com/questions/22167684/mapreduce-or-spark
03-22-2017
09:57 PM
1 Kudo
@Josh Elser Thanks for the explanation! This works.
03-22-2017
09:43 PM
4 Kudos
Hi all, I have a secured HBase cluster and I am trying to connect to Phoenix as user1. It fails with error code 103:

[user1@zk-host-2 bin]$ /usr/hdp/current/phoenix-client/bin/sqlline.py jdbc:phoenix:zk-host-1,zk-host-2,zk-host-3:/hbase-secure
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:jdbc:phoenix:zk-host-1,zk-host-2,zk-host-3:/hbase-secure none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:jdbc:phoenix:zk-host-1,zk-host-2,zk-host-3:/hbase-secure
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/grid/0/hdp/2.6.0.2-55/phoenix/phoenix-4.7.0.2.6.0.2-55-client.j
ar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/grid/0/hdp/2.6.0.2-55/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
17/03/22 21:36:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: ERROR 103 (08004): Unable to establish connection. (state=08004,code=103)
java.sql.SQLException: ERROR 103 (08004): Unable to establish connection.
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:442)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.normalize(PhoenixEmbeddedDriver.java:395)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:217)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:203)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:804)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.io.IOException: Login failure for phoenix from keytab zk-host-1,zk-host-2,zk-host-3: javax.security.auth.login.LoginException: Unable to obtain password from user
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1098)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:307)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.login(User.java:386)
at org.apache.hadoop.hbase.security.User.login(User.java:253)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.normalize(PhoenixEmbeddedDriver.java:386)
... 17 more
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:860)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:723)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:588)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1089)
... 21 more
sqlline version 1.1.8
However, if I try to access the hbase shell using the same user, it succeeds:

hbase(main):001:0> whoami
user1@EXAMPLE.COM (auth:KERBEROS)
groups: user1, users, hivetest
hbase(main):002:0> create 't2', 'f1', 'f2', 'f3'
0 row(s) in 1.6400 seconds
=> Hbase::Table - t2
hbase(main):003:0> !quit
Am I missing any configs?
Labels:
- Apache HBase
- Apache Phoenix
02-23-2017
10:36 PM
2 Kudos
I am trying to do a distcp with encrypted files as shown below (please note that /user/test_user is an encrypted directory). Scenario: run the following commands:

kdestroy
kinit -kt ~/hadoopqa/keytabs/test_user.headless.keytab test_user@EXAMPLE.COM
hdfs dfs -copyFromLocal /etc/passwd /user/test_user
/usr/hdp/current/hadoop-client/bin/hadoop distcp /user/test_user/passwd /user/test_user/dest

I am getting this exception:

17/02/15 00:08:12 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, overwrite=false, skipCRC=false, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[/user/test_user/passwd], targetPath=/user/test_user/dest, targetPathExists=true, filtersFile='null'}
17/02/15 00:08:12 INFO client.RMProxy: Connecting to ResourceManager at mynode.example.com/XX.XX.XX.XX:8050
17/02/15 00:08:12 INFO client.AHSProxy: Connecting to Application History server at mynode.example.com/XX.XX.XX.XX:10200
17/02/15 00:08:12 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 78 for test_user on XX.XX.XX.XX:8020
17/02/15 00:08:12 INFO security.TokenCache: Got dt for hdfs://mynode.example.com test_user)
17/02/15 00:08:12 INFO security.TokenCache: Got dt for hdfs://mynode.example.com masterKeyId=2)
17/02/15 00:08:13 INFO tools.SimpleCopyListing: Paths (files+dirs) cnt = 1; dirCnt = 0
17/02/15 00:08:13 INFO tools.SimpleCopyListing: Build file listing completed.
17/02/15 00:08:13 INFO tools.DistCp: Number of paths in the copy list: 1
17/02/15 00:08:13 INFO tools.DistCp: Number of paths in the copy list: 1
17/02/15 00:08:13 INFO client.RMProxy: Connecting to ResourceManager at mynode.example.com/XX.XX.XX.XX:8050
17/02/15 00:08:13 INFO client.AHSProxy: Connecting to Application History server at mynode.example.com/XX.XX.XX.XX:10200
17/02/15 00:08:14 INFO mapreduce.JobSubmitter: number of splits:1
17/02/15 00:08:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1487064603554_0002
17/02/15 00:08:14 INFO mapreduce.JobSubmitter: Kind: kms-dt, Service: XX.XX.XX.XX:9292, Ident: (owner=test_user, renewer=yarn, realUser=, issueDate=1487117292703, maxDate=1487722092703, sequenceNumber=2, masterKeyId=2)
17/02/15 00:08:14 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: XX.XX.XX.XX:8020, Ident: (HDFS_DELEGATION_TOKEN token 78 for test_user)
17/02/15 00:08:14 INFO impl.TimelineClientImpl: Timeline service address: https://mynode.example.com:8190/ws/v1/timeline/
17/02/15 00:08:15 INFO impl.YarnClientImpl: Submitted application application_1487064603554_0002
17/02/15 00:08:15 INFO mapreduce.Job: The url to track the job: https://mynode.example.com:8090/proxy/application_1487064603554_0002/
17/02/15 00:08:15 INFO tools.DistCp: DistCp job-id: job_1487064603554_0002
17/02/15 00:08:15 INFO mapreduce.Job: Running job: job_1487064603554_0002
17/02/15 00:08:24 INFO mapreduce.Job: Job job_1487064603554_0002 running in uber mode : false
17/02/15 00:08:24 INFO mapreduce.Job: map 0% reduce 0%
17/02/15 00:08:35 INFO mapreduce.Job: Task Id : attempt_1487064603554_0002_m_000000_0, Status : FAILED
Error: java.io.IOException: File copy failed: hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/passwd
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:287)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:255)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/passwd
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:283)
... 10 more
Caused by: java.io.IOException: Check-sum mismatch between hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/.distcp.tmp.attempt_1487064603554_0002_m_000000_0.
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
... 11 more
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/02/15 00:08:48 INFO mapreduce.Job: map 100% reduce 0%
17/02/15 00:08:48 INFO mapreduce.Job: Task Id : attempt_1487064603554_0002_m_000000_1, Status : FAILED
Error: java.io.IOException: File copy failed: hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/passwd
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:287)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:255)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/passwd
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:283)
... 10 more
Caused by: java.io.IOException: Check-sum mismatch between hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/.distcp.tmp.attempt_1487064603554_0002_m_000000_1.
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
... 11 more
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/02/15 00:08:49 INFO mapreduce.Job: map 0% reduce 0%
17/02/15 00:08:58 INFO mapreduce.Job: Task Id : attempt_1487064603554_0002_m_000000_2, Status : FAILED
Error: java.io.IOException: File copy failed: hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/passwd
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:287)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:255)
at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1833)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/passwd
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:283)
... 10 more
Caused by: java.io.IOException: Check-sum mismatch between hdfs://mynode.example.com hdfs://mynode.example.com:8020/user/test_user/dest/.distcp.tmp.attempt_1487064603554_0002_m_000000_2.
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130)
at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
... 11 more
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/02/15 00:09:08 INFO mapreduce.Job: map 100% reduce 0%
17/02/15 00:09:12 INFO mapreduce.Job: Job job_1487064603554_0002 failed with state FAILED due to: Task failed task_1487064603554_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/02/15 00:09:12 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=41166
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=41166
Total vcore-milliseconds taken by all map tasks=41166
Total megabyte-milliseconds taken by all map tasks=42153984
17/02/15 00:09:12 ERROR tools.DistCp: Exception encountered
java.io.IOException: DistCp failure: Job job_1487064603554_0002 has failed: Task failed task_1487064603554_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
at org.apache.hadoop.tools.DistCp.waitForJobCompletion(DistCp.java:215)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:158)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)