Member since
03-23-2015
1288
Posts
114
Kudos Received
98
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3344 | 06-11-2020 02:45 PM
 | 5045 | 05-01-2020 12:23 AM
 | 2851 | 04-21-2020 03:38 PM
 | 3561 | 04-14-2020 12:26 AM
 | 2346 | 02-27-2020 05:51 PM
06-27-2018
10:23 AM
1 Kudo
The solution to your issue likely depends on what type of files back your table, but if you are using Parquet, this option is probably what you are looking for:

set PARQUET_FALLBACK_SCHEMA_RESOLUTION=name;

https://www.cloudera.com/documentation/enterprise/5-8-x/topics/impala_parquet.html#parquet_schema_evolution

The issue is that, by default, Impala expects every entry in the Parquet schema to be at the same ordinal position. If you add a column to your schema anywhere but at the end, Impala starts throwing errors. The option above makes Impala flexible about the ordinal positions within the Parquet files.
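For reference, a minimal impala-shell session applying that option might look like this (the table name is hypothetical):

```sql
-- Run in impala-shell. By default Impala resolves Parquet columns by
-- ordinal position, so a column inserted mid-schema misaligns reads.
SET PARQUET_FALLBACK_SCHEMA_RESOLUTION=name;

-- With name-based resolution, the query works even when newer Parquet
-- files carry columns added in the middle of the schema.
SELECT * FROM my_table;
```

Note that the setting applies per session (or per job via query options), so it needs to be set wherever the affected table is queried.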
05-24-2018
07:27 AM
I have. It gives me no information apart from the error code:
Log Upload Time: Thu May 24 18:56:02 +0530 2018
Log Length: 34607
2018-05-24 18:54:27,921 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1527163745858_0517_000001
2018-05-24 18:54:29,584 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2018-05-24 18:54:29,584 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@39d12b10)
2018-05-24 18:54:30,453 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: RM_DELEGATION_TOKEN, Service: 172.31.4.192:8032, Ident: (RM_DELEGATION_TOKEN owner=hue, renewer=oozie mr token, realUser=oozie, issueDate=1527168258855, maxDate=1527773058855, sequenceNumber=613, masterKeyId=2)
2018-05-24 18:54:33,015 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-05-24 18:54:33,042 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2018-05-24 18:54:33,532 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-05-24 18:54:33,534 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-05-24 18:54:33,649 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2018-05-24 18:54:33,681 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2018-05-24 18:54:33,683 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2018-05-24 18:54:33,684 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2018-05-24 18:54:33,684 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2018-05-24 18:54:33,685 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2018-05-24 18:54:33,686 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2018-05-24 18:54:33,687 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2018-05-24 18:54:33,821 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020]
2018-05-24 18:54:33,937 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020]
2018-05-24 18:54:34,022 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020]
2018-05-24 18:54:34,076 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2018-05-24 18:54:34,225 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2018-05-24 18:54:35,260 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-05-24 18:54:35,465 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-05-24 18:54:35,465 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2018-05-24 18:54:35,495 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1527163745858_0517 to jobTokenSecretManager
2018-05-24 18:54:35,843 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1527163745858_0517 because: not enabled;
2018-05-24 18:54:35,941 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1527163745858_0517 = 0. Number of splits = 1
2018-05-24 18:54:35,941 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1527163745858_0517 = 0
2018-05-24 18:54:35,941 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1527163745858_0517Job Transitioned from NEW to INITED
2018-05-24 18:54:35,943 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1527163745858_0517.
2018-05-24 18:54:36,091 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100
2018-05-24 18:54:36,176 INFO [Socket Reader #1 for port 43856] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 43856
2018-05-24 18:54:36,266 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2018-05-24 18:54:36,284 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-05-24 18:54:36,326 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at ip-172-31-5-201.ap-south-1.compute.internal/172.31.5.201:43856
2018-05-24 18:54:36,329 INFO [IPC Server listener on 43856] org.apache.hadoop.ipc.Server: IPC Server listener on 43856: starting
2018-05-24 18:54:36,585 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-05-24 18:54:36,655 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-05-24 18:54:36,679 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2018-05-24 18:54:36,691 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-05-24 18:54:36,697 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2018-05-24 18:54:36,697 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2018-05-24 18:54:36,701 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2018-05-24 18:54:36,701 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2018-05-24 18:54:36,761 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 36747
2018-05-24 18:54:36,761 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2018-05-24 18:54:36,898 INFO [main] org.mortbay.log: Extract jar:file:/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/jars/hadoop-yarn-common-2.6.0-cdh5.10.1.jar!/webapps/mapreduce to ./tmp/Jetty_0_0_0_0_36747_mapreduce____.wrg2hq/webapp
2018-05-24 18:54:38,183 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:36747
2018-05-24 18:54:38,184 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 36747
2018-05-24 18:54:39,423 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2018-05-24 18:54:39,516 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
2018-05-24 18:54:39,523 INFO [Socket Reader #1 for port 37627] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 37627
2018-05-24 18:54:39,578 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-05-24 18:54:39,588 INFO [IPC Server listener on 37627] org.apache.hadoop.ipc.Server: IPC Server listener on 37627: starting
2018-05-24 18:54:40,346 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2018-05-24 18:54:40,346 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2018-05-24 18:54:40,346 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2018-05-24 18:54:40,565 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at ip-172-31-4-192.ap-south-1.compute.internal/172.31.4.192:8030
2018-05-24 18:54:40,873 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:25600, vCores:8>
2018-05-24 18:54:40,873 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.users.hue
2018-05-24 18:54:40,886 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2018-05-24 18:54:40,887 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2018-05-24 18:54:40,959 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1527163745858_0517Job Transitioned from INITED to SETUP
2018-05-24 18:54:40,991 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2018-05-24 18:54:41,012 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1527163745858_0517Job Transitioned from SETUP to RUNNING
2018-05-24 18:54:41,123 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1527163745858_0517_m_000000 Task Transitioned from NEW to SCHEDULED
2018-05-24 18:54:41,135 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2018-05-24 18:54:41,294 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:1024, vCores:1>
2018-05-24 18:54:41,439 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1527163745858_0517, File: hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/hue/.staging/job_1527163745858_0517/job_1527163745858_0517_1.jhist
2018-05-24 18:54:41,894 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:54:42,056 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1527163745858_0517: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:180224, vCores:35> knownNMs=6
2018-05-24 18:54:42,324 WARN [DataStreamer for file /user/hue/.staging/job_1527163745858_0517/job_1527163745858_0517_1_conf.xml] org.apache.hadoop.hdfs.DFSClient: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1281)
at java.lang.Thread.join(Thread.java:1355)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:951)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:689)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:878)
2018-05-24 18:54:42,382 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020]
2018-05-24 18:54:43,088 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2018-05-24 18:54:43,205 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1527163745858_0517_01_000002 to attempt_1527163745858_0517_m_000000_0
2018-05-24 18:54:43,206 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:54:43,297 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Job jar is not present. Not adding any jar to the list of resources.
2018-05-24 18:54:43,352 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /user/hue/.staging/job_1527163745858_0517/job.xml
2018-05-24 18:54:43,654 WARN [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.util.MRApps: cache archive (mapreduce.job.cache.archives) hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/hue/mysql-connector-java-5.0.8-bin.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/oozie/share/lib/lib_20170413135352/sqoop/mysql-connector-java-5.0.8-bin.jar This will be an error in Hadoop 2.0
2018-05-24 18:54:44,041 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #1 tokens and #1 secret keys for NM use for launching container
2018-05-24 18:54:44,041 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 2
2018-05-24 18:54:44,041 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2018-05-24 18:54:45,097 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m
2018-05-24 18:54:45,101 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2018-05-24 18:54:45,109 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1527163745858_0517: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:179200, vCores:34> knownNMs=6
2018-05-24 18:54:45,146 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1527163745858_0517_01_000002 taskAttempt attempt_1527163745858_0517_m_000000_0
2018-05-24 18:54:45,149 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1527163745858_0517_m_000000_0
2018-05-24 18:54:45,350 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1527163745858_0517_m_000000_0 : 13562
2018-05-24 18:54:45,352 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1527163745858_0517_m_000000_0] using containerId: [container_1527163745858_0517_01_000002 on NM: [ip-172-31-1-207.ap-south-1.compute.internal:8041]
2018-05-24 18:54:45,355 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2018-05-24 18:54:45,356 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1527163745858_0517_m_000000 Task Transitioned from SCHEDULED to RUNNING
2018-05-24 18:54:52,626 INFO [Socket Reader #1 for port 37627] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1527163745858_0517 (auth:SIMPLE)
2018-05-24 18:54:52,800 INFO [IPC Server handler 5 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1527163745858_0517_m_000002 asked for a task
2018-05-24 18:54:52,818 INFO [IPC Server handler 5 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1527163745858_0517_m_000002 given task: attempt_1527163745858_0517_m_000000_0
2018-05-24 18:55:08,465 INFO [Socket Reader #1 for port 37627] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1527163745858_0517 (auth:SIMPLE)
2018-05-24 18:55:20,711 INFO [Socket Reader #1 for port 37627] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1527163745858_0517 (auth:SIMPLE)
2018-05-24 18:55:20,793 INFO [IPC Server handler 3 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1527163745858_0517_m_000000_0 is : 1.0
2018-05-24 18:55:30,659 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1527163745858_0517_01_000002
2018-05-24 18:55:30,660 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:55:30,660 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1527163745858_0517_m_000000_0: Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
2018-05-24 18:55:30,662 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_0 TaskAttempt Transitioned from RUNNING to FAILED
2018-05-24 18:55:30,711 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1527163745858_0517_01_000002 taskAttempt attempt_1527163745858_0517_m_000000_0
2018-05-24 18:55:30,719 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_1 TaskAttempt Transitioned from NEW to UNASSIGNED
2018-05-24 18:55:30,725 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on node ip-172-31-1-207.ap-south-1.compute.internal
2018-05-24 18:55:30,726 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt_1527163745858_0517_m_000000_1 to list of failed maps
2018-05-24 18:55:31,660 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:55:31,688 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1527163745858_0517: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:71168, vCores:40> knownNMs=6
2018-05-24 18:55:32,700 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2018-05-24 18:55:32,700 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1527163745858_0517_01_000003, NodeId: ip-172-31-1-207.ap-south-1.compute.internal:8041, NodeHttpAddress: ip-172-31-1-207.ap-south-1.compute.internal:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 172.31.1.207:8041 }, ] to fast fail map
2018-05-24 18:55:32,700 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
2018-05-24 18:55:32,700 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1527163745858_0517_01_000003 to attempt_1527163745858_0517_m_000000_1
2018-05-24 18:55:32,700 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:55:32,710 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m
2018-05-24 18:55:32,710 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_1 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2018-05-24 18:55:32,725 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1527163745858_0517_01_000003 taskAttempt attempt_1527163745858_0517_m_000000_1
2018-05-24 18:55:32,725 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1527163745858_0517_m_000000_1
2018-05-24 18:55:32,991 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1527163745858_0517_m_000000_1 : 13562
2018-05-24 18:55:32,993 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1527163745858_0517_m_000000_1] using containerId: [container_1527163745858_0517_01_000003 on NM: [ip-172-31-1-207.ap-south-1.compute.internal:8041]
2018-05-24 18:55:32,993 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_1 TaskAttempt Transitioned from ASSIGNED to RUNNING
2018-05-24 18:55:33,715 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1527163745858_0517: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:74240, vCores:41> knownNMs=6
2018-05-24 18:55:39,936 INFO [Socket Reader #1 for port 37627] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1527163745858_0517 (auth:SIMPLE)
2018-05-24 18:55:40,020 INFO [IPC Server handler 9 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1527163745858_0517_m_000003 asked for a task
2018-05-24 18:55:40,021 INFO [IPC Server handler 9 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1527163745858_0517_m_000003 given task: attempt_1527163745858_0517_m_000000_1
2018-05-24 18:55:51,261 INFO [Socket Reader #1 for port 37627] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1527163745858_0517 (auth:SIMPLE)
2018-05-24 18:55:51,314 INFO [IPC Server handler 22 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1527163745858_0517_m_000000_1 is : 0.0
2018-05-24 18:55:51,472 INFO [IPC Server handler 25 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1527163745858_0517_m_000000_1 is : 1.0
2018-05-24 18:55:51,562 INFO [IPC Server handler 23 on 37627] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1527163745858_0517_m_000000_1
2018-05-24 18:55:51,595 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_1 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER
2018-05-24 18:55:51,595 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1527163745858_0517_m_000000_1
2018-05-24 18:55:51,603 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1527163745858_0517_m_000000 Task Transitioned from RUNNING to SUCCEEDED
2018-05-24 18:55:51,604 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2018-05-24 18:55:51,604 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1527163745858_0517Job Transitioned from RUNNING to COMMITTING
2018-05-24 18:55:51,621 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_COMMIT
2018-05-24 18:55:51,859 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent
2018-05-24 18:55:51,860 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1527163745858_0517Job Transitioned from COMMITTING to SUCCEEDED
2018-05-24 18:55:51,883 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2018-05-24 18:55:51,883 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2018-05-24 18:55:51,883 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2018-05-24 18:55:51,883 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2018-05-24 18:55:51,884 INFO [Thread-73] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2018-05-24 18:55:51,884 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2018-05-24 18:55:51,892 INFO [Thread-73] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2018-05-24 18:55:51,968 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:55:52,261 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/hue/.staging/job_1527163745858_0517/job_1527163745858_0517_1.jhist to hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517-1527168259391-hue-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DERP_new%2Dcopy%3AA%3Dsqoop%2Dd90b-1527168351857-1-0-SUCCEEDED-root.users.hue-1527168280941.jhist_tmp
2018-05-24 18:55:52,647 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517-1527168259391-hue-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DERP_new%2Dcopy%3AA%3Dsqoop%2Dd90b-1527168351857-1-0-SUCCEEDED-root.users.hue-1527168280941.jhist_tmp
2018-05-24 18:55:52,677 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/hue/.staging/job_1527163745858_0517/job_1527163745858_0517_1_conf.xml to hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517_conf.xml_tmp
2018-05-24 18:55:52,980 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1527163745858_0517_01_000003
2018-05-24 18:55:52,981 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:55:52,981 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1527163745858_0517_m_000000_1:
2018-05-24 18:55:52,982 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1527163745858_0517_m_000000_1 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED
2018-05-24 18:55:52,998 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1527163745858_0517_01_000003 taskAttempt attempt_1527163745858_0517_m_000000_1
2018-05-24 18:55:53,034 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517_conf.xml_tmp
2018-05-24 18:55:53,060 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517.summary_tmp to hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517.summary
2018-05-24 18:55:53,068 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517_conf.xml_tmp to hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517_conf.xml
2018-05-24 18:55:53,077 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517-1527168259391-hue-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DERP_new%2Dcopy%3AA%3Dsqoop%2Dd90b-1527168351857-1-0-SUCCEEDED-root.users.hue-1527168280941.jhist_tmp to hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020/user/history/done_intermediate/hue/job_1527163745858_0517-1527168259391-hue-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DERP_new%2Dcopy%3AA%3Dsqoop%2Dd90b-1527168351857-1-0-SUCCEEDED-root.users.hue-1527168280941.jhist
2018-05-24 18:55:53,109 INFO [Thread-73] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2018-05-24 18:55:53,210 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to
2018-05-24 18:55:53,211 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://ip-172-31-4-192.ap-south-1.compute.internal:19888/jobhistory/job/job_1527163745858_0517
2018-05-24 18:55:53,237 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2018-05-24 18:55:54,247 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:0
2018-05-24 18:55:54,249 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020 /user/hue/.staging/job_1527163745858_0517
2018-05-24 18:55:54,278 INFO [Thread-73] org.apache.hadoop.ipc.Server: Stopping server on 37627
2018-05-24 18:55:54,386 INFO [IPC Server listener on 37627] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 37627
2018-05-24 18:55:54,390 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-05-24 18:55:54,495 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2018-05-24 18:55:54,515 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted
2018-05-24 18:55:54,540 INFO [Thread-73] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Job end notification started for jobID : job_1527163745858_0517
2018-05-24 18:55:54,550 INFO [Thread-73] org.mortbay.log: Job end notification attempts left 0
2018-05-24 18:55:54,550 INFO [Thread-73] org.mortbay.log: Job end notification trying http://ip-172-31-4-192.ap-south-1.compute.internal:11000/oozie/callback?id=0000003-180524173424234-oozie-oozi-W@sqoop-d90b&status=SUCCEEDED
2018-05-24 18:55:54,612 INFO [Thread-73] org.mortbay.log: Job end notification to http://ip-172-31-4-192.ap-south-1.compute.internal:11000/oozie/callback?id=0000003-180524173424234-oozie-oozi-W@sqoop-d90b&status=SUCCEEDED succeeded
2018-05-24 18:55:54,612 INFO [Thread-73] org.mortbay.log: Job end notification succeeded for job_1527163745858_0517
2018-05-24 18:55:59,639 INFO [Thread-73] org.apache.hadoop.ipc.Server: Stopping server on 43856
2018-05-24 18:55:59,674 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-05-24 18:55:59,682 INFO [IPC Server listener on 43856] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 43856
2018-05-24 18:55:59,712 INFO [Thread-73] org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0
Please help me fix this. I have been trying for quite some time.
05-23-2018
01:06 AM
Your Hive Metastore server is probably not up and running. You need to check the HMS server log to see what is happening.
05-21-2018
06:10 PM
You will need to migrate the backend database to the new host. How to do it depends on which DB you are using; please refer to the DB vendor's documentation for details. Once the DB migration is done, you can remove Sentry and add it to the new host again as normal.
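As a rough sketch of the backend move — assuming a MySQL backend, and with all hostnames and credentials below hypothetical:

```shell
# Stop the Sentry service in Cloudera Manager before dumping.

# 1) Dump the Sentry backend DB on the old host:
mysqldump -h old-db-host.example.com -u sentry -p sentry > sentry_backup.sql

# 2) Restore it on the new host (create the database and grants there first):
mysql -h new-db-host.example.com -u sentry -p sentry < sentry_backup.sql

# 3) When re-adding the Sentry service, point its database settings
#    at new-db-host.example.com before starting it.
```

For PostgreSQL or Oracle backends, the equivalent vendor dump/restore tooling applies.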
05-03-2018
09:54 PM
Hi Vitalii, I think you are correct, based on the code below:

https://github.com/cloudera/sqoop/blob/cdh5-1.4.6_5.14.0/src/java/org/apache/sqoop/SqoopOptions.java#L1719-L1721

public void setInputFieldsTerminatedBy(char c) {
  this.inputDelimiters.setFieldsTerminatedBy(c);
}

https://github.com/cloudera/sqoop/blob/cdh5-1.4.6_5.14.0/src/java/org/apache/sqoop/SqoopOptions.java#L1816-L1818

public void setFieldsTerminatedBy(char c) {
  this.outputDelimiters.setFieldsTerminatedBy(c);
}

However, I can see that when --direct is used:

1. It calls the DirectMySQLManager class:

https://github.com/cloudera/sqoop/blob/cdh5-1.4.6_5.14.0/src/java/org/apache/sqoop/manager/DirectMySQLManager.java#L103

public void exportTable(com.cloudera.sqoop.manager.ExportJobContext context)
    throws IOException, ExportException {
  context.setConnManager(this);
  MySQLExportJob exportJob = new MySQLExportJob(context);
  exportJob.runExport();
}

2. In the MySQLExportJob class, it actually uses the getOutputFieldDelim() function:

https://github.com/cloudera/sqoop/blob/cdh5-1.4.6_5.14.0/src/java/org/apache/sqoop/mapreduce/MySQLExportJob.java#L57-L58

conf.setInt(MySQLUtils.OUTPUT_FIELD_DELIM_KEY,
    options.getOutputFieldDelim());

This explains why we need to use --fields-terminated-by rather than --input-fields-terminated-by: the data appears to be treated as output for MySQL, since the code uses MySQLUtils.OUTPUT_FIELD_DELIM_KEY. I am not sure if this is expected behavior or a bug. I will follow up with our engineering team. Look out for another update sometime next week.
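In the meantime, the practical takeaway can be sketched as a command line (the connection, table, and path values below are hypothetical):

```shell
# Hypothetical connection/table/path values. With a --direct MySQL
# export, the delimiter of the HDFS data must be passed via
# --fields-terminated-by, not --input-fields-terminated-by.
sqoop export --direct \
  --connect jdbc:mysql://dbhost.example.com/salesdb \
  --username sqoop_user -P \
  --table orders \
  --export-dir /user/hive/warehouse/orders \
  --fields-terminated-by ','
```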
04-26-2018
12:54 PM
1 Kudo
Hello, We seem to have a similar problem with impala-shell, also on CDH 5.13.2 (and now 5.13.3). We are using Active Directory KDCs with a one-way trust established between a writable Hadoop AD (KRB.DOMAIN.COM) and our main user AD (USERS.DOMAIN.COM). In addition, our servers are deployed to a third domain (server.domain.com) which does not have an associated KDC.

krb5.conf file:

[libdefaults]
default_realm = KRB.DOMAIN.COM
dns_lookup_kdc = true
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts aes128-cts rc4-hmac
default_tkt_enctypes = aes256-cts aes128-cts rc4-hmac
permitted_enctypes = aes256-cts aes128-cts rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
rdns = false
#default_ccache_name = KEYRING:persistent:%{uid}
[realms]
KRB.DOMAIN.COM = {
kdc = krb.domain.com
admin_server = krb.domain.com
default_domain = krb.domain.com
}
USERS.DOMAIN.COM = {
kdc = users.domain.com
admin_server = users.domain.com
}
[domain_realm]
krb.domain.com = KRB.DOMAIN.COM
.krb.domain.com = KRB.DOMAIN.COM
users.domain.com = USERS.DOMAIN.COM
.users.domain.com = USERS.DOMAIN.COM

Below is how we invoke impala-shell and the tracing statements I'm seeing after manipulating the KRB5_TRACE env variable:

[user1@edgenode ~]$ \impala-shell -k --ssl -i daemonnode.server.domain.com:21000
Starting Impala Shell using Kerberos authentication
Using service name 'impala'
SSL is enabled. Impala server certificates will NOT be verified (set --ca_cert to change)
[22712] 1524768162.661368: ccselect can't find appropriate cache for server principal impala/daemonnode.server.domain.com@
[22712] 1524768162.661450: Getting credentials user1@USERS.DOMAIN.COM -> impala/daemonnode.server.domain.com@ using ccache FILE:/tmp/krb5cc_738475
[22712] 1524768162.661513: Retrieving user1@USERS.DOMAIN.COM -> impala/daemonnode.server.domain.com@ from FILE:/tmp/krb5cc_738475 with result: -1765328243/Matching credential not found (filename: /tmp/krb5cc_738475)
[22712] 1524768162.661558: Retrying user1@USERS.DOMAIN.COM -> impala/daemonnode.server.domain.com@USERS.DOMAIN.COM with result: -1765328243/Matching credential not found (filename: /tmp/krb5cc_738475)
[22712] 1524768162.661563: Server has referral realm; starting with impala/daemonnode.server.domain.com@USERS.DOMAIN.COM
[22712] 1524768162.661627: Retrieving user1@USERS.DOMAIN.COM -> krbtgt/USERS.DOMAIN.COM@USERS.DOMAIN.COM from FILE:/tmp/krb5cc_738475 with result: 0/Success
[22712] 1524768162.661633: Starting with TGT for client realm: user1@USERS.DOMAIN.COM -> krbtgt/USERS.DOMAIN.COM@USERS.DOMAIN.COM
[22712] 1524768162.661640: Requesting tickets for impala/daemonnode.server.domain.com@USERS.DOMAIN.COM, referrals on
[22712] 1524768162.661662: Generated subkey for TGS request: aes256-cts/56A9
[22712] 1524768162.661696: etypes requested in TGS request: aes256-cts, aes128-cts, rc4-hmac
[22712] 1524768162.661790: Encoding request body and padata into FAST request
[22712] 1524768162.661917: Sending request (9771 bytes) to USERS.DOMAIN.COM
[22712] 1524768162.662009: Resolving hostname users.domain.com
[22712] 1524768162.701885: Initiating TCP connection to stream XXX.XXX.XXX.XXX:88
[22712] 1524768162.739233: Sending TCP request to stream XXX.XXX.XXX.XXX:88
[22712] 1524768162.836656: Received answer (351 bytes) from stream XXX.XXX.XXX.XXX:88
[22712] 1524768162.836673: Terminating TCP connection to stream XXX.XXX.XXX.XXX:88
[22712] 1524768164.121113: Response was not from master KDC
[22712] 1524768164.121153: Decoding FAST response
[22712] 1524768164.121209: TGS request result: -1765328377/Server not found in Kerberos database
[22712] 1524768164.121230: Local realm referral failed; trying fallback realm SERVER.DOMAIN.COM
[22712] 1524768164.121313: Retrieving user1@USERS.DOMAIN.COM -> krbtgt/SERVER.DOMAIN.COM@SERVER.DOMAIN.COM from FILE:/tmp/krb5cc_738475 with result: -1765328243/Matching credential not found (filename: /tmp/krb5cc_738475)
[22712] 1524768164.121366: Retrieving user1@USERS.DOMAIN.COM -> krbtgt/USERS.DOMAIN.COM@USERS.DOMAIN.COM from FILE:/tmp/krb5cc_738475 with result: 0/Success
[22712] 1524768164.121372: Starting with TGT for client realm: user1@USERS.DOMAIN.COM -> krbtgt/USERS.DOMAIN.COM@USERS.DOMAIN.COM
[22712] 1524768164.121417: Retrieving user1@USERS.DOMAIN.COM -> krbtgt/SERVER.DOMAIN.COM@SERVER.DOMAIN.COM from FILE:/tmp/krb5cc_738475 with result: -1765328243/Matching credential not found (filename: /tmp/krb5cc_738475)
[22712] 1524768164.121422: Requesting TGT krbtgt/SERVER.DOMAIN.COM@USERS.DOMAIN.COM using TGT krbtgt/USERS.DOMAIN.COM@USERS.DOMAIN.COM
[22712] 1524768164.121434: Generated subkey for TGS request: aes256-cts/31FF
[22712] 1524768164.121454: etypes requested in TGS request: aes256-cts, aes128-cts, rc4-hmac
[22712] 1524768164.121530: Encoding request body and padata into FAST request
[22712] 1524768164.121623: Sending request (9748 bytes) to USERS.DOMAIN.COM
[22712] 1524768164.121649: Resolving hostname users.domain.com
[22712] 1524768164.122515: Initiating TCP connection to stream YYY.YYY.YYY.YYY:88
[22712] 1524768164.126676: Sending TCP request to stream YYY.YYY.YYY.YYY:88
[22712] 1524768164.135666: Received answer (329 bytes) from stream YYY.YYY.YYY.YYY:88
[22712] 1524768164.135673: Terminating TCP connection to stream YYY.YYY.YYY.YYY:88
[22712] 1524768164.137693: Response was not from master KDC
[22712] 1524768164.137706: Decoding FAST response
[22712] 1524768164.137731: TGS request result: -1765328377/Server not found in Kerberos database
Error connecting: TTransportException, Could not start SASL: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server not found in Kerberos database)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.10.0-cdh5.13.3 (15a453e) built on Sat Mar 17 03:48:31 PDT 2018)
Want to know what version of Impala you're connected to? Run the VERSION command to
find out!
***********************************************************************************
[Not connected] > exit;
Goodbye user1

I think the issue may be in which service principal impala-shell requests a ticket for. It should be asking for impala/daemonnode.server.domain.com@KRB.DOMAIN.COM, but I'm not sure it ever does. When I obtain the keytab for that service principal and kinit against it, impala-shell connects. Additionally, I've been able to authenticate via Kerberos using Tableau (and the Cloudera ODBC driver), which grants me finer-grained control over which specific service principal needs to be authenticated.
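Reading the trace above, both the referral to USERS.DOMAIN.COM and the fallback to SERVER.DOMAIN.COM fail with "Server not found in Kerberos database", so one thing we may try is mapping the daemon hosts' DNS domain to the Hadoop realm explicitly in krb5.conf — a sketch, assuming the impala/ service principals really live in KRB.DOMAIN.COM:

```
[domain_realm]
server.domain.com = KRB.DOMAIN.COM
.server.domain.com = KRB.DOMAIN.COM
```

With that mapping, impala/daemonnode.server.domain.com should be requested from KRB.DOMAIN.COM instead of falling back to a nonexistent SERVER.DOMAIN.COM realm.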
04-19-2018
04:27 AM
Take a look at this: http://community.cloudera.com/t5/Interactive-Short-cycle-SQL/Impala-ODBC-JDBC-bad-performance-rows-fetch-is-very-slow-from-a/m-p/61152#M3751 Good luck.
04-10-2018
08:57 AM
Hi Eric: Thanks for your explanation. Would you be able to point us to the formal licensing statements saying the same? Our corporate approval process would require them before we can approve CDK (and CDS2, for that matter) for production use. Miles
04-04-2018
01:26 AM
1 Kudo
Have you tried running DROP FUNCTION and then RELOAD to pick up the new JAR file, and then re-running CREATE FUNCTION to see if it works?
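For reference, that sequence might look like this — assuming a Hive UDF, with the function, class, and JAR names below hypothetical:

```shell
# Hypothetical names: drop the stale definition, reload the function
# registry, and re-create the function against the replaced JAR.
beeline -u jdbc:hive2://hs2host.example.com:10000 -e "
  DROP FUNCTION IF EXISTS my_udf;
  RELOAD FUNCTION;
  CREATE FUNCTION my_udf AS 'com.example.MyUdf'
    USING JAR 'hdfs:///user/hive/udfs/my_udf.jar';
"
```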
03-22-2018
05:59 AM
It looks like you already have another thread open: http://community.cloudera.com/t5/Batch-SQL-Apache-Hive/Hive-Safety-Valve-configuration-is-not-applied-HiveConf-of-name/td-p/64037 I will follow up there.