Member since: 01-11-2019
Posts: 8
Kudos Received: 0
Solutions: 0
12-03-2019
10:44 PM
Thanks for the update. The new version, Druid 0.17.0, has not been released yet. Is this feature available in Druid 0.16.0?
07-11-2019
03:43 PM
Connected via the LLAP connection string; the INSERT statement throws a vertex error. The broker and coordinator configuration properties are set correctly, yet the vertex error is still thrown. Please help.
0: jdbc:hive2://ipadress:2181> CREATE EXTERNAL TABLE upgdevdb1.hivedruid_test5 (`__time` TIMESTAMP, `course` STRING, `id` STRING, `name` STRING, `year` STRING) Stored BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'; INFO : Compiling command(queryId=hive_20190711193720_8376c2c0-bd72-413d-8c81-033ed55e94d9): CREATE EXTERNAL TABLE upgdevdb1.hivedruid_test5 (`__time` TIMESTAMP, `course` STRING, `id` STRING, `name` STRING, `year` STRING) Stored BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) INFO : Completed compiling command(queryId=hive_20190711193720_8376c2c0-bd72-413d-8c81-033ed55e94d9); Time taken: 0.028 seconds INFO : Executing command(queryId=hive_20190711193720_8376c2c0-bd72-413d-8c81-033ed55e94d9): CREATE EXTERNAL TABLE upgdevdb1.hivedruid_test5 (`__time` TIMESTAMP, `course` STRING, `id` STRING, `name` STRING, `year` STRING) Stored BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' INFO : Starting task [Stage-0:DDL] in serial mode INFO : Completed executing command(queryId=hive_20190711193720_8376c2c0-bd72-413d-8c81-033ed55e94d9); Time taken: 0.129 seconds INFO : OK No rows affected (0.174 seconds) 0: jdbc:hive2://ipaddress:2181> insert into hivedruid_test5 values(1541193507549,'anil','1','test','2016'); INFO : Compiling command(queryId=hive_20190711193731_3b4cf0a3-b3a3-4701-a1f3-5d8839b18ee7): insert into hivedruid_test5 values(1541193507549,'anil','1','test','2016') INFO : Semantic Analysis Completed (retrial = false) INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, type:timestamp, comment:null), FieldSchema(name:_col1, type:string, comment:null), FieldSchema(name:_col2, type:string, comment:null), FieldSchema(name:_col3, type:string, comment:null), FieldSchema(name:_col4, type:string, comment:null)], properties:null) INFO : Completed compiling command(queryId=hive_20190711193731_3b4cf0a3-b3a3-4701-a1f3-5d8839b18ee7); Time taken: 0.405 seconds INFO : Executing command(queryId=hive_20190711193731_3b4cf0a3-b3a3-4701-a1f3-5d8839b18ee7): insert into hivedruid_test5 values(1541193507549,'anil','1','test','2016') INFO : Query ID = hive_20190711193731_3b4cf0a3-b3a3-4701-a1f3-5d8839b18ee7 INFO : Total jobs = 1 INFO : Starting task [Stage-0:DDL] in serial mode INFO : Starting task [Stage-1:DDL] in serial mode INFO : Launching Job 1 out of 1 INFO : Starting task [Stage-2:MAPRED] in serial mode INFO : Subscribed to counters: [] for queryId: hive_20190711193731_3b4cf0a3-b3a3-4701-a1f3-5d8839b18ee7 INFO : Session is already open INFO : Dag name: insert into hivedruid...','1','test','2016') (Stage-2) ERROR : Status: Failed ERROR : Vertex failed, vertexName=Reducer 2, vertexId=vertex_1562408971899_20433_98_01, diagnostics=[Task failed, taskId=task_1562408971899_20433_98_01_000007, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.RuntimeException: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:401) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:249) at
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.createUTC(FormatConfig.java:71) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.<clinit>(FormatConfig.java:23) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:19) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:24) at org.apache.hive.druid.io.druid.jackson.JodaStuff.register(JodaStuff.java:54) at org.apache.hive.druid.io.druid.jackson.DruidDefaultSerializersModule.<init>(DruidDefaultSerializersModule.java:49) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:46) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:35) at org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.<clinit>(DruidStorageHandlerUtils.java:227) at org.apache.hadoop.hive.druid.io.DruidOutputFormat.getHiveRecordWriter(DruidOutputFormat.java:95) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:297) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:282) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:786) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:737) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:903) at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:111) at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:490) at 
org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:392) ... 18 more , errorMessage=Cannot recover from this error:java.lang.RuntimeException: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:401) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:249) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.createUTC(FormatConfig.java:71) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.<clinit>(FormatConfig.java:23) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:19) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:24) at org.apache.hive.druid.io.druid.jackson.JodaStuff.register(JodaStuff.java:54) at org.apache.hive.druid.io.druid.jackson.DruidDefaultSerializersModule.<init>(DruidDefaultSerializersModule.java:49) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:46) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:35) at org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.<clinit>(DruidStorageHandlerUtils.java:227) at org.apache.hadoop.hive.druid.io.DruidOutputFormat.getHiveRecordWriter(DruidOutputFormat.java:95) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:297) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:282) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:786) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:737) at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:903) at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:111) at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:490) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:392) ... 18 more ]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:0, Vertex vertex_1562408971899_20433_98_01 [Reducer 2] killed/failed due to:OWN_TASK_FAILURE] ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0 INFO : org.apache.tez.common.counters.DAGCounter: INFO : NUM_FAILED_TASKS: 1 INFO : NUM_SUCCEEDED_TASKS: 113 INFO : TOTAL_LAUNCHED_TASKS: 114 INFO : AM_CPU_MILLISECONDS: 260 INFO : AM_GC_TIME_MILLIS: 0 INFO : File System Counters: INFO : FILE_BYTES_READ: 0 INFO : FILE_BYTES_WRITTEN: 2780 INFO : FILE_READ_OPS: 0 INFO : FILE_LARGE_READ_OPS: 0 INFO : FILE_WRITE_OPS: 0 INFO : HDFS_BYTES_READ: 0 INFO : HDFS_BYTES_WRITTEN: 0 INFO : HDFS_READ_OPS: 0 INFO : HDFS_LARGE_READ_OPS: 0 INFO : HDFS_WRITE_OPS: 0 INFO : org.apache.tez.common.counters.TaskCounter: INFO : REDUCE_INPUT_GROUPS: 0 INFO : REDUCE_INPUT_RECORDS: 0 INFO : COMBINE_INPUT_RECORDS: 0 INFO : SPILLED_RECORDS: 1 INFO : NUM_SHUFFLED_INPUTS: 0 INFO : NUM_SKIPPED_INPUTS: 112 INFO : NUM_FAILED_SHUFFLE_INPUTS: 0 INFO : MERGED_MAP_OUTPUTS: 0 INFO : TASK_DURATION_MILLIS: 3672 INFO : INPUT_RECORDS_PROCESSED: 4 INFO : INPUT_SPLIT_LENGTH_BYTES: 1 INFO : OUTPUT_RECORDS: 1 INFO : OUTPUT_LARGE_RECORDS: 0 INFO : OUTPUT_BYTES: 38 INFO : OUTPUT_BYTES_WITH_OVERHEAD: 46 INFO : OUTPUT_BYTES_PHYSICAL: 60 INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0 INFO : ADDITIONAL_SPILLS_BYTES_READ: 0 INFO : ADDITIONAL_SPILL_COUNT: 0 INFO : SHUFFLE_CHUNK_COUNT: 1 INFO : SHUFFLE_BYTES: 0 INFO : SHUFFLE_BYTES_DECOMPRESSED: 0 INFO : SHUFFLE_BYTES_TO_MEM: 0 INFO : SHUFFLE_BYTES_TO_DISK: 0 INFO : SHUFFLE_BYTES_DISK_DIRECT: 0 INFO : NUM_MEM_TO_DISK_MERGES: 0 INFO : NUM_DISK_TO_DISK_MERGES: 0 INFO : SHUFFLE_PHASE_TIME: 261 INFO : MERGE_PHASE_TIME: 284 INFO : FIRST_EVENT_RECEIVED: 247 INFO : LAST_EVENT_RECEIVED: 247 INFO : HIVE: INFO : DESERIALIZE_ERRORS: 0 INFO : RECORDS_IN_Map_1: 3 INFO : RECORDS_OUT_1_upgdevdb1.hivedruid_test5: 0 INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 1 INFO : RECORDS_OUT_INTERMEDIATE_Reducer_2: 0 INFO : RECORDS_OUT_OPERATOR_FS_10: 0 INFO : RECORDS_OUT_OPERATOR_MAP_0: 0 INFO : RECORDS_OUT_OPERATOR_RS_7: 1 INFO : RECORDS_OUT_OPERATOR_SEL_1: 1 INFO : RECORDS_OUT_OPERATOR_SEL_3: 1 INFO : RECORDS_OUT_OPERATOR_SEL_6: 1 INFO : RECORDS_OUT_OPERATOR_SEL_9: 0 INFO : RECORDS_OUT_OPERATOR_TS_0: 1 INFO : RECORDS_OUT_OPERATOR_UDTF_2: 1 INFO : Shuffle Errors: INFO : BAD_ID: 0 INFO : CONNECTION: 0 INFO : IO_ERROR: 0 INFO : WRONG_LENGTH: 0 INFO : WRONG_MAP: 0 INFO : WRONG_REDUCE: 0 INFO : Shuffle Errors_Reducer_2_INPUT_Map_1: INFO : BAD_ID: 0 INFO : CONNECTION: 0 INFO : IO_ERROR: 0 INFO : WRONG_LENGTH: 0 INFO : WRONG_MAP: 0 INFO : WRONG_REDUCE: 0 INFO : TaskCounter_Map_1_INPUT__dummy_table: INFO : INPUT_RECORDS_PROCESSED: 4 INFO : INPUT_SPLIT_LENGTH_BYTES: 1 INFO : TaskCounter_Map_1_OUTPUT_Reducer_2: INFO : ADDITIONAL_SPILLS_BYTES_READ: 0 INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0 INFO 
: ADDITIONAL_SPILL_COUNT: 0 INFO : OUTPUT_BYTES: 38 INFO : OUTPUT_BYTES_PHYSICAL: 60 INFO : OUTPUT_BYTES_WITH_OVERHEAD: 46 INFO : OUTPUT_LARGE_RECORDS: 0 INFO : OUTPUT_RECORDS: 1 INFO : SHUFFLE_CHUNK_COUNT: 1 INFO : SPILLED_RECORDS: 1 INFO : TaskCounter_Reducer_2_INPUT_Map_1: INFO : ADDITIONAL_SPILLS_BYTES_READ: 0 INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0 INFO : COMBINE_INPUT_RECORDS: 0 INFO : FIRST_EVENT_RECEIVED: 247 INFO : LAST_EVENT_RECEIVED: 247 INFO : MERGED_MAP_OUTPUTS: 0 INFO : MERGE_PHASE_TIME: 284 INFO : NUM_DISK_TO_DISK_MERGES: 0 INFO : NUM_FAILED_SHUFFLE_INPUTS: 0 INFO : NUM_MEM_TO_DISK_MERGES: 0 INFO : NUM_SHUFFLED_INPUTS: 0 INFO : NUM_SKIPPED_INPUTS: 112 INFO : REDUCE_INPUT_GROUPS: 0 INFO : REDUCE_INPUT_RECORDS: 0 INFO : SHUFFLE_BYTES: 0 INFO : SHUFFLE_BYTES_DECOMPRESSED: 0 INFO : SHUFFLE_BYTES_DISK_DIRECT: 0 INFO : SHUFFLE_BYTES_TO_DISK: 0 INFO : SHUFFLE_BYTES_TO_MEM: 0 INFO : SHUFFLE_PHASE_TIME: 261 INFO : SPILLED_RECORDS: 0 INFO : TaskCounter_Reducer_2_OUTPUT_out_Reducer_2: INFO : OUTPUT_RECORDS: 0 INFO : org.apache.hadoop.hive.llap.counters.LlapWmCounters: INFO : GUARANTEED_QUEUED_NS: 0 INFO : GUARANTEED_RUNNING_NS: 0 INFO : SPECULATIVE_QUEUED_NS: 5345368 INFO : SPECULATIVE_RUNNING_NS: 1908978794 ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Reducer 2, vertexId=vertex_1562408971899_20433_98_01, diagnostics=[Task failed, taskId=task_1562408971899_20433_98_01_000007, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.RuntimeException: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:401) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:249) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at 
org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.createUTC(FormatConfig.java:71) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.<clinit>(FormatConfig.java:23) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:19) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:24) at org.apache.hive.druid.io.druid.jackson.JodaStuff.register(JodaStuff.java:54) at org.apache.hive.druid.io.druid.jackson.DruidDefaultSerializersModule.<init>(DruidDefaultSerializersModule.java:49) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:46) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:35) at org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.<clinit>(DruidStorageHandlerUtils.java:227) at org.apache.hadoop.hive.druid.io.DruidOutputFormat.getHiveRecordWriter(DruidOutputFormat.java:95) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:297) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:282) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:786) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:737) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:903) at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:111) at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:490) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:392) ... 
18 more , errorMessage=Cannot recover from this error:java.lang.RuntimeException: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:401) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:249) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.createUTC(FormatConfig.java:71) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.<clinit>(FormatConfig.java:23) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:19) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:24) at org.apache.hive.druid.io.druid.jackson.JodaStuff.register(JodaStuff.java:54) at org.apache.hive.druid.io.druid.jackson.DruidDefaultSerializersModule.<init>(DruidDefaultSerializersModule.java:49) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:46) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:35) at org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.<clinit>(DruidStorageHandlerUtils.java:227) at org.apache.hadoop.hive.druid.io.DruidOutputFormat.getHiveRecordWriter(DruidOutputFormat.java:95) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:297) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:282) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:786) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:737) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:903) at 
org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:111) at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:490) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:392) ... 18 more ]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:0, Vertex vertex_1562408971899_20433_98_01 [Reducer 2] killed/failed due to:OWN_TASK_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0 INFO : Completed executing command(queryId=hive_20190711193731_3b4cf0a3-b3a3-4701-a1f3-5d8839b18ee7); Time taken: 0.596 seconds Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Reducer 2, vertexId=vertex_1562408971899_20433_98_01, diagnostics=[Task failed, taskId=task_1562408971899_20433_98_01_000007, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.RuntimeException: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:401) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:249) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.createUTC(FormatConfig.java:71) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.<clinit>(FormatConfig.java:23) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:19) at 
org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:24) at org.apache.hive.druid.io.druid.jackson.JodaStuff.register(JodaStuff.java:54) at org.apache.hive.druid.io.druid.jackson.DruidDefaultSerializersModule.<init>(DruidDefaultSerializersModule.java:49) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:46) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:35) at org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.<clinit>(DruidStorageHandlerUtils.java:227) at org.apache.hadoop.hive.druid.io.DruidOutputFormat.getHiveRecordWriter(DruidOutputFormat.java:95) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:297) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:282) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:786) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:737) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:903) at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:111) at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:490) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:392) ... 18 more , errorMessage=Cannot recover from this error:java.lang.RuntimeException: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:401) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:249) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodError: org.joda.time.format.DateTimeFormatter.withZoneUTC()Lorg/joda/time/format/DateTimeFormatter; at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.createUTC(FormatConfig.java:71) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.cfg.FormatConfig.<clinit>(FormatConfig.java:23) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:19) at org.apache.hive.druid.com.fasterxml.jackson.datatype.joda.deser.PeriodDeserializer.<init>(PeriodDeserializer.java:24) at org.apache.hive.druid.io.druid.jackson.JodaStuff.register(JodaStuff.java:54) at org.apache.hive.druid.io.druid.jackson.DruidDefaultSerializersModule.<init>(DruidDefaultSerializersModule.java:49) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:46) at org.apache.hive.druid.io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:35) at org.apache.hadoop.hive.druid.DruidStorageHandlerUtils.<clinit>(DruidStorageHandlerUtils.java:227) at org.apache.hadoop.hive.druid.io.DruidOutputFormat.getHiveRecordWriter(DruidOutputFormat.java:95) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:297) at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:282) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:786) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:737) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:903) at org.apache.hadoop.hive.ql.exec.vector.VectorFileSinkOperator.process(VectorFileSinkOperator.java:111) at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:158) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.processVectorGroup(ReduceRecordSource.java:490) at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecordVector(ReduceRecordSource.java:392) ... 18 more ]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:0, Vertex vertex_1562408971899_20433_98_01 [Reducer 2] killed/failed due to:OWN_TASK_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0 (state=08S01,code=2)
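The NoSuchMethodError on org.joda.time.format.DateTimeFormatter.withZoneUTC() usually indicates that an older joda-time jar is being picked up by the Tez/LLAP tasks ahead of the version the Druid storage handler was built against, rather than a broker/coordinator configuration problem. As a starting point, a minimal diagnostic sketch is shown below; the /usr/hdp/current/... paths are assumptions based on a typical HDP 3.x layout and may differ on your cluster.
# Hypothetical sketch: list every joda-time jar that Hive, Tez, or LLAP could load.
# Paths assume a standard HDP 3.x install; adjust to your environment.
for d in /usr/hdp/current/hive-server2/lib \
         /usr/hdp/current/hive-server2/auxlib \
         /usr/hdp/current/tez-client/lib \
         /usr/hdp/current/hadoop-client/lib; do
  [ -d "$d" ] && find "$d" -name 'joda-time-*.jar' -exec ls -l {} \;
done
# DateTimeFormatter.withZoneUTC() was added in joda-time 2.0, so any 1.x jar found
# above would explain the NoSuchMethodError and would need to be removed or replaced
# so that only a single, recent joda-time version remains on the task classpath.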
02-11-2019
02:31 PM
1: jdbc:hive2://devhcdl2.azure.ril.com:2181> CREATE EXTERNAL TABLE druid_5 STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' TBLPROPERTIES ("druid.datasource" ="meetup10");
INFO : Compiling command(queryId=hive_20190211131220_f397f9c3-6a76-4eae-b4ab-7344015d381e): CREATE EXTERNAL TABLE druid_5 STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' TBLPROPERTIES ("druid.datasource" ="meetup10")
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20190211131220_f397f9c3-6a76-4eae-b4ab-7344015d381e); Time taken: 0.053 seconds
INFO : Executing command(queryId=hive_20190211131220_f397f9c3-6a76-4eae-b4ab-7344015d381e): CREATE EXTERNAL TABLE druid_5 STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' TBLPROPERTIES ("druid.datasource" ="meetup10")
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException Connected to Druid but could not retrieve datasource information)
INFO : Completed executing command(queryId=hive_20190211131220_f397f9c3-6a76-4eae-b4ab-7344015d381e); Time taken: 0.031 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException Connected to Druid but could not retrieve datasource information) (state=08S01,code=1)
Please help me with this.
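The SerDeException "Connected to Druid but could not retrieve datasource information" generally means Hive reached Druid but could not look up the datasource named in TBLPROPERTIES ("meetup10" here), either because that datasource does not exist yet or because Hive is pointed at the wrong broker/coordinator address. A rough sketch of two checks follows; the <coordinator-host>/<broker-host> placeholders and the default ports 8081/8082 are assumptions to be replaced with the values from your Druid service configuration.
# Hypothetical checks; substitute your actual Druid hosts for the placeholders.
# 1. Confirm that the datasource really exists (Druid Coordinator API, default port 8081):
curl -s http://<coordinator-host>:8081/druid/coordinator/v1/datasources
# 2. Make sure Hive points at the correct broker/coordinator before running the DDL:
cat > /tmp/druid_5.sql <<'SQL'
SET hive.druid.broker.address.default=<broker-host>:8082;
SET hive.druid.coordinator.address.default=<coordinator-host>:8081;
CREATE EXTERNAL TABLE druid_5
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "meetup10");
SQL
beeline -u "jdbc:hive2://devhcdl2.azure.ril.com:2181/" -f /tmp/druid_5.sql
If the curl call does not list meetup10, the external table cannot be mapped to it until that datasource has actually been ingested into Druid.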
Labels:
- Apache Hive
02-08-2019
02:58 PM
Hi Team, we are not able to restart our cluster after changing some configuration properties. Please find the logs below. Can anyone help with this?
stderr: /var/lib/ambari-agent/data/errors-9321.txt
2019-01-31 09:50:55,850 - Could not determine stack version for component hbase-master by calling '/usr/bin/hdp-select status hbase-master > /tmp/tmpZhqa5e'. Return Code: 1, Output: .
2019-01-31 09:50:55,966 - Could not determine stack version for component hbase-master by calling '/usr/bin/hdp-select status hbase-master > /tmp/tmpxw6jiA'. Return Code: 1, Output: .
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 992, in restart
self.status(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HBASE/package/scripts/hbase_master.py", line 106, in status
check_process_status(status_params.hbase_master_pid_file)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 43, in check_process_status
raise ComponentIsNotRunning()
ComponentIsNotRunning
The above exception was the cause of the following exception:
2019-01-31 09:50:59,471 - Could not determine stack version for component hbase-master by calling '/usr/bin/hdp-select status hbase-master > /tmp/tmpmmubZi'. Return Code: 1, Output: .
2019-01-31 09:50:59,503 - The 'hbase-master' component did not advertise a version. This may indicate a problem with the component packaging.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HBASE/package/scripts/hbase_master.py", line 170, in <module>
HbaseMaster().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 1003, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HBASE/package/scripts/hbase_master.py", line 87, in start
self.configure(env) # for security
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HBASE/package/scripts/hbase_master.py", line 45, in configure
hbase(name='master')
File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HBASE/package/scripts/hbase.py", line 224, in hbase
owner=params.hbase_user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 672, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 669, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 360, in action_delayed
main_resource.kinit()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 701, in kinit
user=user
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hcdl_dev@DEVHCDLRIL.COM' returned 1. kinit: Pre-authentication failed: Permission denied while getting initial credentials
stdout: /var/lib/ambari-agent/data/output-9321.txt
2019-01-31 09:50:55,039 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187
2019-01-31 09:50:55,060 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2019-01-31 09:50:55,376 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187
2019-01-31 09:50:55,382 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2019-01-31 09:50:55,385 - Group['livy'] {}
2019-01-31 09:50:55,386 - Group['spark'] {}
2019-01-31 09:50:55,386 - Group['ranger'] {}
2019-01-31 09:50:55,386 - Group['hdfs'] {}
2019-01-31 09:50:55,387 - Group['zeppelin'] {}
2019-01-31 09:50:55,387 - Group['hadoop'] {}
2019-01-31 09:50:55,387 - Group['users'] {}
2019-01-31 09:50:55,387 - Group['knox'] {}
2019-01-31 09:50:55,388 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,390 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,392 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,393 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,395 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,396 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-31 09:50:55,398 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,399 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,400 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2019-01-31 09:50:55,402 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-31 09:50:55,403 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2019-01-31 09:50:55,405 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2019-01-31 09:50:55,406 - User['druid'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,408 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2019-01-31 09:50:55,409 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-01-31 09:50:55,410 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,412 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-01-31 09:50:55,413 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,415 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,416 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,418 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-31 09:50:55,419 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'knox'], 'uid': None}
2019-01-31 09:50:55,420 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-31 09:50:55,422 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-01-31 09:50:55,428 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-01-31 09:50:55,429 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-01-31 09:50:55,430 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-31 09:50:55,431 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-31 09:50:55,432 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-01-31 09:50:55,441 - call returned (0, '1016')
2019-01-31 09:50:55,442 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1016'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-01-31 09:50:55,448 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1016'] due to not_if
2019-01-31 09:50:55,449 - Group['hdfs'] {}
2019-01-31 09:50:55,449 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-01-31 09:50:55,450 - FS Type: HDFS
2019-01-31 09:50:55,450 - Directory['/etc/hadoop'] {'mode': 0755}
2019-01-31 09:50:55,466 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2019-01-31 09:50:55,467 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-01-31 09:50:55,490 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-01-31 09:50:55,499 - Skipping Execute[('setenforce', '0')] due to not_if
2019-01-31 09:50:55,499 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-01-31 09:50:55,502 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-01-31 09:50:55,502 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2019-01-31 09:50:55,503 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-01-31 09:50:55,507 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2019-01-31 09:50:55,509 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2019-01-31 09:50:55,515 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-01-31 09:50:55,527 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2019-01-31 09:50:55,527 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-01-31 09:50:55,528 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-01-31 09:50:55,533 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-01-31 09:50:55,537 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-01-31 09:50:55,542 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
2019-01-31 09:50:55,773 - The unlimited key JCE policy is required, and appears to have been installed.
2019-01-31 09:50:55,850 - Could not determine stack version for component hbase-master by calling '/usr/bin/hdp-select status hbase-master > /tmp/tmpZhqa5e'. Return Code: 1, Output: .
2019-01-31 09:50:55,851 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-01-31 09:50:55,876 - call returned (0, '2.6.4.0-91\n3.0.1.0-187')
2019-01-31 09:50:55,966 - Could not determine stack version for component hbase-master by calling '/usr/bin/hdp-select status hbase-master > /tmp/tmpxw6jiA'. Return Code: 1, Output: .
2019-01-31 09:50:55,967 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-01-31 09:50:55,993 - call returned (0, '2.6.4.0-91\n3.0.1.0-187')
2019-01-31 09:50:56,355 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=3.0.1.0-187 -> 3.0.1.0-187
2019-01-31 09:50:56,369 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2019-01-31 09:50:56,374 - checked_call['hostid'] {}
2019-01-31 09:50:56,379 - checked_call returned (0, '1a0a8925')
2019-01-31 09:50:56,388 - Execute['/usr/hdp/current/hbase-master/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-master/conf stop master'] {'only_if': 'ambari-sudo.sh -H -E test -f /var/run/hbase/hbase-hbase-master.pid && ps -p `ambari-sudo.sh -H -E cat /var/run/hbase/hbase-hbase-master.pid` >/dev/null 2>&1', 'on_timeout': '! ( ambari-sudo.sh -H -E test -f /var/run/hbase/hbase-hbase-master.pid && ps -p `ambari-sudo.sh -H -E cat /var/run/hbase/hbase-hbase-master.pid` >/dev/null 2>&1 ) || ambari-sudo.sh -H -E kill -9 `ambari-sudo.sh -H -E cat /var/run/hbase/hbase-hbase-master.pid`', 'timeout': 30, 'user': 'hbase'}
2019-01-31 09:50:58,398 - File['/var/run/hbase/hbase-hbase-master.pid'] {'action': ['delete']}
2019-01-31 09:50:58,399 - Pid file /var/run/hbase/hbase-hbase-master.pid is empty or does not exist
2019-01-31 09:50:58,406 - Directory['/etc/hbase'] {'mode': 0755}
2019-01-31 09:50:58,406 - Directory['/usr/hdp/current/hbase-master/conf'] {'owner': 'hbase', 'group': 'hadoop', 'create_parents': True}
2019-01-31 09:50:58,407 - Directory['/tmp'] {'create_parents': True, 'mode': 0777}
2019-01-31 09:50:58,407 - Changing permission for /tmp from 1777 to 777
2019-01-31 09:50:58,408 - Directory['/tmp'] {'create_parents': True, 'cd_access': 'a'}
2019-01-31 09:50:58,409 - Execute[('chmod', '1777', u'/tmp')] {'sudo': True}
2019-01-31 09:50:58,419 - XmlConfig['hbase-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-master/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-01-31 09:50:58,436 - Generating config: /usr/hdp/current/hbase-master/conf/hbase-site.xml
2019-01-31 09:50:58,436 - File['/usr/hdp/current/hbase-master/conf/hbase-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-01-31 09:50:58,489 - File['/usr/hdp/current/hbase-master/conf/hdfs-site.xml'] {'action': ['delete']}
2019-01-31 09:50:58,489 - File['/usr/hdp/current/hbase-master/conf/core-site.xml'] {'action': ['delete']}
2019-01-31 09:50:58,489 - XmlConfig['hbase-policy.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-master/conf', 'configuration_attributes': {}, 'configurations': {u'security.admin.protocol.acl': u'*', u'security.masterregion.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}}
2019-01-31 09:50:58,498 - Generating config: /usr/hdp/current/hbase-master/conf/hbase-policy.xml
2019-01-31 09:50:58,498 - File['/usr/hdp/current/hbase-master/conf/hbase-policy.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-01-31 09:50:58,508 - File['/usr/hdp/current/hbase-master/conf/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'hbase', 'group': 'hadoop'}
2019-01-31 09:50:58,509 - Writing File['/usr/hdp/current/hbase-master/conf/hbase-env.sh'] because contents don't match
2019-01-31 09:50:58,509 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-01-31 09:50:58,512 - File['/etc/security/limits.d/hbase.conf'] {'content': Template('hbase.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-01-31 09:50:58,513 - TemplateConfig['/usr/hdp/current/hbase-master/conf/hadoop-metrics2-hbase.properties'] {'owner': 'hbase', 'template_tag': 'GANGLIA-MASTER'}
2019-01-31 09:50:58,521 - File['/usr/hdp/current/hbase-master/conf/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties-GANGLIA-MASTER.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2019-01-31 09:50:58,522 - Writing File['/usr/hdp/current/hbase-master/conf/hadoop-metrics2-hbase.properties'] because contents don't match
2019-01-31 09:50:58,522 - TemplateConfig['/usr/hdp/current/hbase-master/conf/regionservers'] {'owner': 'hbase', 'template_tag': None}
2019-01-31 09:50:58,524 - File['/usr/hdp/current/hbase-master/conf/regionservers'] {'content': Template('regionservers.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2019-01-31 09:50:58,525 - TemplateConfig['/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf'] {'owner': 'hbase', 'template_tag': None}
2019-01-31 09:50:58,527 - File['/usr/hdp/current/hbase-master/conf/hbase_master_jaas.conf'] {'content': Template('hbase_master_jaas.conf.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2019-01-31 09:50:58,528 - Directory['/var/run/hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-01-31 09:50:58,528 - Directory['/var/log/hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-01-31 09:50:58,532 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-01-31 09:50:58,533 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hbase.json
2019-01-31 09:50:58,533 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hbase.json'] {'content': Template('input.config-hbase.json.j2'), 'mode': 0644}
2019-01-31 09:50:58,537 - File['/usr/hdp/current/hbase-master/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hbase', 'group': 'hadoop', 'mode': 0644}
2019-01-31 09:50:58,537 - HdfsResource['/apps/hbase/data'] {'security_enabled': True, 'hadoop_bin_dir': '/usr/hdp/3.0.1.0-187/hadoop/bin', 'keytab': '/etc/security/keytabs/hdfs.headless.keytab', 'dfs_type': 'HDFS', 'default_fs': 'hdfs://devcluster', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'hdfs-hcdl_dev@DEVHCDLRIL.COM', 'user': 'hdfs', 'owner': 'hbase', 'hadoop_conf_dir': '/usr/hdp/3.0.1.0-187/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/warehouse/tablespace/managed/hive', u'/warehouse/tablespace/external/hive', u'/app-logs', u'/tmp']}
2019-01-31 09:50:58,539 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hcdl_dev@DEVHCDLRIL.COM'] {'user': 'hdfs'}
2019-01-31 09:50:59,471 - Could not determine stack version for component hbase-master by calling '/usr/bin/hdp-select status hbase-master > /tmp/tmpmmubZi'. Return Code: 1, Output: .
2019-01-31 09:50:59,472 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-01-31 09:50:59,502 - call returned (0, '2.6.4.0-91\n3.0.1.0-187')
2019-01-31 09:50:59,503 - The 'hbase-master' component did not advertise a version. This may indicate a problem with the component packaging.
Command failed after 1 tries
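The root cause in this log is the kinit step: '/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hcdl_dev@DEVHCDLRIL.COM' fails with "Pre-authentication failed: Permission denied while getting initial credentials", which on a Kerberized cluster usually means the headless keytab on this host is stale (its key no longer matches the KDC) or unreadable, so the HBase Master start never gets past creating /apps/hbase/data. A minimal sketch for verifying this by hand on the affected host follows; it only reuses the keytab and principal already shown in the log, and the suggestion to regenerate keytabs through Ambari is an assumption about how this cluster is managed.
# Run on the host where the HBase Master restart failed.
# 1. Confirm the keytab exists and is readable by the hdfs user:
ls -l /etc/security/keytabs/hdfs.headless.keytab
# 2. Inspect the principals and key version numbers (kvno) stored in the keytab:
klist -kt /etc/security/keytabs/hdfs.headless.keytab
# 3. Repeat the exact kinit that Ambari runs; if it still fails with
#    "Pre-authentication failed", the keytab no longer matches the KDC and needs
#    to be regenerated (for example via Ambari's Regenerate Keytabs action):
su - hdfs -c '/usr/bin/kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hcdl_dev@DEVHCDLRIL.COM'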
- Tags:
- Hadoop Core
- HBase
- Hive
Labels:
- Apache HBase
- Apache Hive
01-21-2019
01:41 PM
Thanks for the update. I am using HDP 3.0.1; please advise for this cluster environment. Thanks.
01-20-2019
05:24 PM
Can anyone help me with this issue? What steps do I need to check?
01-20-2019
05:23 PM
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'usermod -G hadoop -g hadoop superset' returned 6. usermod: user 'superset' does not exist in /etc/passwd
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-9135.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-9135.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/hook.py', 'START', '/var/lib/ambari-agent/data/command-9135.json', '/var/lib/ambari-agent/cache/stack-hooks/before-START', '/var/lib/ambari-agent/data/structured-out-9135.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2019-01-20 10:56:41,805 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-20 10:56:41,806 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-20 10:56:41,808 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-20 10:56:41,809 - Modifying user superset
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-9135.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-9135.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-START/scripts/hook.py', 'START', '/var/lib/ambari-agent/data/command-9135.json', '/var/lib/ambari-agent/cache/stack-hooks/before-START', '/var/lib/ambari-agent/data/structured-out-9135.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
Command failed after 1 tries
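The failing step here is 'usermod -G hadoop -g hadoop superset', which returns 6 because the superset user does not exist in /etc/passwd on this host, so both the before-ANY and before-START hooks abort. A minimal sketch of the manual check and fix one might apply on the affected node before retrying is below; the useradd invocation is an assumption and should mirror how the superset account is defined on the nodes where it already exists.
# On the node where the hook failed:
# 1. Check whether the superset account is present:
getent passwd superset || echo "superset user is missing on this host"
# 2. If it is missing, create it with the primary group the Ambari hook expects,
#    then retry the component start from Ambari:
useradd -g hadoop superset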
Labels:
- Apache Ambari