Member since: 01-23-2018
Posts: 70
Kudos Received: 3
Solutions: 6
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1039 | 01-16-2019 10:05 AM |
 | 1171 | 10-23-2018 03:40 PM |
 | 2892 | 10-19-2018 10:30 AM |
 | 620 | 10-16-2018 12:58 PM |
 | 2226 | 05-28-2018 06:34 AM |
10-29-2019
04:48 AM
Do you have any news about this problem? I have the same one.
10-11-2019
08:12 AM
What does this error mean?
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.util.BeanUtil.okNameForGetter(Lcom/fasterxml/jackson/databind/introspect/AnnotatedMethod;Z)Ljava/lang/String;
10-11-2019
06:58 AM
Hello.
Can anyone help me understand why my Sqoop job doesn't work?
It fails with this error:
2019-10-11T14:36:05,566 ERROR [Job ATS Event Dispatcher] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Exception while publishing configs on JOB_SUBMITTED Event for the job : job_1570791277869_0002
org.apache.hadoop.yarn.exceptions.YarnException: Failed while publishing entity
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher.dispatchEntities(TimelineV2ClientImpl.java:548) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putEntities(TimelineV2ClientImpl.java:149) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.publishConfigsOnJobSubmittedEvent(JobHistoryEventHandler.java:1254) [hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.processEventForNewTimelineService(JobHistoryEventHandler.java:1414) [hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleTimelineEvent(JobHistoryEventHandler.java:742) [hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.access$1200(JobHistoryEventHandler.java:93) [hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(JobHistoryEventHandler.java:1795) [hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$ForwardingEventHandler.handle(JobHistoryEventHandler.java:1791) [hadoop-mapreduce-client-app-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197) [hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126) [hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_222]
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.util.BeanUtil.okNameForGetter(Lcom/fasterxml/jackson/databind/introspect/AnnotatedMethod;Z)Ljava/lang/String;
at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.findNameForSerialization(JaxbAnnotationIntrospector.java:937) ~[jackson-module-jaxb-annotations-2.9.5.jar:2.9.5]
at com.fasterxml.jackson.databind.introspect.POJOPropertiesCollector._addGetterMethod(POJOPropertiesCollector.java:519) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.introspect.POJOPropertiesCollector._addMethods(POJOPropertiesCollector.java:482) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.introspect.POJOPropertiesCollector.collect(POJOPropertiesCollector.java:234) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.introspect.BasicClassIntrospector.collectProperties(BasicClassIntrospector.java:142) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.introspect.BasicClassIntrospector.forSerialization(BasicClassIntrospector.java:68) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.introspect.BasicClassIntrospector.forSerialization(BasicClassIntrospector.java:11) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.SerializationConfig.introspect(SerializationConfig.java:530) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.ser.BeanSerializerFactory.createSerializer(BeanSerializerFactory.java:133) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.SerializerProvider._createUntypedSerializer(SerializerProvider.java:1077) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.SerializerProvider._createAndCacheUntypedSerializer(SerializerProvider.java:1037) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.SerializerProvider.findValueSerializer(SerializerProvider.java:445) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.SerializerProvider.findTypedValueSerializer(SerializerProvider.java:599) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:93) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:610) ~[kite-data-mapreduce-1.0.0.3.1.0.0-78.jar:?]
at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:625) ~[jackson-jaxrs-base-2.9.5.jar:2.9.5]
at com.sun.jersey.api.client.RequestWriter.writeRequestEntity(RequestWriter.java:300) ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:217) ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153) ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.Client.handle(Client.java:652) ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74) ~[jersey-client-1.19.jar:1.19]
at com.sun.jersey.api.client.WebResource$Builder.put(WebResource.java:539) ~[jersey-client-1.19.jar:1.19]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.doPutObjects(TimelineV2ClientImpl.java:291) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.access$000(TimelineV2ClientImpl.java:66) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$1.run(TimelineV2ClientImpl.java:302) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$1.run(TimelineV2ClientImpl.java:299) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_222]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_222]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) ~[hadoop-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(TimelineV2ClientImpl.java:299) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl.putObjects(TimelineV2ClientImpl.java:251) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$EntitiesHolder$1.call(TimelineV2ClientImpl.java:374) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$EntitiesHolder$1.call(TimelineV2ClientImpl.java:367) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_222]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher$1.publishWithoutBlockingOnQueue(TimelineV2ClientImpl.java:495) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at org.apache.hadoop.yarn.client.api.impl.TimelineV2ClientImpl$TimelineEntityDispatcher$1.run(TimelineV2ClientImpl.java:433) ~[hadoop-yarn-common-3.1.1.3.1.0.0-78.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_222]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_222]
... 1 more
Labels:
- Apache Sqoop
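A NoSuchMethodError like the one above usually means two incompatible Jackson versions are on the classpath; the trace itself shows jackson-module-jaxb-annotations 2.9.5 next to a jackson-databind bundled inside kite-data-mapreduce. A minimal diagnostic sketch, assuming an HDP-style layout under /usr/hdp, that lists every jar shipping the BeanUtil class so the conflicting copies can be located:

# A diagnostic sketch, not a verified fix: find all jars bundling
# jackson-databind's BeanUtil so the duplicate versions can be compared.
find /usr/hdp -name '*.jar' 2>/dev/null | while read -r j; do
  if unzip -l "$j" 2>/dev/null | grep -q 'com/fasterxml/jackson/databind/util/BeanUtil.class'; then
    echo "$j"
  fi
done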
07-31-2019
11:22 AM
Thanks, it solved the problem, but how did you find the answer? I would never have guessed it myself.
04-09-2019
05:18 PM
It works perfectly, thanks.
04-09-2019
10:32 AM
Hello, I am trying to configure Ranger HDFS audit to a local file following this instruction: https://cwiki.apache.org/confluence/display/RANGER/Ranger+0.5+Audit+Configuration
My settings:
xasecure.audit.log4j.is.enabled=true
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=xaaudit
ranger.logger=INFO,console,RANGERAUDIT
log4j.logger.xaaudit=${ranger.logger}
log4j.appender.RANGERAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit.log
log4j.appender.RANGERAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RANGERAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %L %m%n
log4j.appender.RANGERAUDIT.DatePattern=.yyyy-MM-dd
I get an error in the console whenever I run any command as a user other than hdfs. For example: hdfs dfs -ls /
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /tmp/ranger_hdfs_audit.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:790)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.fs.FsShell.<clinit>(FsShell.java:43)
log4j:ERROR Either File or DatePattern options are not set for appender [RANGERAUDIT].
Labels:
- Apache Hadoop
- Apache Ranger
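The Permission denied error above fits a shared appender file: the first user to run an HDFS client creates /tmp/ranger_hdfs_audit.log, and every other user is then refused write access to it. A hedged workaround sketch, assuming log4j 1.x expands JVM system properties such as ${user.name} in property values (its OptionConverter substitution supports this), is to make the audit file per-user:

# A sketch, not an official Ranger recommendation: one audit file per OS
# user, so client processes never collide on a file another user owns.
log4j.appender.RANGERAUDIT.File=/tmp/ranger_hdfs_audit_${user.name}.log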
03-26-2019
11:54 AM
How can I deny use of rm with -skipTrash on HDFS? I want to block any attempt to delete a file with the -skipTrash option.
Labels:
- Apache Hadoop
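As far as I know, HDFS has no built-in switch that denies -skipTrash, so blocking it tends to mean intercepting the client. A hedged wrapper sketch, assuming the real binary has been renamed to /usr/bin/hdfs.real and this script installed in its place; it is a client-side guard only, not server-side enforcement:

#!/usr/bin/env bash
# Reject any hdfs invocation carrying -skipTrash; pass everything else through.
for arg in "$@"; do
  if [ "$arg" = "-skipTrash" ]; then
    echo "deleting with -skipTrash is not allowed on this cluster" >&2
    exit 1
  fi
done
exec /usr/bin/hdfs.real "$@"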
03-12-2019
01:55 PM
I have a database replica and a physical replication slot. While the replica was down, Postgres kept retaining WAL segments, and it kept them for a whole month! When I dropped the replication slot and cleared the old WAL segments, CPU utilization returned to normal.
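For anyone hitting the same thing, a minimal sketch of how to spot and drop a slot that is pinning WAL; pg_replication_slots and pg_drop_replication_slot are standard PostgreSQL 9.4+, and the slot name below is a placeholder:

# List slots: an inactive slot with an old restart_lsn is holding WAL back.
psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"
# Drop the stale slot once you are sure nothing still needs it.
psql -c "SELECT pg_drop_replication_slot('stale_slot');"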
03-12-2019
12:03 PM
@Jay Kumar SenSharma I already cleaned the Ambari database with --from-date 2019-01-01; its size is now 1200 MB. But I have the oozie, ranger, hive, and airflow databases on the same Postgres cluster. Should I also clean historical data from those databases? The HDP cluster is two years old. The biggest database now is oozie at 12 GB.
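For reference, a sketch of the purge invocation meant above; ambari-server db-purge-history exists from Ambari 2.6 onward, the cluster name is taken from this thread, and Ambari's docs say to stop the server first:

ambari-server stop
ambari-server db-purge-history --cluster-name DataLake --from-date 2019-01-01
ambari-server start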
03-12-2019
09:33 AM
@Jay Kumar SenSharma Thank you for your answer. I have these request timings:
time curl -i -u admin:* -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/clusters/DataLake
real 0m36.270s
user 0m0.020s
sys 0m0.184s
time curl -i -u admin:* -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/clusters/DataLake?fields=Clusters/desired_configs
real 0m0.190s
user 0m0.003s
sys 0m0.013s
time curl -i -u admin:* -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/clusters/DataLake?fields=Clusters/health_report,Clusters/total_hosts,alerts_summary_hosts
real 0m30.498s
user 0m0.002s
sys 0m0.011s
I think something is wrong with the database response time, isn't it?
03-12-2019
08:44 AM
Hello, can anyone explain how to improve the speed of the REST API? Last month the CPU load began to grow on the Ambari server, and the alert "Ambari server performance rest api Critical 25,000ms" appeared. I have 4 CPU cores and 32 GB RAM on the Ambari host. The cluster contains 25 nodes.
Tags:
- ambari-server
Labels:
- Apache Ambari
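A hedged first diagnostic for this kind of slowdown: see which tables in the ambari database carry the bulk of the data, since history tables growing over years is a common culprit; the database name, user, and direct psql access are assumptions from this thread:

psql -U ambari -d ambari -c "
  SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"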
02-28-2019
05:23 PM
Thank you for the very helpful article.
02-05-2019
01:51 PM
@Geoffrey Shelton Okot I tried to install Airflow with the airflow-mpack, but it contains many bugs. I installed Airflow and generated the keytab myself, and it works fine. I found some information about this problem in the Airflow issue tracker: https://issues.apache.org/jira/browse/AIRFLOW-3486 In fact, lineage transmission isn't working right now.
02-04-2019
04:11 PM
@Geoffrey Shelton Okot Thank you, you are right, the problem really was in ZooKeeper's ACLs. I copied everything in the "ZooKeeper directory" from the Test cluster to the Dev cluster and that helped. But I don't know exactly which permission affected it. Is there some way to list all ACL permissions in ZooKeeper? I would like to compare all the ACLs from both clusters.
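zkCli has no single "dump all ACLs" command as far as I know, so a small recursive walk can produce a diffable listing. A fragile sketch only: the zkCli.sh path is an assumption about the install, and parsing zkCli's bracketed "ls" output is best-effort:

ZKCLI=/usr/hdp/current/zookeeper-client/bin/zkCli.sh
dump_acls() {
  local path="$1"
  echo "== $path =="
  "$ZKCLI" -server localhost:2181 getAcl "$path" 2>/dev/null
  # zkCli prints children as "[a, b, c]"; strip brackets and commas
  for child in $("$ZKCLI" -server localhost:2181 ls "$path" 2>/dev/null | tail -1 | tr -d '[],'); do
    if [ "$path" = "/" ]; then dump_acls "/$child"; else dump_acls "$path/$child"; fi
  done
}
dump_acls / > acls-dev.txt
# Run on both clusters, then: diff acls-dev.txt acls-test.txt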
02-04-2019
09:05 AM
Did anyone try to configure Airflow to send lineage metadata to Atlas? I tried to configure it using this instruction: https://airflow.apache.org/lineage.html But in the Atlas web UI I see nothing about Airflow when I search by type. I have a kerberized HDP 2.6.5 cluster and Airflow 1.10.2.
Labels:
- Apache Atlas
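For comparison, the airflow.cfg sections the linked lineage instruction relies on look roughly like this; the backend class path is from the Airflow 1.10 docs, while the host and credentials below are placeholders:

[lineage]
backend = airflow.lineage.backend.atlas.AtlasBackend

[atlas]
username = admin
password = admin
host = atlas-host
port = 21000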
02-01-2019
08:26 AM
Hello, did you resolve your problem? Does Ranger officially support integration with syslog? Do you have a manual for how to do that?
01-23-2019
01:17 PM
@Geoffrey Shelton Okot Yes, I have:
ls /config/topics
[test1]
NiFiClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/nifi.service.keytab"
storeKey=true
useTicketCache=false
principal="nifi/host@RAIFFEISEN.RU";
};
RegistryClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/nifi.service.keytab"
storeKey=true
useTicketCache=false
principal="nifi/host@RAIFFEISEN.RU";
01-23-2019
09:45 AM
@Geoffrey Shelton Okot Can I remove ZooKeeper's znode trees? Will they be recreated with the right permissions after deletion?
01-23-2019
08:44 AM
@Geoffrey Shelton Okot I compared the files; yes, they exist and are the same.
01-23-2019
07:47 AM
@Geoffrey Shelton Okot I set the following ACLs on the Dev cluster:
[zk: localhost:2181(CONNECTED) 1] getAcl /brokers
'world,'anyone
: cdrwa
'sasl,'kafka
: cdrwa
[zk: localhost:2181(CONNECTED) 2] getAcl /controller
'world,'anyone
: r
'sasl,'kafka
: cdrwa
[zk: localhost:2181(CONNECTED) 3] getAcl /config
'world,'anyone
: cdrwa
'sasl,'kafka
: cdrwa
[zk: localhost:2181(CONNECTED) 4] getAcl /config/topics
'world,'anyone
: cdrwa
'sasl,'kafka
: cdrwa
but kafka-console-consumer.sh --bootstrap-server still does not work.
01-23-2019
06:44 AM
@Geoffrey Shelton Okot Working Test cluster:
[zk: localhost:2181(CONNECTED) 0] getAcl /config/topics
'world,'anyone
: r
'sasl,'kafka
: cdrwa
[zk: localhost:2181(CONNECTED) 1]
Not working Dev cluster:
[zk: localhost:2181(CONNECTED) 0] getAcl /config/topics
'world,'anyone
: cdrwa
[zk: localhost:2181(CONNECTED) 1]
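The visible difference is that the working cluster's /config/topics grants world:anyone only read while keeping full control for sasl:kafka, whereas the Dev cluster has no sasl entry at all. A hedged sketch of aligning the Dev znode from zkCli, run authenticated as the kafka principal on the kerberized cluster:

setAcl /config/topics world:anyone:r,sasl:kafka:cdrwa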
01-22-2019
06:21 PM
@Geoffrey Shelton Okot It's a real case at my work. I configured the Test cluster from the Hortonworks documentation and everything works well, but at the same time the Development cluster with the same configuration doesn't work, and I don't understand why. I don't have a lab or any special instruction, but I can show you any of my config files or screenshots. Could it be some problem with ZooKeeper?
01-22-2019
03:23 PM
@Geoffrey Shelton Okot I use HDF on HDP as one cluster; yes, it is kerberized. Ambari 2.6.2.2, HDP 2.6.5, HDF 3.1.2.
01-22-2019
02:53 PM
I have the same problem. Did you solve it?
01-22-2019
02:34 PM
I can't connect with the --bootstrap-server option; it only works with --zookeeper. Apparently the ConsumeKafka processor uses bootstrap mode. Can I use ZooKeeper with the ConsumeKafka processor, or how can I debug why I can't connect directly to the broker?
kafka-console-consumer.sh --bootstrap-server server:6667 --topic test5 --from-beginning --security-protocol SASL_PLAINTEXT
does not work
kafka-console-consumer.sh --zookeeper server:2181 --topic test5 --from-beginning --security-protocol SASL_PLAINTEXT
works fine
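One hedged thing to try: in bootstrap mode the new consumer reads its security settings from consumer properties rather than from the old consumer's --security-protocol flag, so a sketch of passing them explicitly (kafka-console-consumer.sh accepts --consumer-property in the Kafka versions HDP 2.6.x ships):

kafka-console-consumer.sh --bootstrap-server server:6667 --topic test5 \
  --from-beginning \
  --consumer-property security.protocol=SASL_PLAINTEXT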
01-17-2019
02:54 PM
Hello, I have enabled Kerberos on the cluster. I can successfully connect as a consumer through kafka-console-consumer. But when I try to connect to the topic through the NiFi ConsumeKafka processor, I get this error:
WARN [Timer-Driven Process Thread-8] o.a.n.p.kafka.pubsub.ConsumeKafka_1_0 ConsumeKafka_1_0[id=504e5811-0168-1000-0000-000024c83cc5] Was interrupted while trying to communicate with Kafka with lease org.apache.nifi.processors.kafka.pubsub.ConsumerPool$SimpleConsumerLease@51911c24. Will roll back session and discard any partially received data.
WARN [kafka-kerberos-refresh-thread-nifi/*@*] o.a.k.c.security.kerberos.KerberosLogin [Principal=nifi/*@*]: TGT renewal thread has been interrupted and will exit.
Labels:
- Apache Kafka
- Apache NiFi
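For what it's worth, the Kafka client inside NiFi looks up a KafkaClient section in the JAAS file passed via java.security.auth.login.config. A hedged sketch that mirrors the NiFiClient block posted later in this thread; the keytab path and principal are that thread's values, not verified ones:

KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/nifi.service.keytab"
storeKey=true
useTicketCache=false
principal="nifi/host@RAIFFEISEN.RU";
};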
01-16-2019
10:05 AM
My stupid mistake: I forgot that I had enabled manual commit in DBeaver on the production server, so the "upgrade_id" cell wasn't updated.
01-15-2019
02:59 PM
Maybe the problem is because one item in the last upgrade failed?
{
"href" : "http://*:8080/api/v1/clusters/DataLake/upgrades/865/upgrade_groups/434/upgrade_items/140",
"UpgradeItem" : {
"cluster_name" : "DataLake",
"command_params" : "{\"upgrade_pack\":\"nonrolling-upgrade-2.6\",\"clusterName\":\"DataLake\",\"upgrade_direction\":\"upgrade\",\"forceRefreshConfigTagsBeforeExecution\":\"true\",\"upgrade_type\":\"nonrolling_upgrade\",\"request_id\":\"865\"}",
"context" : "Save Cluster State",
"display_status" : "FAILED",
"end_time" : -1,
"group_id" : 434,
"host_params" : "{\"host_sys_prepped\":\"false\",\"agent_stack_retry_count\":\"5\",\"java_home\":\"/etc/alternatives/java_sdk\",\"jdk_location\":\"http://*.*.ru:8080/resources/\",\"stack_version\":\"2.6\",\"custom_postgres_jdbc_name\":\"postgresql-42.0.0.jar\",\"mysql_jdbc_url\":\"http://*:8080/resources//mysql-connector-java.jar\",\"oracle_jdbc_url\":\"http://*:8080/resources//ojdbc6.jar\",\"ambari_db_rca_password\":\"SECRET\",\"db_name\":\"ambari\",\"ambari_db_rca_driver\":\"org.postgresql.Driver\",\"ambari_db_rca_username\":\"ambari\",\"java_version\":\"8\",\"previous_custom_postgres_jdbc_name\":\"postgresql-42.0.0.jar\",\"not_managed_hdfs_path_list\":\"[\\\"/data/apps/hive/warehouse\\\",\\\"/data/app-logs\\\",\\\"/data/mr-history/done\\\",\\\"/tmp\\\"]\",\"db_driver_filename\":\"mysql-connector-java.jar\",\"gpl_license_accepted\":\"false\",\"agent_stack_retry_on_unavailability\":\"false\",\"ambari_db_rca_url\":\"jdbc:postgresql://*:5432/ambari\",\"stack_name\":\"HDP\"}",
"log_info" : null,
"progress_percent" : 100.0,
"request_id" : 865,
"skippable" : true,
"stage_id" : 140,
"start_time" : 1526474555466,
"status" : "COMPLETED",
"text" : "Save Cluster State"
},
"tasks" : [
{
"href" : "http://*:8080/api/v1/clusters/DataLake/upgrades/865/upgrade_groups/434/upgrade_items/140/tasks/11045",
"Tasks" : {
"cluster_name" : "DataLake",
"id" : 11045,
"request_id" : 865,
"stage_id" : 140
}
}
]
}
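If that stuck item is what blocks things, Ambari's REST API can move an upgrade request's state with a PUT to the same endpoint family the JSON above came from. A hedged sketch reusing this post's masked host; ABORTED is one of the request_status values the upgrades endpoint accepts:

curl -u admin:* -H 'X-Requested-By: ambari' -X PUT \
  -d '{"Upgrade":{"request_status":"ABORTED"}}' \
  http://*:8080/api/v1/clusters/DataLake/upgrades/865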