Member since: 11-07-2017
Posts: 43
Kudos Received: 3
Solutions: 0
02-07-2019
06:04 AM
@BALASWAMI BANDI, I haven't had hands-on experience configuring WebHDFS. I usually refer to the Hortonworks documentation for any steps related to the HDP distribution. I'm not sure whether WebHDFS interferes with Files View; my guess is that Files View accesses the URL/path configured in the 'fs.defaultFS' property.
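For a quick sanity check, the two commands below should show what Files View would resolve and whether WebHDFS answers (the NameNode host and the default HTTP port are placeholders, not values from your cluster):
# hdfs getconf -confKey fs.defaultFS
# curl -i "http://<namenode-host>:50070/webhdfs/v1/tmp?op=LISTSTATUS"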
02-06-2019
05:32 AM
@BALASWAMI BANDI, Please follow the link below and configure an HDFS proxy user for the root user. Once you configure it and restart the required services, the above error should be fixed. https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-views/content/setup_HDFS_proxy_user.html PS: I'm not sure whether the Ambari Files View supports the ADLS store and has the necessary JARs to make it work, but the link above should fix your error. Please let me know how it goes. Thanks, Cibi
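For reference, the proxy-user entries that the doc has you add to custom core-site usually look like the lines below (the wildcards are the permissive form; restrict the hosts and groups as needed for your environment):
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*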
12-06-2018
09:22 AM
@Mark Lewis I was able to enable Files View in an HDInsight cluster and access the files located in the default WASB storage. You just have to place the Files View archive (files-2.6.2.2.1.jar) under the /var/lib/ambari-server/resources/views/ path on your Ambari server host. Once you have placed the file there, head to 'Manage Ambari -> Views' in your Ambari web UI. You should find that Files View has been enabled and an instance has been added by default. Head to Ambari Views -> Files View, and you should now be able to access the files in your default WASB location from the view. I had WebHDFS enabled in my HDInsight cluster, though I'm not sure whether that is required. Please mark this as the answer if you find it useful. Thanks, Cibi
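Roughly, the steps on the Ambari server host look like this (a sketch assuming the jar is already downloaded to the current directory; the restart is usually needed before the view shows up):
# cp files-2.6.2.2.1.jar /var/lib/ambari-server/resources/views/
# ambari-server restart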
10-22-2018
07:24 AM
3 Kudos
@Marek Martofel Following the steps in the link below fixed this issue: Enable System Service Mode On an Upgraded Cluster
10-18-2018
11:03 AM
@Akhil S Naik Thanks for the quick reply. The missing '.' was the issue, and the web UI is up now.
10-18-2018
10:42 AM
After hitting the error "Error: Could not find or load main class org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator", following the steps in the link below fixed the issue: Enable System Service Mode On an Upgraded Cluster
10-18-2018
10:36 AM
Hi, I recently upgraded from HDP 2.6.3 to HDP 3.0.1.0-187 and Ambari from 2.6 to 2.7. After the upgrade, the Atlas UI is not coming up even though the service is up and running fine, listening on HTTPS port 21443. There are no errors or warnings in the application.log file; the exception below is from the .err log file. Is anyone facing the same kind of issue in Atlas after the upgrade? Any help would be appreciated. The Atlas version is 1.0.0.
Exception in thread "main" org.apache.atlas.exception.AtlasBaseException: EmbeddedServer.Start: failed!
at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:101)
at org.apache.atlas.Atlas.main(Atlas.java:133)
Caused by: MultiException[java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0
*NULL.*
^, java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0
*NULL.*
^]
at org.eclipse.jetty.server.Server.doStart(Server.java:386)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.atlas.web.service.EmbeddedServer.start(EmbeddedServer.java:98)
... 1 more
Suppressed: java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0
*NULL.*
^
at java.util.regex.Pattern.error(Pattern.java:1957)
at java.util.regex.Pattern.sequence(Pattern.java:2125)
at java.util.regex.Pattern.expr(Pattern.java:1998)
at java.util.regex.Pattern.compile(Pattern.java:1698)
at java.util.regex.Pattern.<init>(Pattern.java:1351)
at java.util.regex.Pattern.compile(Pattern.java:1028)
at org.eclipse.jetty.util.ssl.SslContextFactory.removeExcludedCipherSuites(SslContextFactory.java:1204)
at org.eclipse.jetty.util.ssl.SslContextFactory.selectCipherSuites(SslContextFactory.java:1163)
at org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:320)
at org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:217)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:72)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:268)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:401)
... 3 more
Caused by: java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0
*NULL.*
^
at java.util.regex.Pattern.error(Pattern.java:1957)
at java.util.regex.Pattern.sequence(Pattern.java:2125)
at java.util.regex.Pattern.expr(Pattern.java:1998)
at java.util.regex.Pattern.compile(Pattern.java:1698)
at java.util.regex.Pattern.<init>(Pattern.java:1351)
at java.util.regex.Pattern.compile(Pattern.java:1028)
at org.eclipse.jetty.util.ssl.SslContextFactory.removeExcludedCipherSuites(SslContextFactory.java:1204)
at org.eclipse.jetty.util.ssl.SslContextFactory.selectCipherSuites(SslContextFactory.java:1163)
at org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:320)
at org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:217)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.SslConnectionFactory.doStart(SslConnectionFactory.java:72)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:268)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:401)
... 3 more
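From the stack trace, the failing pattern (*NULL.*) is being compiled as a regex by Jetty's SslContextFactory.removeExcludedCipherSuites, so it appears to come from the cipher-suite exclude list in the Atlas SSL configuration. Each entry in that list has to be a valid regex, which means the leading dot matters; a form that would compile is below (the property name is my guess based on the trace, not something I have verified):
atlas.ssl.exclude.cipher.suites=.*NULL.*,.*RC4.*,.*MD5.*,.*DES.*,.*DSS.*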
Labels: Apache Ambari, Apache Atlas
10-17-2018
12:47 PM
@Daniel Zafar Hi, I'm also facing the same error; were you able to find anything?
# ./ambari-sudo.sh su yarn-ats -l -s /bin/bash -c 'export PATH='"'"'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/texlive/2016/bin/x86_64-linux:/usr/lib64/qt-3.3/bin:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/maven/bin:/root/bin:/opt/maven/bin:/opt/maven/bin:/var/lib/ambari-agent'"'"' ; sleep 10;export HBASE_CLASSPATH_PREFIX=/usr/hdp/3.0.1.0-187/hadoop-yarn/timelineservice/^*; /usr/hdp/3.0.1.0-187/hbase/bin/hbase --config /usr/hdp/3.0.1.0-187//hadoop/conf/embedded-yarn-ats-hbase org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator -Dhbase.client.retries.number=35 -create -s'
Error: Could not find or load main class org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator
09-12-2018
03:53 PM
Hi @mmolnar, Thanks for the workaround. I too had thought of editing the ownership of the cluster in the CB DB to an LDAP user, but I wanted to know whether I can grant access to the cluster resources in Cloudbreak to all users within an LDAP group. If there is no workaround for this, I will wait for a future release to bring out this feature. Thanks again!
09-11-2018
09:36 AM
Hi @mmolnar, CB Version is 2.7.0.
08-31-2018
11:27 AM
Hi, I have configured Cloudbreak with LDAP user authentication and group authorization. When I log in to the Cloudbreak UI as one of the LDAP users, I'm not able to view any of the available clusters or credentials, which I can access only with the default user configured during the initial Cloudbreak setup. Could anyone help me change the ownership of the clusters and credentials to an LDAP group or a list of users? Any help would be much appreciated. Thanks, Cibi
Labels: Hortonworks Cloudbreak
08-30-2018
03:07 PM
Hi All, In my case I have configured my cluster with Kerberos and SSL, and set the Hive transport mode to HTTP. Could you please let me know what the connection string should look like? I'm using the connection string below but still getting the same "HTTP Response code: 401 (state=08S01,code=0)" error. Note: I do have a valid Kerberos ticket for accessing the service.
# beeline -u "jdbc:hive2://xxx.xxx.local:10001/;principal=hive/xxx.xxx.local@xxx.LOCAL;ssl=true;sslTrustStore=/opt/hdp/security/pki/certs-new/jks/truststore.jks;trustStorePassword=changeit;transportMode=http;httpPath=cliservice"
Connecting to jdbc:hive2://xxx.xxx.local:10001/;principal=hive/xxx.xxx.local@xxx.LOCAL;ssl=true;sslTrustStore=/opt/hdp/security/pki/certs-new/jks/truststore.jks;trustStorePassword=changeit;transportMode=http;httpPath=cliservice
18/08/30 13:11:33 [main]: ERROR jdbc.HiveConnection: Error opening session
org.apache.thrift.transport.TTransportException: HTTP Response code: 401
at org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:262)
at org.apache.thrift.transport.THttpClient.flush(THttpClient.java:313)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.hive.service.cli.thrift.TCLIService$Client.send_OpenSession(TCLIService.java:158)
at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:150)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:622)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:221)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:146)
at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:211)
at org.apache.hive.beeline.Commands.connect(Commands.java:1204)
at org.apache.hive.beeline.Commands.connect(Commands.java:1100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:54)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:998)
at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:717)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:779)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Error: Could not establish connection to jdbc:hive2://xxx.xxx.local:10001/;principal=hive/xxx.xxx.local@xxx.LOCAL;ssl=true;sslTrustStore=/opt/hdp/security/pki/certs-new/jks/truststore.jks;trustStorePassword=changeit;transportMode=http;httpPath=cliservice: HTTP Response code: 401 (state=08S01,code=0)
Beeline version 1.2.1000.2.6.5.0-292 by Apache Hive
0: jdbc:hive2://xxx.xxx.local:10001/ (closed)>
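For completeness, this is how I verified the ticket before attempting the connection (a sketch; the principal below is a placeholder, not the real one):
# klist
# kinit myuser@xxx.LOCAL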
08-27-2018
12:11 PM
Hi @Ronak bansal, Thanks for the suggestion. The property atlas.kafka.security.protocol is already set in Atlas; the issue was a typo in atlas.kafka.security.protocol configured on the Hive service side. Thanks, Cibi
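For anyone else hitting this, the hook-side value has to match the brokers' listener protocol; in this setup that meant the line below in the hook's atlas-application.properties (a sketch, assuming PLAINTEXTSASL brokers as described in the original question):
atlas.kafka.security.protocol=PLAINTEXTSASL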
08-24-2018
12:50 PM
Hi, I have set up two clusters (1. LLAP cluster, 2. Data Governance cluster). I have enabled the Atlas hook in the Hive service on the LLAP cluster, whereas Atlas and Kafka are set up on the Data Governance cluster. Both clusters use Kerberos for authentication. There is an issue with the Hive Atlas hook bridge, which fails to create a producer to the Kafka broker on the Data Governance cluster. I suspect the issue is caused by incorrect producer properties during producer creation. I have provided the needed configurations in the atlas-application.properties file through Ambari, but some of them are still not reflected during Kafka producer creation, for example the security.protocol property of the producer (security.inter.broker.protocol in the Kafka service is set to PLAINTEXTSASL, while the security.protocol property used by the Hive Atlas hook is PLAINTEXT). I have also tried modifying the producer.properties file available under the /etc/kafka/2.6.5.0-292/0/ path, but there is still an issue with producer creation. I'm getting the error below:
2018-08-24 10:21:10,441 DEBUG [kafka-producer-network-thread | producer-1]: network.Selector (LogContext.java:debug(189)) - [Producer clientId=producer-1] Connection with dgpoc-m2.xyz.local/10.4.0.18 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:538)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:482)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:748)
Below are the Kafka producer properties used by the Hive Atlas hook:
2018-08-24 10:21:10,090 INFO [HiveServer2-Background-Pool: Thread-144]: producer.ProducerConfig (AbstractConfig.java:logAll(223)) - ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [dgpoc-m2.xyz.local:6667]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
When referring to the Hortonworks doc link below, I learned about the difference in security protocols, and the error there matched the one I'm seeing. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-kafka-ambari/content/ch_secure-kafka-produce-events.html Any help will be much appreciated! Thanks, Cibi
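For clarity, the intent on the hook side was for the producer to pick up the broker-matching protocol from atlas-application.properties, along the lines of the sketch below (Atlas passes atlas.kafka.* settings through to its Kafka clients; the bootstrap address is taken from the producer dump above):
atlas.kafka.bootstrap.servers=dgpoc-m2.xyz.local:6667
atlas.kafka.security.protocol=PLAINTEXTSASL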
Labels: Apache Atlas, Apache Hive, Apache Kafka
05-31-2018
06:39 AM
Hi @pdarvasi, I'm currently using Cloudbreak 2.7.0 and have enabled LDAP authentication for the Cloudbreak UI. I would like to know whether resources can be shared in version 2.7.0, and if so, could you point me to reference articles on how to set up resource sharing? Thanks in advance!
04-03-2018
11:01 AM
Kafka broker configurations are as below:
INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = -1
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
default.replication.factor = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 10000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.protocol.version = 0.10.1-IV2
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listeners = PLAINTEXT://localhost:6667
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.10.1-IV2
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 86400000
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 6667
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 10000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
unclean.leader.election.enable = true
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 25000
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
04-02-2018
11:57 AM
Hi, Ranger Tagsync policies are not working; the Kafka consumer commit error below appears in the Ranger Tagsync log:
ERROR AtlasTagSource [Thread-7] - 187 Caught exception..:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
at org.apache.atlas.kafka.AtlasKafkaConsumer.commit(AtlasKafkaConsumer.java:93)
at org.apache.ranger.tagsync.source.atlas.AtlasTagSource$ConsumerRunnable.run(AtlasTagSource.java:181)
at java.lang.Thread.run(Thread.java:748)
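From the exception message, the knobs to try are the session timeout and the poll batch size. Since the consumer in the trace is Atlas's AtlasKafkaConsumer, which reads Kafka client settings from the atlas.kafka.* prefix, I'm considering settings along these lines (the values are guesses on my part, not a verified fix):
atlas.kafka.session.timeout.ms=30000
atlas.kafka.max.poll.records=50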
Labels: Apache Atlas, Apache Kafka, Apache Ranger
02-13-2018
12:23 PM
Hi, Cloudbreak is failing to upscale the cluster with the error below, even though we can see the VMs spun up and running in the Azure portal under the resource group. I tried to find leads on the error but couldn't find any; any help would be appreciated. Error:
/cbreak_cloudbreak_1 | 2018-02-13 07:22:48,811 [RxIoScheduler-12] log:55 INFO c.m.a.m.r.Deployments createOrUpdate (poll) - [owner:spring] [type:springLog] [id:] [name:] --> GET https://management.azure.com/subscriptions/<subscription_id>/resourcegroups/cbsandbox1/providers/Microsoft.Resources/deployments/cbsandbox1/operationStatuses/08586831004674626432?api-version=2016-09-01
/cbreak_cloudbreak_1 | 2018-02-13 07:22:48,946 [RxIoScheduler-12] log:55 INFO c.m.a.m.r.Deployments createOrUpdate (poll) - [owner:spring] [type:springLog] [id:] [name:] <-- 200 OK https://management.azure.com/subscriptions/<subscription_id>/resourcegroups/cbsandbox1/providers/Microsoft.Resources/deployments/cbsandbox1/operationStatuses/08586831004674626432?api-version=2016-09-01 (135 ms, 389-byte body)
/cbreak_cloudbreak_1 | 2018-02-13 07:22:48,960 [reactorDispatcher-40] accept:140 DEBUG c.s.c.c.f.Flow2Handler - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:springLog] [id:1] [name:cbllapdev30] flow control event arrived: key: UPSCALESTACKRESULT_ERROR, flowid: 08bd0afd-627c-42d4-9e36-4d8232961d61, payload: CloudPlatformResult{status=FAILED, statusReason='Could not upscale: cbsandbox1 ', errorDetails=com.sequenceiq.cloudbreak.cloud.exception.CloudConnectorException: Could not upscale: cbsandbox1 , request=UpscaleStackRequest{resourceList=[CloudResource{type=ARM_TEMPLATE, status=CREATED, name='cbsandbox1', reference='null', group='null', persistent='true'}]}}
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,110 [reactorDispatcher-40] execute:73 INFO c.s.c.c.f.AbstractAction - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:STACK] [id:1] [name:cbllapdev30] Stack: 1, flow state: ADD_INSTANCES_STATE, phase: doExec, execution time 190 sec
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,111 [reactorDispatcher-40] handleStackUpscaleFailure:201 ERROR c.s.c.c.f.s.u.StackUpscaleService - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:STACK] [id:1] [name:cbllapdev30] Exception during the upscale of stack
/cbreak_cloudbreak_1 | com.sequenceiq.cloudbreak.cloud.exception.CloudConnectorException: Could not upscale: cbsandbox1
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.azure.AzureResourceConnector.upscale(AzureResourceConnector.java:220)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.UpscaleStackHandler.accept(UpscaleStackHandler.java:64)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.UpscaleStackHandler.accept(UpscaleStackHandler.java:31)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.UpscaleStackHandler$FastClassBySpringCGLIB$4d3f5688.invoke(<generated>)
/cbreak_cloudbreak_1 | at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.adapter.MethodBeforeAdviceInterceptor.invoke(MethodBeforeAdviceInterceptor.java:52)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
/cbreak_cloudbreak_1 | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.UpscaleStackHandler$EnhancerBySpringCGLIB$78e983cb.accept(<generated>)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus$3.accept(EventBus.java:317)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus$3.accept(EventBus.java:310)
/cbreak_cloudbreak_1 | at reactor.bus.routing.ConsumerFilteringRouter.route(ConsumerFilteringRouter.java:72)
/cbreak_cloudbreak_1 | at reactor.bus.routing.TraceableDelegatingRouter.route(TraceableDelegatingRouter.java:51)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus.accept(EventBus.java:591)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus.accept(EventBus.java:63)
/cbreak_cloudbreak_1 | at reactor.core.dispatch.AbstractLifecycleDispatcher.route(AbstractLifecycleDispatcher.java:160)
/cbreak_cloudbreak_1 | at reactor.core.dispatch.MultiThreadDispatcher$MultiThreadTask.run(MultiThreadDispatcher.java:74)
/cbreak_cloudbreak_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
/cbreak_cloudbreak_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
/cbreak_cloudbreak_1 | at java.lang.Thread.run(Thread.java:745)
/cbreak_cloudbreak_1 | Caused by: com.microsoft.azure.CloudException: Async operation failed with provisioning state: Failed
/cbreak_cloudbreak_1 | at com.microsoft.azure.AzureClient$1$2.call(AzureClient.java:187)
/cbreak_cloudbreak_1 | at com.microsoft.azure.AzureClient$1$2.call(AzureClient.java:176)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
/cbreak_cloudbreak_1 | at rx.internal.producers.SingleProducer.request(SingleProducer.java:65)
/cbreak_cloudbreak_1 | at rx.Subscriber.setProducer(Subscriber.java:211)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorSingle$ParentSubscriber.onCompleted(OperatorSingle.java:110)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorTake$1.onNext(OperatorTake.java:80)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeFilter$FilterSubscriber.onNext(OnSubscribeFilter.java:76)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.emitScalar(OperatorMerge.java:395)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.tryEmit(OperatorMerge.java:355)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$InnerSubscriber.onNext(OperatorMerge.java:846)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.emitScalar(OperatorMerge.java:511)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.tryEmit(OperatorMerge.java:466)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:244)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:148)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.emitScalar(OperatorMerge.java:511)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.tryEmit(OperatorMerge.java:466)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:244)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:148)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77)
/cbreak_cloudbreak_1 | at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$RequestArbiter.request(RxJavaCallAdapterFactory.java:173)
/cbreak_cloudbreak_1 | at rx.Subscriber.setProducer(Subscriber.java:211)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
/cbreak_cloudbreak_1 | at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:152)
/cbreak_cloudbreak_1 | at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:138)
/cbreak_cloudbreak_1 | at rx.Observable.unsafeSubscribe(Observable.java:10142)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:248)
/cbreak_cloudbreak_1 | at rx.internal.operators.OperatorMerge$MergeSubscriber.onNext(OperatorMerge.java:148)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:77)
/cbreak_cloudbreak_1 | at rx.internal.operators.OnSubscribeRedo$2$1.onNext(OnSubscribeRedo.java:244)
/cbreak_cloudbreak_1 | at rx.internal.util.ScalarSynchronousObservable$ScalarAsyncProducer.call(ScalarSynchronousObservable.java:200)
/cbreak_cloudbreak_1 | at rx.internal.util.ScalarSynchronousObservable$2$1.call(ScalarSynchronousObservable.java:114)
/cbreak_cloudbreak_1 | at rx.internal.schedulers.CachedThreadScheduler$EventLoopWorker$1.call(CachedThreadScheduler.java:230)
/cbreak_cloudbreak_1 | at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
/cbreak_cloudbreak_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
/cbreak_cloudbreak_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
/cbreak_cloudbreak_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
/cbreak_cloudbreak_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
/cbreak_cloudbreak_1 | ... 3 common frames omitted
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,262 [reactorDispatcher-40] fireEventAndLog:25 DEBUG c.s.c.c.f.s.FlowMessageService - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:STACK] [id:1] [name:cbllapdev30] STACK_INFRASTRUCTURE_UPDATE_FAILED [STACK_FLOW_STEP].
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,263 [reactorDispatcher-40] fireCloudbreakEvent:52 INFO c.s.c.s.e.DefaultCloudbreakEventService - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:STACK] [id:1] [name:cbllapdev30] Firing Cloudbreak event: CloudbreakEventData{entityId=1, eventType='AVAILABLE', eventMessage='Stack update failed. Reason: Could not upscale: cbsandbox1 '}
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,263 [reactorDispatcher-40] sendEvent:112 INFO c.s.c.c.f.AbstractAction - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:STACK] [id:1] [name:cbllapdev30] Triggering event: com.sequenceiq.cloudbreak.reactor.api.event.StackEvent@66325a31
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,264 [reactorDispatcher-44] accept:33 INFO c.s.c.s.e.CloudbreakEventHandler - [owner:undefined] [type:CLOUDBREAKEVENTDATA] [id:undefined] [name:undefined] Handling cloudbreak event: Event{id=null, headers=null, replyTo=null, key=CLOUDBREAK_EVENT, data=CloudbreakEventData{entityId=1, eventType='AVAILABLE', eventMessage='Stack update failed. Reason: Could not upscale: cbsandbox1 '}}
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,264 [reactorDispatcher-40] stateChanged:134 DEBUG c.s.c.c.f.c.AbstractFlowConfiguration - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:STACK] [id:1] [name:cbllapdev30] state changed from ObjectState [getIds()=[ADD_INSTANCES_STATE], getClass()=class org.springframework.statemachine.state.ObjectState, hashCode()=57375442, toString()=AbstractState [id=ADD_INSTANCES_STATE, pseudoState=null, deferred=null, entryActions=[com.sequenceiq.cloudbreak.core.flow2.stack.upscale.StackUpscaleActions$1@2ceee4b6], exitActions=null, regions=[], submachine=null]] to ObjectState [getIds()=[UPSCALE_FAILED_STATE], getClass()=class org.springframework.statemachine.state.ObjectState, hashCode()=379254949, toString()=AbstractState [id=UPSCALE_FAILED_STATE, pseudoState=null, deferred=null, entryActions=[com.sequenceiq.cloudbreak.core.flow2.stack.upscale.StackUpscaleActions$9@13c8ac77], exitActions=null, regions=[], submachine=null]]
/cbreak_cloudbreak_1 | 2018-02-13 07:22:49,264 [reactorDispatcher-42] accept:140 DEBUG c.s.c.c.f.Flow2Handler - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:STACK] [id:7] [name:cbtest1] flow control event arrived: key: UPSCALEFAILHANDLED, flowid: 08bd0afd-627c-42d4-9e36-4d8232961d61, payload: com.sequenceiq.cloudbreak.reactor.api.event.StackEvent@66325a31
Labels: Hortonworks Cloudbreak
12-27-2017
08:02 AM
Thanks for the details @mmolnar. Appreciate your help on this.
12-26-2017
01:57 PM
Hi @mmolnar, In the JIRA for this issue, AMBARI-22005, I can see that it has been resolved in Ambari 2.6.0. I have posted a comment over there to confirm the same. Thanks, Cibi
12-21-2017
01:04 PM
Hi, I'm trying to upscale the cluster from Cloudbreak and getting the error below:
update failed: New node(s) could not be added to the cluster. Reason com.sequenceiq.cloudbreak.service.CloudbreakServiceException: Ambari could not install services. Invalid Add Hosts Template: org.apache.ambari.server.topology.InvalidTopologyTemplateException: Must specify either host_name or host_count for hostgroup: worker
Error log from the Ambari server:
ERROR [ambari-client-thread-3771] BaseManagementHandler:67 - Bad request received: Invalid Add Hosts Template: org.apache.ambari.server.topology.InvalidTopologyTemplateException: Must specify either host_name or host_count for hostgroup: worker
I can also see the same error details in the Ambari server log after enabling DEBUG log mode. Cloudbreak version: 1.16.5 Ambari server version: 2.6.0.0 Any help on this issue is appreciated! Thanks, Cibi
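For context, the Ambari exception means the add-hosts request body must carry either a host_name or a host_count for each host group; a minimal sketch of what a valid template would contain for the worker group is below (the shape is inferred from the exception and from Ambari's provisioning templates, so treat it as an assumption):
{ "blueprint": "<blueprint-name>", "host_groups": [ { "name": "worker", "host_count": 1 } ] }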
Labels: Apache Ambari, Hortonworks Cloudbreak
12-12-2017
02:08 PM
@pdarvasi Yes, I'm trying to test it out with a different Cloudbreak deployment. I will let you know how it goes. Thanks for the help!
12-08-2017
06:04 PM
@pdarvasi Thanks for the details! However, we are not using managed disks on our VM instances.
12-08-2017
06:00 PM
Hi, Is there a cbd shell command to terminate an unhealthy instance? Any help would be appreciated. Thanks, Cibi
12-06-2017
02:44 PM
@pdarvasi I was able to find the ERROR below in the CBD log. I can't share the log file since it contains some sensitive data.
/cbreak_cloudbreak_1 | 2017-12-05 18:01:07,913 [http-nio-8080-exec-2] getImage:40 DEBUG c.s.c.s.ComponentConfigProvider - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:cloudbreakLog] [id:undefined] [name:cb] Image found! stackId: 1, component: Component{id=2, componentType=IMAGE, name='IMAGE'}
/cbreak_cloudbreak_1 | 2017-12-05 18:01:07,913 [reactorDispatcher-21] accept:48 ERROR c.s.c.c.h.DownscaleStackCollectResourcesHandler - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:CLOUDBREAKEVENTDATA] [id:1] [name:cbllapdev30] Failed to handle DownscaleStackCollectResourcesRequest.
/cbreak_cloudbreak_1 | com.sequenceiq.cloudbreak.cloud.exception.CloudConnectorException: can't collect instance resources
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.azure.AzureResourceConnector.collectInstanceResourcesToRemove(AzureResourceConnector.java:293)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.azure.AzureResourceConnector.collectResourcesToRemove(AzureResourceConnector.java:241)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.azure.AzureResourceConnector.collectResourcesToRemove(AzureResourceConnector.java:50)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.DownscaleStackCollectResourcesHandler.accept(DownscaleStackCollectResourcesHandler.java:43)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.DownscaleStackCollectResourcesHandler.accept(DownscaleStackCollectResourcesHandler.java:19)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.DownscaleStackCollectResourcesHandler$FastClassBySpringCGLIB$2b40b706.invoke(<generated>)
/cbreak_cloudbreak_1 | at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.adapter.MethodBeforeAdviceInterceptor.invoke(MethodBeforeAdviceInterceptor.java:52)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
/cbreak_cloudbreak_1 | at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
/cbreak_cloudbreak_1 | at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.handler.DownscaleStackCollectResourcesHandler$EnhancerBySpringCGLIB$7e426fe2.accept(<generated>)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus$3.accept(EventBus.java:317)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus$3.accept(EventBus.java:310)
/cbreak_cloudbreak_1 | at reactor.bus.routing.ConsumerFilteringRouter.route(ConsumerFilteringRouter.java:72)
/cbreak_cloudbreak_1 | at reactor.bus.routing.TraceableDelegatingRouter.route(TraceableDelegatingRouter.java:51)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus.accept(EventBus.java:591)
/cbreak_cloudbreak_1 | at reactor.bus.EventBus.accept(EventBus.java:63)
/cbreak_cloudbreak_1 | at reactor.core.dispatch.AbstractLifecycleDispatcher.route(AbstractLifecycleDispatcher.java:160)
/cbreak_cloudbreak_1 | at reactor.core.dispatch.MultiThreadDispatcher$MultiThreadTask.run(MultiThreadDispatcher.java:74)
/cbreak_cloudbreak_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
/cbreak_cloudbreak_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
/cbreak_cloudbreak_1 | at java.lang.Thread.run(Thread.java:745)
/cbreak_cloudbreak_1 | Caused by: java.lang.NullPointerException: null
/cbreak_cloudbreak_1 | at com.sequenceiq.cloudbreak.cloud.azure.AzureResourceConnector.collectInstanceResourcesToRemove(AzureResourceConnector.java:282)
/cbreak_cloudbreak_1 | ... 25 common frames omitted
/cbreak_cloudbreak_1 | 2017-12-05 18:01:07,915 [reactorDispatcher-21] accept:52 INFO c.s.c.c.h.DownscaleStackCollectResourcesHandler - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:CLOUDBREAKEVENTDATA] [id:1] [name:cbllapdev30] DownscaleStackCollectResourcesRequest finished
/cbreak_cloudbreak_1 | 2017-12-05 18:01:07,915 [reactorDispatcher-21] accept:140 DEBUG c.s.c.c.f.Flow2Handler - [owner:d312e73a-f6dc-4e83-9452-bde66b18791f] [type:CLOUDBREAKEVENTDATA] [id:1] [name:cbllapdev30] flow control event arrived: key: DOWNSCALESTACKCOLLECTRESOURCESRESULT_ERROR, flowid: fd790366-8571-479d-98df-f8193447784c, payload: CloudPlatformResult{status=FAILED, statusReason='can't collect instance resources', errorDetails=com.sequenceiq.cloudbreak.cloud.exception.CloudConnectorException: can't collect instance resources, request=CloudStackRequest{, cloudStack=CloudStack{groups=[com.sequenceiq.cloudbreak.cloud.model.Group@e8cbf0d, com.sequenceiq.cloudbreak.cloud.model.Group@26928c18, com.sequenceiq.cloudbreak.cloud.model.Group@67e1ab4b], network=com.sequenceiq.cloudbreak.cloud.model.Network@206a2f0a, image=Image{imageName='https://sequenceiqwestus2.blob.core.windows.net/images/hdc-hdp--1706211640.vhd', userdata={CORE=#!/bin/bash
12-06-2017
10:13 AM
The Cloudbreak version is 1.16.4. The logs show the node's status as decommissioned after the scale-down event, but the node is not terminated; that has to be done manually. I couldn't find any events in the logs showing a failure to terminate the decommissioned nodes.
12-04-2017
11:52 AM
Hi, We have a Cloudbreak deployment for managing our HDP clusters, where we have enabled auto-scaling of the clusters based on Ambari metrics. After successfully downscaling a cluster, Cloudbreak fails to terminate/delete the VMs on the Azure side and marks the removed host as an 'Unhealthy' node in Cloudbreak. Is there a workaround for this? Any help would be appreciated! Thanks, Cibi
Labels: Hortonworks Cloudbreak
11-29-2017
02:18 PM
@Artem Ervits @Davide Vergari
11-29-2017
01:48 PM
Hi, We had Ambari 2.5.x with HDP 2.6.1 and Zeppelin 0.7.0, and Zeppelin AD integration worked fine in that setup. Then we upgraded to HDP 2.6.3 (since Druid GA was released). After the upgrade, Zeppelin 0.7.3 cannot connect to AD, and there is a warning when the web UI loads. The logs are attached below. ##### When the Zeppelin UI loads in the browser, we get the following warnings:
javax.servlet.ServletException: Filtered request failed.
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:384)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.apache.zeppelin.server.CorsFilter.doFilter(CorsFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: javax.ws.rs.ClientErrorException.validate(Ljavax/ws/rs/core/Response;Ljavax/ws/rs/core/Response$Status$Family;)Ljavax/ws/rs/core/Response;
at javax.ws.rs.ClientErrorException.<init>(ClientErrorException.java:88)
at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:503)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:198)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXRSInInterceptor.java:90)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:248)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:222)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:153)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:167)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doGet(AbstractHTTPServlet.java:211)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
... 22 more
After the UI loads, when we enter AD credentials, the following error occurs:
ERROR [2017-11-29 10:05:40,535] ({qtp356473385-24} LoginRestApi.java[postLogin]:111) - Exception in login:
org.apache.shiro.authc.AuthenticationException: LDAP naming error while attempting to authenticate user.
at org.apache.zeppelin.realm.ActiveDirectoryGroupRealm.doGetAuthenticationInfo(ActiveDirectoryGroupRealm.java:135)
at org.apache.shiro.realm.AuthenticatingRealm.getAuthenticationInfo(AuthenticatingRealm.java:568)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doSingleRealmAuthentication(ModularRealmAuthenticator.java:180)
at org.apache.shiro.authc.pam.ModularRealmAuthenticator.doAuthenticate(ModularRealmAuthenticator.java:267)
at org.apache.shiro.authc.AbstractAuthenticator.authenticate(AbstractAuthenticator.java:198)
at org.apache.shiro.mgt.AuthenticatingSecurityManager.authenticate(AuthenticatingSecurityManager.java:106)
at org.apache.shiro.mgt.DefaultSecurityManager.login(DefaultSecurityManager.java:270)
at org.apache.shiro.subject.support.DelegatingSubject.login(DelegatingSubject.java:256)
at org.apache.zeppelin.rest.LoginRestApi.postLogin(LoginRestApi.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:205)
at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:102)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58)
at org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:94)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:248)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:222)
at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:153)
at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:167)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61)
at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108)
at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.apache.zeppelin.server.CorsFilter.doFilter(CorsFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.naming.CommunicationException: simple bind failed: internalactivedirectory:636 [Root exception is javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty]
at com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:219)
at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2788)
at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:319)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:192)
at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:210)
at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:153)
at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:83)
at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
at javax.naming.InitialContext.init(InitialContext.java:244)
at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:154)
at org.apache.shiro.realm.ldap.DefaultLdapContextFactory.createLdapContext(DefaultLdapContextFactory.java:276)
at org.apache.shiro.realm.ldap.DefaultLdapContextFactory.getLdapContext(DefaultLdapContextFactory.java:263)
at org.apache.zeppelin.realm.ActiveDirectoryGroupRealm.queryForAuthenticationInfo(ActiveDirectoryGroupRealm.java:201)
at org.apache.zeppelin.realm.ActiveDirectoryGroupRealm.doGetAuthenticationInfo(ActiveDirectoryGroupRealm.java:128)
... 64 more
Caused by: javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1959)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1916)
at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1899)
at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1825)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:116)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at com.sun.jndi.ldap.Connection.run(Connection.java:860)
... 1 more
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:90)
at sun.security.validator.Validator.getInstance(Validator.java:179)
at sun.security.ssl.X509TrustManagerImpl.getValidator(X509TrustManagerImpl.java:312)
at sun.security.ssl.X509TrustManagerImpl.checkTrustedInit(X509TrustManagerImpl.java:171)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:184)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1496)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1026)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:961)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1072)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:938)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
... 5 more
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:88)
... 18 more
WARN [2017-11-29 10:05:40,542] ({qtp356473385-24} LoginRestApi.java[postLogin]:119) - {"status":"FORBIDDEN","message":"","body":""}
Is anyone facing similar issues? Any help would be appreciated. Thanks!
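The root cause at the bottom of the trace ("the trustAnchors parameter must be non-empty") usually means the JVM Zeppelin runs on cannot find a usable truststore for the LDAPS connection to internalactivedirectory:636. A sketch of importing the AD certificate into the default JVM truststore (the certificate file, alias, and truststore path below are placeholders):
# keytool -importcert -alias ad-ldaps -file /tmp/ad-ca.crt -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit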
Labels: Apache Zeppelin