Member since: 05-05-2014
Posts: 21
Kudos Received: 0
Solutions: 0
10-29-2018
07:39 PM
Hi Team,
We are trying to access Azure Blob storage from HDFS, but so far we have been unable to do so.
We have a secured environment with proxies, so all outgoing traffic passes through a proxy. I have already whitelisted the blob URL, and I can access and upload files to Blob storage from the local Linux system on the same machine where Hadoop is installed.
However, when I try to access Azure Blob storage with an hdfs command, it just hangs and does not give any error.
Following is the command and the output:
hdfs dfs -ls wasbs://xxxx@xxxxxxxx.blob.core.windows.net/
It gets stuck after these lines:
16/12/05 15:45:57 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
16/12/05 15:45:57 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
16/12/05 15:45:57 INFO impl.MetricsSystemImpl: azure-file-system metrics system started
Even after enabling debug logging:
export HADOOP_ROOT_LOGGER=DEBUG,console
18/10/29 10:49:47 DEBUG util.Shell: setsid exited with exit code 0
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: i :: Ignore failures during copy ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: p [ARG] :: preserve status (rbugpcaxt)(replication, block-size, user, group, permission, checksum-type, ACL, XATTR, timestamps). If -p is specified with no <arg>, then preserves replication, block size, user, group, permission, checksum type and timestamps. raw.* xattrs are preserved when both the source and destination paths are in the /.reserved/raw hierarchy (HDFS only). raw.* xattrpreservation is independent of the -p flag. Refer to the DistCp documentation for more details. ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: update :: Update target, copying only missingfiles or directories ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: delete :: Delete from target, files missing in source ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: mapredSslConf [ARG] :: Configuration for ssl config file, to use with hftps://. Must be in the classpath. ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: numListstatusThreads [ARG] :: Number of threads to use for building file listing (max 40). ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: m [ARG] :: Max number of concurrent maps to use for copy ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: f [ARG] :: List of files that need to be copied ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: atomic :: Commit all changes or none ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: tmp [ARG] :: Intermediate work path to be used for atomic commit ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: log [ARG] :: Folder on DFS where distcp execution logs are saved ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: v :: Log additional info (path, size) in the SKIP/COPY log ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: strategy [ARG] :: Copy strategy to use. Default is dividing work based on file sizes ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: skipcrccheck :: Whether to skip CRC checks between source and target paths. ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: overwrite :: Choose to overwrite target files unconditionally, even if they exist. ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: append :: Reuse existing data in target files and append new data to them if possible ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: diff [ARG...] :: Use snapshot diff report to identify the difference between source and target ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: async :: Should distcp execution be blocking ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: filelimit [ARG] :: (Deprecated!) Limit number of files copied to <= n ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: sizelimit [ARG] :: (Deprecated!) Limit number of files copied to <= n bytes ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: bandwidth [ARG] :: Specify bandwidth per map in MB ]
18/10/29 10:49:47 DEBUG tools.OptionsParser: Adding option [ option: filters [ARG] :: The path to a file containing a list of strings for paths to be excluded from the copy. ]
18/10/29 10:49:47 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
18/10/29 10:49:47 DEBUG security.Groups: Creating new Groups object
18/10/29 10:49:47 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/10/29 10:49:47 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
18/10/29 10:49:47 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
18/10/29 10:49:47 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
18/10/29 10:49:48 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
18/10/29 10:49:48 DEBUG security.UserGroupInformation: hadoop login
18/10/29 10:49:48 DEBUG security.UserGroupInformation: hadoop login commit
18/10/29 10:49:48 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: svc_hdfs
18/10/29 10:49:48 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: svc_hdfs" with name svc_hdfs
18/10/29 10:49:48 DEBUG security.UserGroupInformation: User entry: "svc_hdfs"
18/10/29 10:49:48 DEBUG security.UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
18/10/29 10:49:48 DEBUG security.UserGroupInformation: UGI loginUser:svc_hdfs (auth:SIMPLE)
18/10/29 10:49:48 DEBUG gcs.GoogleHadoopFileSystemBase: GHFS version: 1.8.1.2.6.5.0-292
18/10/29 10:49:48 DEBUG configuration.ConfigurationUtils: ConfigurationUtils.locate(): base is null, name is hadoop-metrics2-azure-file-system.properties
18/10/29 10:49:48 DEBUG configuration.ConfigurationUtils: ConfigurationUtils.locate(): base is null, name is hadoop-metrics2.properties
18/10/29 10:49:48 DEBUG configuration.ConfigurationUtils: Loading configuration from the context classpath (hadoop-metrics2.properties)
18/10/29 10:49:48 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
18/10/29 10:49:48 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
18/10/29 10:49:48 INFO impl.MetricsSystemImpl: azure-file-system metrics system started
18/10/29 10:49:48 DEBUG azure.AzureNativeFileSystemStore: AzureNativeFileSystemStore init. Settings=8,false,90,{3000,3000,30000,30},{true,1.0,1.0}
18/10/29 10:49:48 DEBUG azure.AzureNativeFileSystemStore: Page blob directories:
18/10/29 10:49:48 DEBUG azure.AzureNativeFileSystemStore: Block blobs with compaction directories:
18/10/29 10:49:48 DEBUG azure.AzureNativeFileSystemStore: Atomic rename directories: /hbase
18/10/29 10:49:48 DEBUG azure.NativeAzureFileSystem: NativeAzureFileSystem. Initializing.
18/10/29 10:49:48 DEBUG azure.NativeAzureFileSystem: blockSize = 536870912
18/10/29 10:49:48 DEBUG azure.NativeAzureFileSystem: Getting the file status for wasbs:// /user
18/10/29 10:49:48 DEBUG azure.AzureNativeFileSystemStore: Retrieving metadata for user
18/10/29 10:49:48 DEBUG azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: SendingRequest: threadId=1, requestType=read , isFirstRequest=true, sleepDuration=0
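For reference, a minimal sketch of how the proxy settings could be passed to the client JVM, assuming the Azure storage client used by the wasbs:// connector honors the standard Java proxy system properties (the proxy host and port below are placeholders):
# Placeholders: replace proxyhost/3128 with the actual proxy address.
# Whether the WASB client honors these JVM properties is an assumption to verify.
export HADOOP_OPTS="$HADOOP_OPTS -Dhttps.proxyHost=proxyhost -Dhttps.proxyPort=3128 -Dhttp.proxyHost=proxyhost -Dhttp.proxyPort=3128"
hdfs dfs -ls wasbs://xxxx@xxxxxxxx.blob.core.windows.net/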
Can anyone please help with this? Let me know if any additional configuration is required.
Regards,
Vishal
Labels:
- Apache Hadoop
06-21-2018
06:05 PM
Hi,
Can somebody help me with configuring Ambari integration with multiple Active Directory domains? For example:
AD1: abc.hadoop.com
AD2: abcd.hadoop.com
AD1 (abc.hadoop.com) trusts AD2 (abcd.hadoop.com), but not vice versa.
Regards,
Vishal
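For reference, a rough sketch of the ambari.properties LDAP settings we expect to be involved (property names from Ambari's LDAP setup; all values are placeholders, and pointing at the Global Catalog port of the trusting domain is only an assumption to verify):
# Hypothetical values; secondaryUrl is normally a failover URL for the same directory.
authentication.ldap.primaryUrl=ad1.abc.hadoop.com:3268
authentication.ldap.secondaryUrl=ad2.abcd.hadoop.com:3268
authentication.ldap.useSSL=false
authentication.ldap.baseDn=dc=hadoop,dc=com
authentication.ldap.usernameAttribute=sAMAccountName
authentication.ldap.userObjectClass=user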
Labels:
- Apache Ambari
06-12-2018
06:39 AM
Knoxsso.xml <topology>
<gateway>
<provider>
<role>federation</role>
<name>pac4j</name>
<enabled>true</enabled>
<param>
<name>pac4j.callbackUrl</name>
<value>https://knoxhost:8443/gateway/knoxsso/api/v1/websso</value>
</param>
<param>
<name>clientName</name>
<value>SAML2Client</value>
</param>
<param>
<name>saml.identityProviderMetadataPath</name>
<value>https://xxxxxxxx/app/exk1bs9c6clt0ttLo2p7/sso/saml/metadata</value>
</param>
<param>
<name>saml.serviceProviderMetadataPath</name>
<value>/tmp/sp-metadata.xml</value>
</param>
<param>
<name>saml.serviceProviderEntityId</name>
<value>https://knoxhost:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&client_name=SAML2Client</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
<param>
<name>principal.mapping</name>
<value>test1@jmfamily.com=tester,admin=admin</value>
</param>
</provider>
</gateway>
<service>
<role>KNOXSSO</role>
<param>
<name>knoxsso.cookie.secure.only</name>
<value>true</value>
</param>
<param>
<name>knoxsso.token.ttl</name>
<value>30000</value>
</param>
<param>
<name>knoxsso.redirect.whitelist.regex</name>
<value>^https:\/\/(xxxxx\.xxxxx\.com|localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
</param>
</service>
</topology>
06-12-2018
06:34 AM
Hi There,
Following the document below from Hortonworks, we have configured KnoxSSO using Okta (SAML). However, when accessing the Ambari web UI using Okta single sign-on, the redirect URL is unable to access the Knox endpoint. Could you please share your thoughts on troubleshooting the issue shown in the screenshots below?
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_security/content/ch02s09s01.html#saml_based_idp
Federation provider: pac4j
SAML IdP provider: Okta
Service provider: KnoxSSO
gateway-audit.log error:
18/06/07 17:01:39 ||2c5194ce-fb4e-4049-bdb9-dac767934214|audit|172.20.100.241|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?pac4jCallback=true&client_name=SAML2Client|failure|
gateway.log:
2018-06-07 17:01:39,605 ERROR hadoop.gateway (GatewayServlet.java:service(146)) - Gateway processing failed: javax.servlet.ServletException: org.pac4j.saml.exceptions.SAMLException: Error decoding saml message
javax.servlet.ServletException: org.pac4j.saml.exceptions.SAMLException: Error decoding saml message
at org.apache.hadoop.gateway.filter.AbstractGatewayFilter.doFilter(AbstractGatewayFilter.java:70)
at org.apache.hadoop.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:332)
at org.apache.hadoop.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:232)
at org.apache.hadoop.gateway.GatewayFilter.doFilter(GatewayFilter.java:139)
at org.apache.hadoop.gateway.GatewayFilter.doFilter(GatewayFilter.java:91)
at org.apache.hadoop.gateway.GatewayServlet.service(GatewayServlet.java:141)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.hadoop.gateway.trace.TraceHandler.handle(TraceHandler.java:51)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.hadoop.gateway.filter.CorrelationHandler.handle(CorrelationHandler.java:39)
at org.eclipse.jetty.servlets.gzip.GzipHandler.handle(GzipHandler.java:479)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.apache.hadoop.gateway.filter.PortMappingHelperHandler.handle(PortMappingHelperHandler.java:92)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.pac4j.saml.exceptions.SAMLException: Error decoding saml message
at org.pac4j.saml.sso.impl.SAML2WebSSOMessageReceiver.receiveMessage(SAML2WebSSOMessageReceiver.java:43)
at org.pac4j.saml.sso.impl.SAML2WebSSOProfileHandler.receive(SAML2WebSSOProfileHandler.java:35)
at org.pac4j.saml.client.SAML2Client.lambda$clientInit$0(SAML2Client.java:110)
at org.pac4j.core.client.BaseClient.retrieveCredentials(BaseClient.java:61)
at org.pac4j.core.client.IndirectClient.getCredentials(IndirectClient.java:125)
at org.pac4j.core.engine.DefaultCallbackLogic.perform(DefaultCallbackLogic.java:79)
at org.pac4j.j2e.filter.CallbackFilter.internalFilter(CallbackFilter.java:77)
at org.pac4j.j2e.filter.AbstractConfigFilter.doFilter(AbstractConfigFilter.java:81)
at org.apache.hadoop.gateway.pac4j.filter.Pac4jDispatcherFilter.doFilter(Pac4jDispatcherFilter.java:220)
at org.apache.hadoop.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:332)
at org.apache.hadoop.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:232)
at org.apache.hadoop.gateway.filter.XForwardedHeaderFilter.doFilter(XForwardedHeaderFilter.java:30)
at org.apache.hadoop.gateway.filter.AbstractGatewayFilter.doFilter(AbstractGatewayFilter.java:61)
... 32 more
Caused by: org.opensaml.messaging.decoder.MessageDecodingException: This message decoder only supports the HTTP POST method
at org.pac4j.saml.transport.Pac4jHTTPPostDecoder.doDecode(Pac4jHTTPPostDecoder.java:57)
at org.opensaml.messaging.decoder.AbstractMessageDecoder.decode(AbstractMessageDecoder.java:58)
at org.pac4j.saml.sso.impl.SAML2WebSSOMessageReceiver.receiveMessage(SAML2WebSSOMessageReceiver.java:40)
... 44 more
Can somebody please help with the above issue?
Regards,
Vishal
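For reference, this is roughly what we expect the AssertionConsumerService entry in /tmp/sp-metadata.xml to look like (a hypothetical sketch only; the binding must be HTTP-POST for the pac4j decoder shown in the trace, and the Location matches the pac4j callback URL above):
<!-- Hypothetical sketch of the SP metadata ACS entry; not the actual file contents. -->
<md:AssertionConsumerService
    Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
    Location="https://knoxhost:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&amp;client_name=SAML2Client"
    index="0" isDefault="true"/>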
05-28-2018
11:57 AM
Hi Aditya,
Can you suggest any other open-source service provider and configuration? We want to show a small POC to our client before implementing it.
Regards,
Vishal
05-25-2018
01:18 PM
Hi Team,
We have to configure SAML authentication for one of our clients. Can somebody please guide us on this?
05-24-2018
06:31 AM
Hi Team,
We have to enable SAML authentication for one of our clients. Can somebody please guide us on this?
Regards,
Vishal
Labels:
- Hortonworks Data Platform (HDP)
05-15-2018
08:13 AM
Hi Team,
We have configured an HDF 3.1.1 single-node instance for a POC. Here we are trying to perform a small test. We have written a small Python program to generate a Kafka message:
from kafka import KafkaProducer
from kafka.errors import KafkaError

# Send one test message to the topic and flush so the buffered record is
# actually delivered before the script exits.
producer = KafkaProducer(bootstrap_servers='hostname:6667')
topic = "jsontest"
producer.send(topic, b'test message')
producer.flush()

We want to consume this message through the NiFi ConsumeKafka processor, which is not working.
nifi-app.log:
(type=HEARTBEAT, length=2086 bytes) from hdfdemo.c.corecompetetraining.internal:9090 in 1 millis
2018-05-15 07:51:19,808 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2018-05-15 07:51:19,804 and sent to hdfdemo.c.corecompetetraining.internal:9088 at 2018-05-15 07:51:19,808; send took 4 millis
2018-05-15 07:51:20,352 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:21,505 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:22,357 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:23,560 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:24,557 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 1 heartbeats in 8392 nanos
2018-05-15 07:51:24,668 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:24,811 INFO [Process Cluster Protocol Request-8] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 8aec83a3-cfbc-43c2-918d-143d701d13a8 (type=HEARTBEAT, length=2086 bytes) from hdfdemo.c.corecompetetraining.internal:9090 in 1 millis
2018-05-15 07:51:24,815 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2018-05-15 07:51:24,809 and sent to hdfdemo.c.corecompetetraining.internal:9088 at 2018-05-15 07:51:24,815; send took 6 millis
2018-05-15 07:51:25,582 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:26,735 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:27,888 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:28,941 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:29,558 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 1 heartbeats in 7009 nanos
2018-05-15 07:51:29,817 INFO [Process Cluster Protocol Request-9] o.a.n.c.p.impl.SocketProtocolListener Finished processing request fcae4257-6a34-4691-9427-530ed061faf1 (type=HEARTBEAT, length=2086 bytes) from hdfdemo.c.corecompetetraining.internal:9090 in 1 millis
2018-05-15 07:51:29,819 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2018-05-15 07:51:29,815 and sent to hdfdemo.c.corecompetetraining.internal:9088 at 2018-05-15 07:51:29,819; send took 3 millis
2018-05-15 07:51:29,944 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:30,846 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
2018-05-15 07:51:31,999 WARN [Timer-Driven Process Thread-5] org.apache.kafka.clients.NetworkClient [Consumer clientId=consumer-11, groupId=test] Connection to node -1 could not be established. Broker may not be available.
nifi-user.log:
2018-05-15 08:06:11,913 INFO [NiFi Web Server-387] org.apache.nifi.web.filter.RequestLogger Attempting request for (anonymous) GET http://hostname:9090/nifi-api/flow/status (source ip: )
Please help with this.
Regards,
Vishal
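For completeness, the ConsumeKafka processor is configured roughly as follows (a sketch; exact property labels can differ slightly between ConsumeKafka processor versions, and the host is a placeholder matching the producer above):
Kafka Brokers: hostname:6667
Topic Name(s): jsontest
Group ID: test
Offset Reset: earliest
Security Protocol: PLAINTEXT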
Labels:
04-20-2018
08:59 AM
CREATE TABLE IF NOT EXISTS hbase_hive_table(key string, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:json")
TBLPROPERTIES ("hbase.table.name" = "hbase_hive_table");
Error: Getting log thread is interrupted, since query is done!
Error: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'Error' ':' 'Error' (state=42000,code=40000)
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'Error' ':' 'Error'
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:277)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:263)
at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:303)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:244)
at org.apache.hive.beeline.Commands.execute(Commands.java:871)
at org.apache.hive.beeline.Commands.sql(Commands.java:729)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1000)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:835)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:793)
at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:493)
at org.apache.hive.beeline.BeeLine.main(BeeLine.java:476)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'Error' ':' 'Error'
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:324)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:148)
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:228)
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:264)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:479)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:466)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:509)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1377)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1362)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:562)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.parse.ParseException:line 1:0 cannot recognize input near 'Error' ':' 'Error'
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:214)
at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:171)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:438)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:321)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1224)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1218)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:146)
... 15 more
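For reference, a minimal sketch of how the statement can be run from a file via Beeline (the JDBC URL and file name are placeholders); running it with -f rather than pasting it interactively keeps stray output text from being parsed as SQL:
beeline -u "jdbc:hive2://hiveserver2-host:10000/default" -n hive -f create_hbase_hive_table.sql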
Labels:
- Apache HBase
- Apache Hive
05-20-2014
11:14 AM
Hi Helen,
Thanks for the reply. Yes, I am using Cloudera Manager.
Thanks & Regards,
Vishal
05-12-2014
06:03 AM
Unhandled error java.lang.NoSuchMethodError: twitter4j.conf.Configuration.isStallWarningsEnabled()Z
at twitter4j.TwitterStreamImpl.<init>(TwitterStreamImpl.java:60)
at twitter4j.TwitterStreamFactory.<clinit>(TwitterStreamFactory.java:40)
at com.cloudera.flume.source.TwitterSource.<init>(TwitterSource.java:64)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:42)
at org.apache.flume.node.AbstractConfigurationProvider.loadSources(AbstractConfigurationProvider.java:327)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:102)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Hi All,
I am using Cloudera Manager to configure Flume for Twitter analysis. After making all the required changes, I am getting the above error after restarting the agent.
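For reference, a rough sketch of the kind of agent configuration involved, based on the Cloudera TwitterSource example (keys redacted; the agent and channel names are placeholders, not necessarily the ones in use here):
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = <redacted>
TwitterAgent.sources.Twitter.consumerSecret = <redacted>
TwitterAgent.sources.Twitter.accessToken = <redacted>
TwitterAgent.sources.Twitter.accessTokenSecret = <redacted>
TwitterAgent.sources.Twitter.keywords = hadoop, bigdata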
Labels:
- Apache Flume
- Cloudera Manager