Member since: 03-23-2017
Posts: 110
Kudos Received: 2
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1463 | 06-01-2017 09:30 AM |
| | 1176 | 05-31-2017 08:41 AM |
| | 2118 | 05-15-2017 06:15 AM |
| | 2080 | 05-11-2017 02:40 AM |
| | 1098 | 05-11-2017 02:36 AM |
10-19-2017 01:59 PM
Does DistCp between two S3 stores work? If so, is it the same as a regular DistCp, or how can it be achieved?
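For concreteness, something like the following is what I have in mind (bucket names and credentials are made-up placeholders):

```
# Hypothetical sketch: DistCp between two s3a buckets. The data is pulled
# through the cluster's map tasks, not copied server-side within S3.
hadoop distcp \
  -Dfs.s3a.access.key=PLACEHOLDER_KEY \
  -Dfs.s3a.secret.key=PLACEHOLDER_SECRET \
  s3a://source-bucket/data/ \
  s3a://dest-bucket/data/
```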
Labels:
- Apache Hadoop
10-18-2017 02:24 PM
So, if we have some number of disks in the DataNodes, can we leverage this solution?
10-18-2017 02:14 PM
@Joseph Niemiec Well, I think I need to be clearer about my earlier question. We will be using S3 as DataNode storage, not NameNode storage.
10-18-2017 01:36 PM
@Joseph Niemiec The HDP doc below says we can use per-bucket settings to access data across the globe, which I assume means from different regions. If you don't mind, could you please elaborate on your comment that "S3 can't be used as replacement for HDFS"? https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_cloud-data-access/content/s3-per-bucket-region-configs.html
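From my reading of that doc, the per-bucket overrides would look roughly like this (bucket names and endpoints are invented for illustration):

```
# Hypothetical sketch: each bucket gets its own region endpoint via
# the fs.s3a.bucket.<name>.* per-bucket properties.
hadoop distcp \
  -Dfs.s3a.bucket.us-data.endpoint=s3.amazonaws.com \
  -Dfs.s3a.bucket.eu-data.endpoint=s3.eu-central-1.amazonaws.com \
  s3a://us-data/events/ \
  s3a://eu-data/events/
```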
10-18-2017 01:08 PM
Hi @Joseph Niemiec Thanks for the insights. We are planning a solution with two custom S3 storage layers, one in each of two data centers, and we will configure each as one rack in Hadoop. The idea is to use Hadoop rack awareness to keep copies of blocks in both data centers. Not sure if this will work, but I have been researching this solution heavily.
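To make the rack idea concrete, I was picturing a topology script along these lines (IP ranges and rack names are invented):

```
#!/bin/bash
# Hypothetical sketch of a rack topology script, wired in via the
# net.topology.script.file.name property in core-site.xml.
# Maps each DataNode address to a rack named after its data center.
for host in "$@"; do
  case "$host" in
    10.1.*) echo "/dc1/rack1" ;;    # nodes in data center 1
    10.2.*) echo "/dc2/rack1" ;;    # nodes in data center 2
    *)      echo "/default/rack" ;;
  esac
done
```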
10-18-2017 11:17 AM
Does using S3 as the storage layer in Hadoop keep the same replication factor (default 3)? I see various blogs saying that when we DistCp data from HDFS to S3, replication is ignored and only one replica is stored. Is that true?
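One quick way to check this (paths are hypothetical): the second column of an ls listing is the replication factor, and s3a reports 1 because S3 handles durability internally rather than through HDFS replication.

```
# HDFS shows the configured replication (e.g. 3); s3a shows 1.
hadoop fs -ls hdfs:///data/events/part-00000
hadoop fs -ls s3a://my-bucket/data/events/part-00000
```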
Labels:
- Apache Hadoop
07-20-2017 04:46 AM
My bad: the issue was that port 8440 was not allowed in my AWS Security Groups.
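For anyone who hits the same thing, a quick reachability check from the agent host (hostname taken from my log below) would have caught it:

```
# A minimal sketch: verify the agent can actually reach the Ambari
# server's registration port before digging into Ambari itself.
curl -vk https://ip-172-31-27-38.ap-south-1.compute.internal:8440/ca
nc -zv ip-172-31-27-38.ap-south-1.compute.internal 8440
```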
07-19-2017 11:24 AM
I'm using Ambari 2.5 with CentOS 7. I'm unable to register agents with the server. Below is the error.
INFO 2017-07-19 07:08:38,620 ExitHelper.py:56 - Performing cleanup before exiting...
INFO 2017-07-19 07:08:50,315 main.py:145 - loglevel=logging.INFO
INFO 2017-07-19 07:08:50,315 main.py:145 - loglevel=logging.INFO
INFO 2017-07-19 07:08:50,315 main.py:145 - loglevel=logging.INFO
INFO 2017-07-19 07:08:50,317 DataCleaner.py:39 - Data cleanup thread started
INFO 2017-07-19 07:08:50,318 DataCleaner.py:120 - Data cleanup started
INFO 2017-07-19 07:08:50,318 DataCleaner.py:122 - Data cleanup finished
INFO 2017-07-19 07:08:50,322 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2017-07-19 07:08:50,323 main.py:437 - Connecting to Ambari server at https://ip-172-31-27-38.ap-south-1.compute.internal:8440 (172.31.27.38)
INFO 2017-07-19 07:08:50,323 NetUtil.py:70 - Connecting to https://ip-172-31-27-38.ap-south-1.compute.internal:8440/ca
WARNING 2017-07-19 07:10:57,538 NetUtil.py:101 - Failed to connect to https://ip-172-31-27-38.ap-south-1.compute.internal:8440/ca due to [Errno 110] Connection timed out
WARNING 2017-07-19 07:10:57,539 NetUtil.py:124 - Server at https://ip-172-31-27-38.ap-south-1.compute.internal:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-07-19 07:11:07,539 NetUtil.py:70 - Connecting to https://ip-172-31-27-38.ap-south-1.compute.internal:8440/ca
Labels:
- Apache Ambari
06-08-2017 08:43 AM
Hi @yvora, thanks for the details. Is there any possibility of specifying that Hive should use Spark2 rather than Spark?
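Something along these lines is what I was hoping for (the Spark2 client path is an HDP-style guess on my part):

```
# Hypothetical sketch: point Hive-on-Spark at the Spark2 client
# instead of the default Spark install before running a query.
export SPARK_HOME=/usr/hdp/current/spark2-client
hive --hiveconf hive.execution.engine=spark \
     --hiveconf spark.home=/usr/hdp/current/spark2-client \
     -e "SELECT COUNT(*) FROM my_table;"
```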
06-08-2017 04:37 AM
Hi @yvora, I have enabled debug. The YARN application log shows the error below.
17/06/08 09:51:49 WARN Rpc: Invalid log level null, reverting to default.
17/06/08 09:51:50 ERROR ApplicationMaster: User class threw exception: java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:156)
at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:556)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:559)
Caused by: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
at org.apache.hive.spark.client.rpc.Rpc$SaslClientHandler.dispose(Rpc.java:449)
at org.apache.hive.spark.client.rpc.SaslHandler.channelInactive(SaslHandler.java:90)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:208)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:194)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at org.apache.hive.spark.client.rpc.KryoMessageCodec.channelInactive(KryoMessageCodec.java:127)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:208)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:194)
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:208)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:194)
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:828)
at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:621)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
17/06/08 09:51:50 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.)
17/06/08 09:51:59 ERROR ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.