Member since: 07-25-2020
Posts: 34
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 8981 | 02-15-2021 04:30 AM
03-16-2021
07:08 PM
@Shelton This is the output of ifconfig:

eth0      Link encap:Ethernet  HWaddr 08:00:27:B2:38:58
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18087 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13470 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:15917703 (15.1 MiB)  TX bytes:1478883 (1.4 MiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:43:DC:82
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2307 errors:0 dropped:0 overruns:0 frame:0
          TX packets:443 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:253418 (247.4 KiB)  TX bytes:167435 (163.5 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1617427 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1617427 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1110478487 (1.0 GiB)  TX bytes:1110478487 (1.0 GiB)

By the way, I'm working on the Cloudera Quickstart VM 5.13.0.
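For reference, this is a quick way to confirm what hostname the VM reports and which of these addresses it resolves to (a minimal sketch; quickstart.cloudera is the hostname discussed in this thread):

# Print the fully qualified hostname the VM reports
hostname -f

# Check how that name resolves locally (usually via /etc/hosts on the Quickstart VM)
getent hosts quickstart.cloudera

# Inspect the static mapping itself
cat /etc/hosts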
03-16-2021
10:04 AM
@Shelton Even with this it does not work. I double-checked it; my hostname is quickstart.cloudera.
03-16-2021
01:47 AM
@Shelton Thank you for your reply, but still nothing. I have this warning in the console:

WARN node.AbstractConfigurationProvider: No configuration found for this host:TwitterAgent
03-15-2021
03:37 AM
I'm trying to stream data from Twitter to HDFS with Flume on the Cloudera Quickstart VM 5.13. I don't get any errors, but the destination directory stays empty. This is my flume.conf file:

TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = Sp0ti7peTvFPDJSWMGk2ChMZM
TwitterAgent.sources.Twitter.consumerSecret = Cncmq5b6rKxWPb6qNSPkqpzIR7L3EcQ8WUCeG0gX4L9sPIzflN
TwitterAgent.sources.Twitter.accessToken = 1370386818609377287-IsLuhCt54wK4T2Ua9Cb0TC14rrs1c5
TwitterAgent.sources.Twitter.accessTokenSecret = AL7oYsVUQXz5KXtQSj0tu36R85MyvAsBjcgktdZD63Ou6
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://quickstart.cloudera:8020/user/flume/tweets/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
TwitterAgent.sinks.HDFS.hdfs.rollInterval = 600
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transitionCapacity = 100

I'm invoking this command to start the stream:

flume-ng agent --conf ./conf/ -f /home/cloudera/flume.conf -n TwitterAgent

Please let me know which part I'm getting wrong. Any suggestion is much appreciated. Thanks in advance.
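One thing worth double-checking (my own observation, not something confirmed in the thread): Flume only instantiates components that appear in the agent's active lists, and the file above declares the channel and sink lists but never the source list. A minimal sketch of all three declarations would be:

# Active component lists for the agent named TwitterAgent
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

Also, the memory-channel property is normally spelled transactionCapacity rather than transitionCapacity; a misspelled key is simply ignored, so the channel falls back to its default transaction capacity (which happens to be 100 anyway).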
Labels:
- Apache Flume
- Apache Hadoop
- HDFS
02-17-2021
03:09 AM
I'm working on Cloudera Quickstart VM 5.13 and I'm trying to start Hadoop, but I'm getting these errors:

Failed to start Hadoop proxyserver. Return value: 1 [FAILED]
Failed to start Hadoop secondarynamenode. Return value: 1 [FAILED]

In the log file for the secondarynamenode I have this exception:

HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:50090
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:959)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:895)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.startInfoServer(SecondaryNameNode.java:488)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:691)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:444)
    at sun.nio.ch.Net.bind(Net.java:436)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:954)
    ... 3 more
Failed to start secondary namenode
java.net.BindException: Port in use: 0.0.0.0:50090
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:959)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:895)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.startInfoServer(SecondaryNameNode.java:488)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:691)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:444)
    at sun.nio.ch.Net.bind(Net.java:436)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:954)
    ... 3 more
Exiting with status 1

And this for the proxyserver:

Service org.apache.hadoop.yarn.server.webproxy.WebAppProxy failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: yarn.web-proxy.address is not set so the proxy will not run.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: yarn.web-proxy.address is not set so the proxy will not run.
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxy.serviceInit(WebAppProxy.java:76)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.serviceInit(WebAppProxyServer.java:73)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.startServer(WebAppProxyServer.java:132)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.main(WebAppProxyServer.java:115)
Service org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: yarn.web-proxy.address is not set so the proxy will not run.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: yarn.web-proxy.address is not set so the proxy will not run.
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxy.serviceInit(WebAppProxy.java:76)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.serviceInit(WebAppProxyServer.java:73)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.startServer(WebAppProxyServer.java:132)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.main(WebAppProxyServer.java:115)
Stopping WebAppProxyServer metrics system...
WebAppProxyServer metrics system stopped.
WebAppProxyServer metrics system shutdown complete.
Exiting with status -1
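For anyone hitting the same errors, a minimal way to see which process is already bound to port 50090 (standard Linux tools, assumed to be present on the Quickstart VM):

# Show the process currently listening on the SecondaryNameNode HTTP port
sudo netstat -tlnp | grep 50090

# Or, equivalently
sudo lsof -i :50090

The proxyserver failure is a separate issue: the standalone WebAppProxyServer refuses to start unless yarn.web-proxy.address is set in yarn-site.xml; when the property is unset, the proxy is expected to run embedded inside the ResourceManager instead, which is the usual arrangement on the Quickstart VM.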
Labels:
- Apache Hadoop
- Apache YARN
- HDFS
02-15-2021
04:30 AM
I found my log4j.properties file (in my case ./workspace/training/conf/log4j.properties) and added these two lines, which solved the problem:

log4j.appender.RFA=org.apache.log4j.ConsoleAppender
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
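For completeness, the same appender with an explicit output pattern might look like this (the ConversionPattern line is my addition for illustration; PatternLayout falls back to a default pattern when it is omitted):

# Send the RFA appender to the console instead of a rolling file
log4j.appender.RFA=org.apache.log4j.ConsoleAppender
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
# Optional explicit message format (assumed, not part of the original fix)
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n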
02-13-2021
04:23 AM
I installed Cloudera Quickstart VM 5.13 on VirtualBox and I'm trying to start the Hadoop server with this command:

sudo service hadoop-hdfs-namenode start

But I'm getting these two errors:

log4j:ERROR Could not find value for key log4j.appender.RFA
log4j:ERROR Could not instantiate appender named "RFA"

This is the full output of the command:

[root@quickstart etc]# sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out
log4j:ERROR Could not find value for key log4j.appender.RFA
log4j:ERROR Could not instantiate appender named "RFA".
log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.server.namenode.NameNode).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Failed to start Hadoop namenode. Return value: 1 [FAILED]

Can anyone help me with this issue?
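A quick way to see where the RFA appender is referenced and whether it is defined anywhere (a sketch; /etc/hadoop/conf is the conventional config directory on the Quickstart VM, but the directory the service actually loads may differ):

# Look for references to and definitions of the RFA appender
grep -rn "log4j.appender.RFA" /etc/hadoop/conf/

# The rootLogger line shows which appender name the daemon expects
grep -n "rootLogger" /etc/hadoop/conf/log4j.properties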
Labels:
- HDFS
10-12-2020
06:51 AM
I installed the Hortonworks Sandbox via VirtualBox, and when I started Ambari every service was stopped, as you can see in this screenshot. I tried to start each of the services manually, but nothing happens when I click the start button. On top of that, I have many errors in my notifications section, as you can see in this folder. This is also what my Ambari agent logs look like. Any idea how I can resolve this?
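For context, this is roughly how the agent state and the log mentioned above can be checked from inside the sandbox (a sketch; /var/log/ambari-agent/ambari-agent.log is the default location on the HDP sandbox and may differ in other setups):

# Is the Ambari agent running and registered with the server?
sudo ambari-agent status

# Tail the agent log for registration or heartbeat errors
sudo tail -n 100 /var/log/ambari-agent/ambari-agent.log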
10-12-2020
06:10 AM
Here is what my Ambari agent logs look like. I don't understand any of it.
09-30-2020
08:53 AM
I installed the Hortonworks Sandbox via VirtualBox, and when I started Ambari every service was stopped, as you can see in this screenshot. I tried to start each of the services manually, but nothing happens when I click the start button. On top of that, I have many errors in my alerts section, as you can see in this folder. Any idea how I can resolve this?