Member since: 06-08-2016
Posts: 13
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2506 | 06-23-2016 12:26 AM
 | 5808 | 06-20-2016 01:10 AM
10-28-2016
09:39 AM
Hi Craig. Apologies for my lateness: I solved the issue by following the breadcrumbs left in the logs. In the end it was the namenode which couldn't start, because something went wrong during the last partial save of its metadata (I don't remember the exact details, but the files are somewhere in the namenode folder). So I deleted the last checkpoint and it restarted from the second-to-last one, starting up all right. Ref: http://stackoverflow.com/questions/37962314/hue-cannot-connect-error-on-connection-refused-hbase-cannot-choose-master-hdf/37962501#37962501
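The checkpoint cleanup described above can be sketched roughly as follows. Everything here is an assumption: the directory layout depends on `dfs.namenode.name.dir`, `retire_latest_fsimage` is a hypothetical helper, and the `fsimage_*` glob is simplified (on a real namenode it would also match the `.md5` companion files). Back up the whole metadata directory before touching anything.

```python
# Sketch only: move the newest fsimage checkpoint aside so the namenode
# falls back to the previous checkpoint plus the edit log on restart.
import shutil
from pathlib import Path

def retire_latest_fsimage(current_dir, backup_dir):
    """Move the newest fsimage checkpoint out of current_dir, keeping the rest."""
    # Zero-padded transaction IDs mean lexicographic sort == numeric sort.
    images = sorted(Path(current_dir).glob("fsimage_*"))
    if len(images) < 2:
        # With a single checkpoint there is nothing to fall back to.
        raise RuntimeError("only one fsimage checkpoint found: do not delete it")
    latest = images[-1]
    shutil.move(str(latest), str(Path(backup_dir) / latest.name))
    return latest.name
```

After moving the file aside, restart the namenode service and let HDFS replay the edits on top of the older checkpoint.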
06-23-2016
12:26 AM
The problem was the last FSImage file, which was corrupted and prevented the HDFS namenode service from starting. To check whether you have the same issue:
1. List the services and examine which ones are failing. The HDFS namenode will probably be there with a FAILED status.
2. Look for the open ports: 50070 (or the one you put in the conf file inside /etc/hdfs/conf... ) won't be open, so all the services that connect to the namenode get a "Connection Refused" error, and HBase gives a "znode == null" error.
3. Look at the namenode logs under /var/logs/hdfs/ *namenode*.out and find where the FSImage files are.
4. Go there: if you have only one FSImage file, as far as I know, you're out of luck. If you have more than one: delete the last one, restart the service and let HDFS rebuild the correct FSImage from the edits file.
5. Either restart the machine or restart all of the services as specified in the Cloudera docs.
Hope this will save you time.
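The port check in step 2 can be scripted rather than done by eye. A minimal sketch, assuming the default namenode web UI port (50070 here; use whatever your hdfs-site.xml configures):

```python
# Returns True if something is accepting TCP connections on host:port.
import socket

def port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers both "connection refused" and timeouts.
        return False

# e.g. port_open("quickstart.cloudera", 50070)
```

If this returns False while the namenode service claims to be running, the logs in step 3 are the next stop.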
06-21-2016
12:27 AM
Resolved: the FSImage file was corrupted, so no namenode came up, and thus it was impossible to access both Hue and HBase. I will keep your suggestion in mind, though. Thank you very much.
06-20-2016
03:41 AM
I have a similar problem, but the permissions in my config file are set with a * . Which permissions do you mean?
06-20-2016
03:40 AM
How did you solve your problem? Port 50070 doesn't appear after a netstat -tulpn command, so I have problems all over Cloudera. Restarting the namenode didn't solve it, and neither did putting an exception rule in the iptables file: -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
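For what it's worth, the rule quoted above opens port 80, not the namenode web port. The analogous rule for 50070 would look like this (a sketch only; note that if nothing is listening on the port in the first place, no firewall rule will make it show up in netstat):

```
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 50070 -j ACCEPT
```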
06-20-2016
03:38 AM
As another user said: port 50070 has stopped working. What can people who have this problem do? I see a worrying lack of support in the questions I have browsed regarding this issue.
06-20-2016
03:13 AM
1 Kudo
The response from HttpFS. Please help me! * About to connect() to localhost port 14000 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 14000 (#0)
> GET /webhdfs/v1/user/user.name=cloudera&op=GETFILESTATUS HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.13.6.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:14000
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
HTTP/1.1 401 Unauthorized
< Server: Apache-Coyote/1.1
Server: Apache-Coyote/1.1
< WWW-Authenticate: PseudoAuth
WWW-Authenticate: PseudoAuth
< Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
Set-Cookie: hadoop.auth=; Path=/; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly
< Content-Type: text/html;charset=utf-8
Content-Type: text/html;charset=utf-8
< Content-Length: 997
Content-Length: 997
< Date: Mon, 20 Jun 2016 09:53:43 GMT
Date: Mon, 20 Jun 2016 09:53:43 GMT
<
* Connection #0 to host localhost left intact
* Closing connection #0
<html><head><title>Apache Tomcat/6.0.44 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 401 - Authentication required</h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u>Authentication required</u></p><p><b>description</b> <u>This request requires HTTP authentication.</u></p><HR size="1" noshade="noshade"><h3>Ap
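One possible culprit visible in the trace above: the request line reads `GET /webhdfs/v1/user/user.name=cloudera&op=GETFILESTATUS` with no `?`, so `user.name` ends up inside the path and the pseudo authenticator never sees a user, which would explain the 401 from PseudoAuth. A quick check with the standard library (the corrected URL is an assumption, not from the original post):

```python
# Compare how the two URLs parse: without "?", there is no query string at all.
from urllib.parse import urlsplit, parse_qs

bad = "http://localhost:14000/webhdfs/v1/user/user.name=cloudera&op=GETFILESTATUS"
good = "http://localhost:14000/webhdfs/v1/user?user.name=cloudera&op=GETFILESTATUS"

print(parse_qs(urlsplit(bad).query))   # {} -> server sees no user.name, hence 401
print(parse_qs(urlsplit(good).query))  # {'user.name': ['cloudera'], 'op': ['GETFILESTATUS']}
```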
06-20-2016
03:08 AM
1 Kudo
Here is the -v of the curl you gave: [root@quickstart cloudera]# curl -i -v "http://quickstart.cloudera:50070/webhdfs/v1/user/user.name=cloudera&op=GETFILESTATUS"
* About to connect() to quickstart.cloudera port 50070 (#0)
* Trying 10.0.2.15... Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
06-20-2016
01:10 AM
I will reply to myself, as the support here is the most terrible I've seen in years. The problem was a memory one on my laptop: since I had reserved 5GB out of 8GB total, the VM simply stopped loading once 4GB were already occupied. The steps I took anyway, which may have done the trick:
- reinstalled Cloudera Quickstart 5.5 and then mounted the VHD
- stopped some services from starting (the ones I don't use: Impala, Hive, Oozie)
- waited. A lot.
06-20-2016
12:36 AM
I am developing and testing on Cloudera Quickstart, and have been doing so for about a month. On Friday I was going to try some put methods on HBase, but browsing from Hue I got these errors:

#Errors accessing Hue
hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1
Failed to access filesystem root
Hive Editor: Failed to access Hive warehouse: /user/hive/warehouse
Impala Editor: No available Impalad to send queries to.

#Error accessing HBase
Api Error: java.net.SocketTimeoutException: callTimeout=0, callDuration=203: at [stack trace, I am omitting it] at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null at

#From HBase log:
2016-06-19 12:06:39,306 INFO [quickstart:60000.activeMasterManager] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2016-06-19 12:06:39,621 INFO [master/quickstart.cloudera/127.0.0.1:60000] regionserver.HRegionServer: ClusterId : 36bbf1ec-2337-4143-b2fb-fd933333cd8f
2016-06-19 12:06:40,510 FATAL [quickstart:60000.activeMasterManager] master.HMaster: Failed to become active master java.net.ConnectException: Call From quickstart.cloudera/127.0.0.1 to quickstart.cloudera:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) [Stack trace again]