Member since: 06-15-2016
Posts: 9
Kudos Received: 1
Solutions: 0
02-25-2019
01:53 PM
Hi, recently we ran into the same problem after a few years of successful imports. Did you maybe find a way to overcome it?
06-20-2016
08:05 AM
1 Kudo
Thank you @Rob Ketcherside for the useful information! Yes, I do have green lights on HDFS, YARN and all other services in Ambari. If I restart HDP, it comes up reasonably fast.
I will also ask the Isilon administrator to send me the required reports from Isilon. At the moment everything looks OK: sqoop import is working, and Hive queries work too (they are very, very slow, but that is another problem). My concern was to set up reliable monitoring, and now I have enough data thanks to you guys. All the best
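For completeness, this is a minimal sketch of the kind of service-state check I have in mind, using the Ambari REST API (the host, port, credentials and cluster name below are placeholders, not my actual setup):

# Ask Ambari for the state of every service in the cluster
curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  'http://ambari-host:8080/api/v1/clusters/MYCLUSTER/services?fields=ServiceInfo/state'
# Each service should report "state" : "STARTED"; anything else is worth alerting on.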
06-16-2016
09:25 AM
Hello @Rob Ketcherside, @emaxwell, thanks a lot for reminding me about that. It looks like I still think too much in an HDFS way here. You see, first I deployed HDP on VMs and ran it in production for a few months. After a few months I had about 300 million records in Hive and had to move everything to new infrastructure, HDP on Isilon. The deployment and migration were successful and I'm now loading new data every day...
It was easy to check whether HDFS was healthy before; I could check it from Ambari or the command line. But now with Isilon I'm not so sure about it. Ambari is not showing me any useful info about it, and from the shell I could not find any real info either. I can run hdfs dfsadmin -report, but I'm not sure that is the right way now.
There is about 350GB of data in HDFS at the moment, but the report shows a different number:

hdfs dfsadmin -report
Configured Capacity: 356881289379840 (324.58 TB)
Present Capacity: 356881289379840 (324.58 TB)
DFS Remaining: 350800989585408 (319.05 TB)
DFS Used: 6080299794432 (5.53 TB)
DFS Used%: 1.70%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

How do you cope with that? How can you really monitor HDP and HDFS on Isilon to be sure everything is running OK?
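The quick sanity check I could script looks something like this (just a sketch; the interpretation that the Isilon-reported figures cover the whole OneFS pool rather than only my dataset is my assumption):

# Actual size of the data I loaded, as seen through the HDFS client
hdfs dfs -du -s -h /

# Capacity/usage figures as the NameNode endpoint (here: Isilon) reports them
hdfs dfsadmin -report | grep -E 'DFS Used|DFS Remaining|Missing blocks|corrupt'
# On Isilon these numbers presumably reflect the whole OneFS pool (protection overhead,
# non-HDFS data), which would explain 5.53 TB "DFS Used" vs ~350 GB of actual data.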
06-15-2016
01:04 PM
Hello, I have a working HDP 2.3 on EMC Isilon. I'm able to write to HDFS and the "hdfs dfs" set of commands is working OK. But I have a problem running the "hdfs fsck /" command; I get an error. Maybe some of you guys have an idea about it? I would appreciate it.

[hdfs@hostname ~]$ hdfs fsck /
Connecting to namenode via http://xx.yy.ad.si:8082/fsck?ugi=hdfs&path=%2F
Exception in thread "main" java.io.IOException: Server returned HTTP response code: 403 for URL: http://xx.yy.ad.si:8082/fsck?ugi=hdfs&path=%2F
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1839)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
    at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:339)
    at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:73)
    at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:151)
    at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:148)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:379)
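For what it's worth, the same 403 can be reproduced outside of the fsck tool by calling the servlet URL from the error message directly (just a diagnostic sketch; the host and port are the ones from my error output):

# fsck is essentially an HTTP call to the NameNode's /fsck servlet; this shows the raw response
curl -i 'http://xx.yy.ad.si:8082/fsck?ugi=hdfs&path=%2F'
# A 403 here as well would point at the Isilon HDFS endpoint refusing the fsck servlet,
# rather than at anything the DFSck client itself is doing wrong.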
Labels:
- Hortonworks Data Platform (HDP)