Member since
10-01-2018
5
Posts
0
Kudos Received
0
Solutions
10-29-2018
03:41 PM
@Soumitra Sulav Thank you for your help. And yes, sorry, other files such as libraries, Ambari data, user data, and tmp data were taking up that much space. Actually, some preprocessed data was stored there that we were unaware of. Sorry for the confusion, and thank you again.
10-29-2018
09:16 AM
We have uploaded 9 GB of data into HDFS on a 3-node cluster with the default block size of 128 MB. Since Hadoop replicates data across 3 nodes, uploading 9 GB of data should consume 9 GB x 3 = 27 GB in total. However, as the attached screenshot shows, it is taking 27 GB on each datanode. Can someone please help us understand what went wrong?
Labels:
- Apache Hadoop
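For reference, with the default replication factor of 3 the expected raw usage works out as follows (values taken from the post above):

```shell
# Expected HDFS usage for 9 GB of data with replication factor 3
# (values assumed from the post; adjust for your cluster)
DATA_GB=9
REPLICATION=3
NODES=3

TOTAL_GB=$(( DATA_GB * REPLICATION ))   # cluster-wide raw usage
PER_NODE_GB=$(( TOTAL_GB / NODES ))     # expected usage per datanode

echo "Total raw usage: ${TOTAL_GB} GB"  # prints 27
echo "Per datanode:    ${PER_NODE_GB} GB"  # prints 9
```

So 27 GB is the expected total across the cluster, roughly 9 GB per datanode, not 27 GB on each one. If each node really reports ~27 GB used, the extra space is typically non-DFS usage (logs, local libraries, tmp files); `hdfs dfsadmin -report` breaks down "DFS Used" versus "Non DFS Used" per datanode.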
10-03-2018
02:51 PM
None of the services are starting; startup fails at the ZooKeeper service. I changed the ownership from zookeeper:hadoop to root:root and then back to zookeeper:hadoop, but the result is still the same.
10-03-2018
05:55 AM
@Aditya Sirna There were some issues with the properties in hbase-site.xml; I fixed them by configuring the properties from Ambari itself. But another issue has popped up in Ambari: ZooKeeper keeps getting stopped. When I run it from the CLI with ./zkServer.sh start it runs successfully, but the state is not reflected in the Ambari service. I checked the log and the error is "Permission denied - FAILED TO WRITE PID FILE". The default ownership was zookeeper:hadoop; I changed it to root:root, but it is still the same. Do you know whether ZooKeeper writes to its dataDir as root or as the service (hdfs) user? Please suggest how to solve it. Thank you in advance, and thanks again for the solution above.
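A minimal sketch of the check behind the "FAILED TO WRITE PID FILE" error. Ambari starts ZooKeeper as the zookeeper service user, so the PID directory must be writable by zookeeper:hadoop rather than root:root. The path /var/run/zookeeper is an assumption; confirm the actual PID file location (e.g. ZOOPIDFILE in zookeeper-env.sh) on your cluster:

```shell
# Assumed PID directory; confirm the real location in zookeeper-env.sh.
PID_DIR=${PID_DIR:-/var/run/zookeeper}

# Ambari runs ZooKeeper as the 'zookeeper' user, so the PID directory
# must be writable by that user, not by root.
if [ -w "$PID_DIR" ]; then
  echo "PID dir writable"
else
  echo "PID dir not writable; as root run: chown -R zookeeper:hadoop $PID_DIR"
fi
```

This also explains why ./zkServer.sh start works from the CLI as root but fails under Ambari: root can write the PID file anywhere, while the zookeeper user cannot.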
10-01-2018
04:40 PM
When creating a table through the HBase shell, it hangs forever. I checked the log and found "namespace doesn't exist in meta but has a znode", followed by "Terminating Master". Please help.
Labels:
- Apache HBase
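A common cause of "has a znode but doesn't exist in meta" is stale HBase state left in ZooKeeper after an unclean shutdown. A hedged recovery sketch, not a definitive fix: the parent znode /hbase-unsecure is an assumption (check zookeeper.znode.parent in hbase-site.xml), and clearing it is destructive to in-flight state, so stop HBase completely first:

```shell
# Sketch only: clear HBase's ZooKeeper state so the master rebuilds it.
# /hbase-unsecure is an assumption; verify zookeeper.znode.parent first.
# Stop all HBase services from Ambari, then:
hbase zkcli rmr /hbase-unsecure
# Restart HBase; the master recreates its znodes from hbase:meta on startup.
```

If only the namespace znode is inconsistent, removing just that child znode instead of the whole parent is the more surgical option.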