Member since: 08-17-2017
Posts: 54
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 14219 | 08-20-2017 11:20 AM |
08-18-2017
09:56 AM
No, it's not running as root.
08-18-2017
09:31 AM
Hi Team, any update on this? Best regards, ~Kishore
08-18-2017
06:13 AM
Yes, I did restart; it remains the same.

1. Which version of Ambari is it? : Version 2.5.1.0
2. Is this the default File View instance, or did you create one? If it is a custom File View instance, please share the details of the view configuration. : It's the default.
3. By any chance, do you notice any additional WARN / ERROR in the Ambari server log? : Error in log:

18 Aug 2017 06:03:58,495 ERROR [ambari-client-thread-28] ContainerResponse:419 - The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
org.apache.ambari.server.view.IllegalClusterException: Failed to get cluster information associated with this view instance
    at org.apache.ambari.server.view.ViewRegistry.getCluster(ViewRegistry.java:931)
    at org.apache.ambari.server.view.ViewContextImpl.getCluster(ViewContextImpl.java:370)
    at org.apache.ambari.server.view.ViewContextImpl.getPropertyValues(ViewContextImpl.java:437)
    at org.apache.ambari.server.view.ViewContextImpl.getProperties(ViewContextImpl.java:171)
    at org.apache.ambari.view.commons.hdfs.ViewPropertyHelper.getViewConfigs(ViewPropertyHelper.java:36)
    at org.apache.ambari.view.filebrowser.FileBrowserService.getViewConfigs(FileBrowserService.java:52)
    at org.apache.ambari.view.filebrowser.FileBrowserService.help(FileBrowserService.java:80)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: org.apache.ambari.server.ClusterNotFoundException: Cluster not found, clusterId=2
    at org.apache.ambari.server.state.cluster.ClustersImpl.getCluster(ClustersImpl.java:290)
    at org.apache.ambari.server.view.ViewRegistry.getCluster(ViewRegistry.java:928)
    ... 102 more
08-18-2017
05:28 AM
Hi Team, after installing HDP using Ambari, when I click on "File View" from the menu, I get "Issues detected: Service 'hdfs' check failed" with the following exception: java.lang.NullPointerException
at org.apache.ambari.server.view.ClusterImpl.getConfigByType(ClusterImpl.java:71)
at org.apache.ambari.view.utils.hdfs.ConfigurationBuilder.copyPropertiesBySite(ConfigurationBuilder.java:203)
at org.apache.ambari.view.utils.hdfs.ConfigurationBuilder.buildConfig(ConfigurationBuilder.java:311)
at org.apache.ambari.view.utils.hdfs.HdfsApi.<init>(HdfsApi.java:68)
at org.apache.ambari.view.utils.hdfs.HdfsUtil.getHdfsApi(HdfsUtil.java:150)
at org.apache.ambari.view.utils.hdfs.HdfsUtil.connectToHDFSApi(HdfsUtil.java:138)
at org.apache.ambari.view.commons.hdfs.HdfsService.hdfsSmokeTest(HdfsService.java:145)
at org.apache.ambari.view.filebrowser.HelpService.hdfsStatus(HelpService.java:95)
Labels:
- Apache Hadoop
08-18-2017
05:03 AM
1 Kudo
@Geoffrey Shelton Okot Thanks for the info. With that, I found some free space was available to extend. Just as a side note, I used the following commands:

vgdisplay vg00 | grep Free --> tells how much free space is available in the volume group
lvextend -L +3G /dev/mapper/vg00-usr --> extends the logical volume backing the mount point
resize2fs /dev/mapper/vg00-usr --> resizes the /usr filesystem to use the new space
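For reference, the sequence above can be sketched as a small dry-run script. The volume group `vg00` and logical volume `vg00-usr` are the names from this post; substitute your own. It only prints the steps, since the real commands require root and an actual LVM setup:

```shell
#!/bin/sh
# Dry-run sketch of growing an LVM-backed /usr (names vg00 / vg00-usr
# come from the post above; adjust for your system).
# Run each printed command by hand, as root, once you have verified
# the volume group really has free extents.
STEPS='vgdisplay vg00 | grep Free
lvextend -L +3G /dev/mapper/vg00-usr
resize2fs /dev/mapper/vg00-usr'
printf '%s\n' "$STEPS"
```

`vgdisplay` confirms the volume group has free extents before `lvextend` grows the logical volume; `resize2fs` then grows the ext2/3/4 filesystem to fill the enlarged volume (an XFS filesystem would use `xfs_growfs` instead).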
08-17-2017
05:04 PM
If I need to clean the previous installation from the failed node, how do I do that? Please help me with the steps.
08-17-2017
05:03 PM
Team, please reply to my post. This is blocking us from proceeding with the installation.
08-17-2017
09:55 AM
@Jay SenSharma I have seen the disk space requirements. I am good with that on the other mount points; I have more than 100 GB across two of them. The issue is with "/usr": it's the default space we get, and it can't be extended. Here is the output of the command. Before the install, there was 1.5 GB free on that mount point. It did not fail on the first attempt for that node.
08-17-2017
09:27 AM
Hi Team, I have 4 nodes in my cluster. While doing the cluster setup, I ran into an issue on one node; the remaining 3 were fine. After fixing the issue on the problem node, I clicked "Retry", which started the configuration again on all 4 nodes. With that, new files started being copied again onto all the other nodes, which ended with a disk space issue on the "/usr" mount while installing the "Metrics Collector". Before I fixed the error on the problem node, this step had succeeded.

I had 1.5 GB of free space on that node before the installation; now it's 300 MB. What I want to know is why the setup is creating the files again instead of resuming from the previous point of the installation. I have a constraint on adding more space to that mount point.

Please advise how I should proceed so the disk usage goes back to what it was earlier (1.5 GB free), and how I can continue from here. Best regards, ~Kishore
Labels:
- Apache Ambari
08-17-2017
08:59 AM
Hi Team, it works after running "yum clean all". Now it leads to a new issue. I have 4 nodes in my cluster; before this error, 3 were done (with some warnings). When I clicked "Retry" to fix this MySQL issue, it tried to reinstall on all 4. With that, one node ran out of disk space on "/usr". I have a constraint on extending the space on that mount. I am not sure why "Retry" copies the same files again instead of reusing the older ones and copying only the new files. Can you please help me handle this for the node that is hitting the space issue on the "Metrics Collector"? @Rajendra Manjunath Best regards, ~Kishore