Member since: 07-05-2016
Posts: 20
Kudos Received: 4
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 527 | 06-07-2017 06:28 PM |
09-28-2017 11:22 PM
1 Kudo
Is there a special command to list hidden files (files whose names start with a dot) in a given path in HDFS?
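A sketch of the kind of thing I am after (the path is a placeholder, and I have not confirmed that the glob behaves this way):

```bash
# Hypothetical attempt: a glob that matches only dot-prefixed entries;
# -d keeps matched directories from being expanded into their contents.
hdfs dfs -ls -d '/user/test/.*'
```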
Tags:
- Hadoop Core
- HDFS
Labels:
- Apache Hadoop
09-25-2017 08:47 PM
@Shahrukh Khan The snapshot operations APIs only support creating and deleting snapshots on existing HDFS folders. But is there any way to list the folders under a given path along with their snapshot status (enabled or disabled)?
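The closest workaround I can think of is a client-side cross-check; a rough sketch (the paths are placeholders, and it assumes `hdfs lsSnapshottableDir` prints each snapshottable path in its last column):

```bash
# Flag each directory under a path with whether it appears in the
# snapshottable list reported by the hdfs CLI.
snapdirs=$(hdfs lsSnapshottableDir | awk '{print $NF}')
hdfs dfs -ls /some/path | awk '/^d/ {print $NF}' | while read -r dir; do
  if printf '%s\n' "$snapdirs" | grep -qxF "$dir"; then
    echo "$dir SNAPSHOT_ENABLED"
  else
    echo "$dir SNAPSHOT_DISABLED"
  fi
done
```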
09-15-2017 09:02 PM
Please post the error messages that you see in the web console. In Google Chrome, right-click anywhere on the page and click "Inspect" to open the inspector; the Console tab shows the logged messages.
09-13-2017 10:21 PM
I know that `hdfs lsSnapshottableDir <path>` lists all the directories under `<path>` that are snapshot-enabled. But I want to get the list of files and folders in a `<path>` with their snapshot status shown as a flag. Is that possible?
Labels:
- Apache Hadoop
08-30-2017 07:15 PM
@madhu ravipati Typically, allocating the systems with the highest bandwidth to the DataNodes helps with fast file transfers, so you are right, in a way, to use the 1 GB/s machine for the Ambari server. The Ambari server exchanges small heartbeat packets with all of its agent nodes every 3 seconds, but that takes far less bandwidth than DataNode traffic. For more detail, see the article "Cluster Network Configuration Best Practices".
07-05-2017 05:57 PM
Could you please add a screenshot with the console open? The console can be opened via Right click -> Inspect Element.
06-07-2017 06:28 PM
Turn off iptables on CentOS and try again:

```bash
chkconfig iptables off
/etc/init.d/iptables stop
```
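Note: on CentOS 7 the SysV iptables service scripts are replaced by firewalld, so the equivalent there (assuming firewalld is the active firewall) would be:

```bash
systemctl stop firewalld
systemctl disable firewalld
```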
06-07-2017 06:06 PM
Hi @Jay SenSharma, I tried the "recursive=true" parameter, but it doesn't give a different response; it is identical to the response from the API without "recursive=true". My question was about finding a file or directory within a given directory. I have updated my question to make this clearer. Hope it helps. Thanks!
06-07-2017 06:03 PM
I have updated my question with an example. Please let me know if it is still unclear.
06-06-2017 10:12 PM
How do I search for a file name or directory name recursively in a given path in WebHDFS? For example, if I search for "hive" from the root directory "/", I expect a response with the directory or file names matching "hive" under "/", such as:

```
/hdp/apps/2.6.0.1-89/hive
/hdp/apps/2.6.0.1-89/hive/hive.tar.gz
```
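For comparison, this is possible outside WebHDFS via the FsShell, which is what I am trying to reproduce; a sketch (the NameNode host and port are placeholders):

```bash
# What I want, expressed with the FsShell instead of WebHDFS:
hdfs dfs -ls -R / | grep hive

# With WebHDFS, LISTSTATUS only returns one directory level at a time:
curl -s 'http://namenode:50070/webhdfs/v1/?op=LISTSTATUS'
```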
Labels:
- Apache Hadoop
06-03-2017 03:58 AM
Grafana SMTP: If you configure SMTP in the Grafana that ships with Ambari and enable email notifications, new users who are added to Grafana will receive a "New User" email notification. Beyond that, Grafana will not send email notifications for alerts, as that feature is only supported in Grafana v4.0 and above.

Ambari SMTP: Configuring SMTP for Ambari lets you receive email notifications on alert status changes for the configured groups and severity levels. For example, if two alerts go CRITICAL, Ambari sends one email saying "Alert A is CRITICAL and Alert B is CRITICAL", and it will not send another notification until the status changes again. More details on configuring SMTP for Ambari are available here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-operations/content/creating_or_editing_notifications.html
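As a supplement, a hedged sketch of creating an email notification target through Ambari's alert_targets REST endpoint; the host, credentials, recipients, and property keys here are assumptions from memory and should be verified against the linked documentation:

```bash
# Create a global EMAIL alert target (all values are placeholders)
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  'http://ambari-host:8080/api/v1/alert_targets' \
  -d '{
    "AlertTarget": {
      "name": "ops-email",
      "notification_type": "EMAIL",
      "global": true,
      "properties": {
        "ambari.dispatch.recipients": ["ops@example.com"],
        "mail.smtp.host": "smtp.example.com",
        "mail.smtp.port": "25",
        "mail.smtp.from": "ambari@example.com"
      }
    }
  }'
```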
03-28-2017 03:28 AM
Sounds like I am hitting OutOfMemoryError:

```
2017-03-27 11:32:45,199 ERROR Error for /webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz
java.lang.OutOfMemoryError: Java heap space
    at java.lang.String.toLowerCase(String.java:2590)
    at org.apache.hadoop.util.StringUtils.toLowerCase(StringUtils.java:1040)
    at org.apache.hadoop.hdfs.web.AuthFilter.toLowerCase(AuthFilter.java:102)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:82)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

2017-03-27 11:36:15,852 ERROR Error for /webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOfRange(Arrays.java:3664)
    at java.lang.String.<init>(String.java:207)
    at java.lang.String.toLowerCase(String.java:2647)
    at org.apache.hadoop.util.StringUtils.toLowerCase(StringUtils.java:1040)
    at org.apache.hadoop.hdfs.web.AuthFilter.toLowerCase(AuthFilter.java:102)
    at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:82)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
```
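Since the error is thrown inside the NameNode's embedded web server, my working guess is that the NameNode heap is too small. A hedged sketch of the knob involved (the value is a placeholder; on an Ambari-managed cluster this is normally set through the HDFS configs rather than edited by hand):

```bash
# hadoop-env.sh: raise the NameNode heap, then restart the NameNode.
export HADOOP_NAMENODE_OPTS="-Xmx4g ${HADOOP_NAMENODE_OPTS}"
```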
03-27-2017 06:43 PM
I deployed a cluster of 3 nodes with Ambari 2.4.0.1 and am unable to get the History Server started. Error:

```
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 190, in <module>
HistoryServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 101, in start
host_sys_prepped=params.host_sys_prepped)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py", line 257, in copy_to_hdfs
replace_existing_files=replace_existing_files,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 459, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 456, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 255, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 269, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 322, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 192, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz 'http://test-1.c.i.internal:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=500.
```
Labels:
- Apache Ambari
03-22-2017 08:42 PM
Yes, Ambari supports the same user across different browser sessions, but changes made in one session may not be reflected immediately in the other until the page is reloaded.
03-22-2017 07:09 PM
1 Kudo
Hi @w hao, upgrading jQuery to 1.9.1 will break Ambari, since the version of Ember used by Ambari requires jQuery 1.7. But if you want to know how to upgrade any JavaScript library used by Ambari, here is the way to do it:

1. Download the source code from git for the version you want: https://github.com/apache/ambari/archive/release-2.2.2.zip
2. Make sure you have nodejs, npm, and brunch installed to build the frontend code.
3. cd into ambari-web/vendor/scripts and replace the JavaScript library with the version you want.
4. Update ambari-web/config.coffee to reflect the modified library name and version.
5. Run "npm install" from ambari-web.
6. Run "brunch b" from ambari-web.

This should create target files in ambari-web/public. Copy vendor.js from ambari-web/public/javascripts to the Ambari Server node, replacing vendor.js in /usr/lib/ambari-server/web/javascripts/. The steps are condensed into commands below.
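Condensed, the build steps look roughly like this (paths as in the list above; I would back up the original vendor.js first):

```bash
# Rebuild the Ambari web frontend after swapping the vendored library;
# run from the root of the ambari source checkout.
cd ambari-web
npm install        # install the build dependencies
brunch b           # compile; output lands in ambari-web/public
# Replace the served bundle on the Ambari Server node:
cp public/javascripts/vendor.js /usr/lib/ambari-server/web/javascripts/vendor.js
```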