Member since: 05-02-2017
Posts: 88
Kudos Received: 173
Solutions: 15
My Accepted Solutions
Views | Posted
---|---
6594 | 09-27-2017 04:21 PM
2829 | 08-17-2017 06:20 PM
2647 | 08-17-2017 05:18 PM
2928 | 08-11-2017 04:12 PM
4256 | 08-08-2017 12:43 AM
04-09-2020
01:26 PM
From your logs I see there are no healthy DataNodes available for it to use when replacing bad DataNodes. I also see several slow-sync errors, so you will have to tune your memstore's lower and upper limit configuration to reduce how frequently data is flushed, in order to get the best out of the available heap.
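As an illustrative sketch, the global memstore limits are set in hbase-site.xml; the property names below are the standard HBase 1.x ones, and the values shown are the shipped defaults, not tuned recommendations:

```xml
<!-- Illustrative values (HBase 1.x defaults); tune against your actual heap. -->
<property>
  <!-- Upper limit: max fraction of RegionServer heap all memstores may use -->
  <name>hbase.regionserver.global.memstore.size</name>
  <value>0.4</value>
</property>
<property>
  <!-- Lower limit: fraction of the upper limit at which forced flushing stops -->
  <name>hbase.regionserver.global.memstore.size.lower.limit</name>
  <value>0.95</value>
</property>
```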
12-01-2017
09:44 PM
@nshelke Thanks, that worked fine. I tried to configure it in HA mode, analogous to the HIVE service, but it didn't work out. Did you try it in HA mode as well?
08-07-2017
08:39 AM
Currently, it seems there is no option in distcp to do so. A file can be replaced by a file, and a directory by a directory, by passing the "-update" option to the command.
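As a hedged CLI sketch (the cluster addresses and paths below are hypothetical):

```
# -update copies only files that are missing at the target or whose
# size/checksum differ; a file replaces a file, a directory a directory.
hadoop distcp -update hdfs://nn1:8020/src/data hdfs://nn2:8020/dst/data
```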
08-07-2017
06:22 AM
You can use the "major_compact" command to run a major compaction on the table. In the HBase shell:

```
hbase(main):013:0> major_compact 'tablename'
```
08-04-2017
06:03 AM
1 Kudo
@nshelke When we perform "ambari-server sync-ldap", Ambari actually invokes the following Ambari API on the "127.0.0.1" address (not on the Ambari FQDN): http://127.0.0.1:8080/api/v1/ldap_sync_events

Python code snippet from "setupSecurity.py" that is used for the LDAP sync:

```
url = get_ambari_server_api_base(properties) + SERVER_API_LDAP_URL
admin_auth = base64.encodestring('%s:%s' % (admin_login, admin_password)).replace('\n', '')
request = urllib2.Request(url)
request.add_header('Authorization', 'Basic %s' % admin_auth)
request.add_header('X-Requested-By', 'ambari')
```

Because Ambari performs this sync with a Python script [1] rather than Java, the "-Dhttp.nonProxyHosts" proxy setting will not be used here: that property is Java specific, and Python modules will not respect it. So you should try setting the proxy at the OS/environment level instead, e.g. inside "~/.profile" or "~/.bash_profile":

```
http_proxy="http://proxy.com:8080"
no_proxy="127.0.0.1, localhost"
```

[1] https://github.com/apache/ambari/blob/release-2.5.1/ambari-server/src/main/python/ambari_server/setupSecurity.py#L315-L319
08-06-2017
02:59 PM
Hi @Radoslaw Klewin. Hive LLAP does not currently support hive.server2.enable.doAs=true, which I suspect is the cause of the error.
08-08-2017
12:52 AM
That's an incorrect approach: you don't need to add the XML files to the jars. As I mentioned before, you need to add the directories where those files are located, not the files themselves. That is how the Java classpath works: it accepts only jars and directories. So if you need a resource on the classpath, either package it in a jar file (as you did) or put its parent directory on the classpath. In Squirrel this can be done in the Extra classpath tab of the Driver configuration.
07-21-2017
11:55 AM
@Jay SenSharma I even tried setting this to 300, but no luck. I will enable ambari-agent debug mode and check the stack.
07-13-2017
09:13 AM
You can enable it by setting "hbase.replication.bulkload.enabled" to true in hbase-site.xml. For more information, see the release notes of https://issues.apache.org/jira/browse/HBASE-13153.
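For reference, the setting in hbase-site.xml looks like this (property name as introduced by HBASE-13153):

```xml
<property>
  <name>hbase.replication.bulkload.enabled</name>
  <value>true</value>
</property>
```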
06-28-2017
08:31 PM
2 Kudos
This is due to a Snappy version mismatch between Hadoop and Pig. You can resolve it by executing the command below before launching the Grunt shell:

```
export HADOOP_USER_CLASSPATH_FIRST=true
```

To avoid running this command every time before loading a Pig Grunt shell, you can streamline the process by adding the line to pig-env.sh and deploying that configuration file to the nodes.