Member since: 12-10-2015
Posts: 48
Kudos Received: 27
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 379 | 04-27-2016 07:48 AM
 | 988 | 02-04-2016 03:27 PM
01-25-2019
04:28 PM
Hello, we have a Phoenix table with about 44M records. We're creating a secondary index on it, but after about 3 days the index table in HBase holds only ~340k records. At this rate, we expect the index to become active in a month or more. Is there a way to speed up the index creation? We're using HDP 2.6.4 with HBase 1.1.2 and Phoenix 4.7. Thank you
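For reference, a common way to speed this up (not from this thread; a sketch assuming Phoenix's bundled IndexTool, with hypothetical table, column and index names) is to register the index asynchronously and let a MapReduce job populate it instead of the single client:

```
# 1) register the index without populating it (run in sqlline.py or any Phoenix client):
#      CREATE INDEX MY_IDX ON MY_TABLE (MY_COL) ASYNC;
# 2) build it with the IndexTool MapReduce job, which runs on YARN instead of the client:
hbase org.apache.phoenix.mapreduce.index.IndexTool \
  --data-table MY_TABLE --index-table MY_IDX \
  --output-path /tmp/MY_IDX_HFILES
```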
11-30-2018
02:36 PM
1 Kudo
Try setting the property hive.compactor.worker.threads on each metastore you have.
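For example (a minimal sketch; the value is only illustrative), set it in hive-site for each metastore host's config group and restart the metastores:

```
hive.compactor.worker.threads=5
```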
06-15-2018
07:06 AM
1 Kudo
Hi all, I installed and configured Cloudbreak 1.5.0 (TP) on an Azure virtual machine, and I configured it to authenticate users against Active Directory, mapping AD groups to UAA roles. My question is whether it is possible to configure role-based authorization inside the Cloudbreak UI; in particular, I need some users to be able to see (at least in read-only mode) resources like clusters, blueprints, etc. created by other users. Is that possible? Thank you, Davide
05-22-2018
04:15 PM
Hello @Jonas Straub,
sorry for reopening this old topic, but I'm getting the same error.
In my case the cluster is Kerberized. I'm using HDP 2.6.0.3 with Ambari 2.5.0.3 and Solr 5.5 installed via the Mpack. Solr authentication via SPNEGO works fine, but when I try to enable the Ranger plugin for Solr I get strange behavior: if I configure log4j at INFO I get a 403 error (even though the Ranger policies are configured correctly and I can see the Ranger cache updated locally on the Solr node), while if I set log4j to DEBUG I get a 500 error from the Solr server. Looking at the source code of Solr and the Ranger Solr plugin, it seems the plugin is unable to obtain the AuthorizationContext; in fact I can see these lines in the log:
2018-05-22 13:03:17,703 [qtp537548559-18 - /solr/] DEBUG [ ] org.apache.solr.servlet.HttpSolrCall (HttpSolrCall.java:316) - no handler or core retrieved for /, follow through...
2018-05-22 13:03:17,703 [qtp537548559-18 - /solr/] DEBUG [ ] org.apache.solr.servlet.HttpSolrCall (HttpSolrCall.java:499) - PkiAuthenticationPlugin says authorization required : true
2018-05-22 13:03:17,704 [qtp537548559-18 - /solr/] DEBUG [ ] org.apache.solr.servlet.HttpSolrCall (HttpSolrCall.java:421) - AuthorizationContext : [FAILED toString()]
....
2018-05-22 13:03:17,717 [qtp537548559-18 - /solr/] ERROR [ ] org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer (RangerSolrAuthorizer.java:288) - Error getting request context!!!
java.lang.NullPointerException
at org.apache.solr.servlet.HttpSolrCall$2.getParams(HttpSolrCall.java:953)
at org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer.logAuthorizationConext(RangerSolrAuthorizer.java:279)
at org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer.authorize(RangerSolrAuthorizer.java:165)
at org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer.authorize(RangerSolrAuthorizer.java:128)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:422)
Since this version of Ambari does not support the Ranger Solr plugin, I had to manually edit the setup_solr_kerberos_auth.py script, adding "authorization":{"class":"org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer"}, so my current security.json file on ZooKeeper is the following:
{"authentication":{"class": "org.apache.solr.security.KerberosPlugin"},"authorization":{"class":"org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer"}}
Apart from that, I followed the instructions provided here, and the repo on Ranger is working. Is this a missing configuration or maybe a bug? The exact versions I'm using are the following:
ranger-solr-plugin-0.7.0.2.6.0.3-8.el6.noarch
ranger_2_6_0_3_8-solr-plugin-0.7.0.2.6.0.3-8.x86_64
lucidworks-hdpsearch-2.6-100.noarch
Thanks, Davide
02-01-2018
01:32 PM
1 Kudo
Hi all, I have an HDP Ambari-managed cluster with the HDF-3.0.1 management pack installed. The Ambari release is 2.6.1. I have successfully upgraded the HDF management pack to version HDF-3.1.0, but when I click the "Install on... <cluster name>" button in the Admin view I am redirected to the "Manage Versions" tab in Ambari, where I can only see the existing HDP version (2.6.4). How can I perform the upgrade from HDF-3.0.1 to HDF-3.1.0? Thank you,
01-11-2018
11:15 AM
Hi all, I'm using HDP 2.4.2 and I'm getting some issues running simple tests like TestDFSIO and TeraGen. If I execute the test with a high number of containers, some of them are killed after 300 seconds, according to the mapreduce.task.timeout property. For example, running this command: hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen -Dmapreduce.job.queuename=ETL_APP -Dmapred.map.tasks=500 10000000000 /benchmarks/teragen_1T results in 64 map tasks killed by timeout. On YARN I have 6.68 TB of RAM and 1330 vCores, and only a few other applications are running, so no containers of the TeraGen job are in a waiting state (the minimum size for a container is 2048 MB). The strange thing is that, if I look at the Application Master UI, I can see that those containers are in state "RUNNING", but the status column is "NEW", until they are killed by the NodeManager. The attached file nm-container-id-container-e244-1515410092608-62086.txt is the NodeManager log for one of those containers. You can see that the container request arrived at 2018-01-11 11:24:41,445 but the container was killed at 11:30:13,532, without logging anything else. The next attempt ran fine in a few minutes. I then did another test, setting the property mapreduce.task.timeout to 600000 milliseconds, and all containers started before the timeout, so the problem is not the container itself, but how long it takes to start. Does anyone know why some containers take a long time to start? Thank you very much, Davide
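For reference, the follow-up test described above corresponds to something like the following (queue and sizes as in the original command, timeout value as mentioned in the post; the output path is hypothetical since TeraGen refuses to overwrite an existing directory):

```
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen \
  -Dmapreduce.job.queuename=ETL_APP \
  -Dmapred.map.tasks=500 \
  -Dmapreduce.task.timeout=600000 \
  10000000000 /benchmarks/teragen_1T_timeout_test
```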
12-18-2017
11:56 AM
I solved the issue myself by increasing nifi.cluster.node.connection.timeout and nifi.cluster.node.read.timeout to 15 seconds.
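For reference, the corresponding entries in nifi.properties on each node would look like this (a sketch; NiFi expects the time unit in the value):

```
nifi.cluster.node.connection.timeout=15 secs
nifi.cluster.node.read.timeout=15 secs
```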
12-15-2017
02:24 PM
Hi all. I'm trying to import a (Kylo) template into NiFi, but I'm having some issues. The template import itself goes fine, but when I try to deploy the template to a process group the NiFi UI errors out with the following message: com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
In nifi-app.log I can see these messages:
2017-12-15 15:18:11,005 WARN [Replicate Request Thread-5] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request POST /nifi-api/process-groups/2f9f6866-13e0-1bc2-ffff-ffffccd0318d/template-instance to tst-hdfsandbox.pochdp.csi.it:9090 due to {}
com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
at com.sun.jersey.api.client.Client.handle(Client.java:652)
at com.sun.jersey.api.client.filter.GZIPContentEncodingFilter.handle(GZIPContentEncodingFilter.java:123)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:560)
at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:630)
at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:832)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:253)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
... 12 common frames omitted
2017-12-15 15:18:11,005 WARN [Replicate Request Thread-5] o.a.n.c.c.h.r.ThreadPoolRequestReplicator com.sun.jersey.api.client.ClientHandlerException: java.net.SocketTimeoutException: Read timed out (followed by the same stack trace as above, 12 common frames omitted)
2017-12-15 15:18:11,006 WARN [Replicate Request Thread-5] o.a.n.c.c.node.NodeClusterCoordinator All nodes failed to process URI POST /nifi-api/process-groups/2f9f6866-13e0-1bc2-ffff-ffffccd0318d/template-instance. As a result, no node will be disconnected from cluster
Has anyone already faced this problem? Thank you,
03-24-2017
01:49 PM
Thank you very much for your answer! But what about mysql 5.6? Is it still supported in HDP 2.5.3? Thanks!
03-24-2017
10:18 AM
Hi all, I see in the HDP 2.5.3 documentation that MySQL 5.6 is no longer supported as the Hive or Oozie metastore backend (hdp docs), but it was in the previous release (hdp docs). Is this an error in the documentation? What if I need to upgrade from HDP 2.4.x to HDP 2.5.x? Also, is MySQL 5.7 supported? Thank you very much
02-16-2017
09:00 AM
Hi all, I have to execute a list of ALTER LOCATION statements on a Hive table with two thousand partitions. To do that, I create a text file with the list of ALTER statements (one per partition) and submit it to beeline, but it takes a couple of hours because beeline waits for each statement to complete. Is there a way to tell beeline to submit all the statements to HiveServer2 at once? Thank you
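Not from this thread, but one workaround sketch (assuming one ALTER statement per line in alter_locations.sql; the JDBC URL is hypothetical) is to fan the statements out over several concurrent beeline sessions instead of one serial session:

```
# run up to 8 ALTER ... SET LOCATION statements concurrently, one beeline session each
URL="jdbc:hive2://hs2-host:10000/default"
xargs -d '\n' -P 8 -I{} beeline -u "$URL" -e "{}" < alter_locations.sql
```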
12-15-2016
10:52 AM
Yes, I know, but my question was whether there is a plan to support it on RHEL 7. Thank you, D.
12-07-2016
04:44 PM
Hi all, does anyone know when RHEL/CentOS 7 will be supported by Apache Hawq (HDB)? Thank you!
10-12-2016
12:14 PM
1 Kudo
Hi all, we're having some issues with a Storm topology (HDP 2.5) that uses a Kafka spout. From the Storm UI we see the following exception:
java.lang.NoSuchMethodError: org.apache.curator.utils.ZKPaths.mkdirs(Lorg/apache/zookeeper/ZooKeeper;Ljava/lang/String;ZLorg/apache/curator/utils/InternalACLProvider;Z)V
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:727)
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:704)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:701)
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:477)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:467)
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44)
at org.apache.storm.kafka.ZkState.writeBytes(ZkState.java:76)
at org.apache.storm.kafka.ZkState.writeJSON(ZkState.java:70)
at org.apache.storm.kafka.PartitionManager.commit(PartitionManager.java:312)
at org.apache.storm.kafka.KafkaSpout.commit(KafkaSpout.java:236)
at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:156)
at org.apache.storm.daemon.executor$fn__6503$fn__6518$fn__6549.invoke(executor.clj:651)
at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
We didn't have this issue on HDP 2.4. Does anyone know why it happens? Thank you very much
10-05-2016
01:57 PM
Hi all, I'm experiencing high CPU usage from a MySQL instance used as the metastore for Hive. I'm using HDP 2.5 with ACID enabled (I need ACID because I'm using Storm and Hive streaming). I configured the Hive Metastore daemon on 2 servers and the compactor to run on one of them using config groups. The issue seems related to the fact that the Hive transaction reaper is always running, as I can see from the logs. I see thousands of these entries:
2016-10-05 13:47:17,592 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
2016-10-05 13:47:17,592 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2016-10-05 13:47:17,592 DEBUG [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2925)) - Going to execute query <select txn_id from TXNS where txn_state = 'o' and txn_last_heartbeat < 1475674924000 limit 250000>
2016-10-05 13:47:17,666 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
2016-10-05 13:47:17,666 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2016-10-05 13:47:17,666 DEBUG [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2925)) - Going to execute query <select txn_id from TXNS where txn_state = 'o' and txn_last_heartbeat < 1475674924000 limit 250000>
2016-10-05 13:47:17,735 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
2016-10-05 13:47:17,735 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2016-10-05 13:47:17,735 DEBUG [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2925)) - Going to execute query <select txn_id from TXNS where txn_state = 'o' and txn_last_heartbeat < 1475674924000 limit 250000>
2016-10-05 13:47:17,805 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2949)) - Aborted the following transactions due to timeout: []
2016-10-05 13:47:17,805 INFO [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2960)) - Aborted 0 transactions due to timeout
2016-10-05 13:47:17,805 DEBUG [org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(2925)) - Going to execute query <select txn_id from TXNS where txn_state = 'o' and txn_last_heartbeat < 1475674924000 limit 250000>
Why does the metastore try to abort the transactions every few milliseconds? Is there a way to manage this? I tried setting the property hive.timedout.txn.reaper.interval=360s but nothing changed. Thank you very much!
07-22-2016
12:38 PM
Tried to add it, but nothing changed.
nameNode=hdfs://masterHA
jobTracker=master03:8032
queueName=HOYA
oozie.use.system.libpath=true
oozie.action.sharelib.for.hive=hive,hbase
But I still get:
91928 [main] ERROR hive.log - error in initSerDe: java.lang.ClassNotFoundException Class org.apache.hadoop.hive.hbase.HBaseSerDe not found
java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.hbase.HBaseSerDe not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:395)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.handleInsertStatementSpecPhase1(SemanticAnalyzer.java:1459)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.doPhase1(SemanticAnalyzer.java:1187)
07-22-2016
07:19 AM
Hi all, I'm trying to execute an Oozie script that reads data from an HBase table mapped in Hive. If I execute the script via beeline it runs fine, but using Oozie I get the following error:
13795 [main] ERROR hive.log - error in initSerDe: java.lang.ClassNotFoundException Class org.apache.hadoop.hive.hbase.HBaseSerDe not found
java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.hbase.HBaseSerDe not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:395)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
In the Oozie shared library I added the hive-hbase-handler jar, under this path: /user/oozie/share/lib/lib_20151020075400/hive/hive-hbase-handler-1.2.1.2.3.2.0-2950.jar
I also added the hive-site.xml and hbase-site.xml files to the Oozie shared library. This is my job.properties file:
nameNode=hdfs://masterHA
jobTracker=master03:8032
queueName=HOYA
oozie.use.system.libpath=true
#oozie.libpath=/user/oozie/share/lib/lib_20151020075400
oozie.wf.application.path=${nameNode}/user/root/workflow.xml
By the way, in the container directory on the slave I'm unable to find the jar:
Current (local) dir = /grid/5/hadoop/yarn/local/usercache/root/appcache/application_1464329584635_67853/container_e70_1464329584635_67853_01_000002
------------------------
tmp
container_tokens
.container_tokens.crc
launch_container.sh
.launch_container.sh.crc
default_container_executor_session.sh
.default_container_executor_session.sh.crc
default_container_executor.sh
.default_container_executor.sh.crc
json-simple-1.1.jar
curator-client-2.6.0.jar
servlet-api-2.5.jar
slf4j-api-1.6.6.jar
curator-recipes-2.5.0.jar
joda-time-2.1.jar
jackson-databind-2.2.3.jar
stax-api-1.0.1.jar
jersey-client-1.9.jar
parquet-hadoop-bundle-1.6.0.jar
jdo-api-3.0.1.jar
libthrift-0.9.2.jar
commons-compress-1.4.1.jar
httpclient-4.2.5.jar
hadoop-annotations-2.7.1.2.3.2.0-2950.jar
xz-1.0.jar
job.xml
commons-codec-1.4.jar
jetty-all-7.6.0.v20120127.jar
hadoop-yarn-common-2.7.1.2.3.2.0-2950.jar
bonecp-0.8.0.RELEASE.jar
antlr-runtime-3.4.jar
guice-3.0.jar
hive-shims-1.2.1.2.3.2.0-2950.jar
stringtemplate-3.2.1.jar
commons-httpclient-3.1.jar
libfb303-0.9.2.jar
commons-io-2.4.jar
hive-contrib-1.2.1.2.3.2.0-2950.jar
oozie-hadoop-utils-hadoop-2-4.2.0.2.3.2.0-2950.jar
calcite-avatica-1.2.0.2.3.2.0-2950.jar
jline-2.12.jar
geronimo-jta_1.1_spec-1.1.1.jar
objenesis-2.1.jar
commons-collections4-4.0.jar
hive-shims-scheduler-1.2.1.2.3.2.0-2950.jar
aopalliance-1.0.jar
commons-collections-3.2.1.jar
activation-1.1.jar
jersey-json-1.9.jar
opencsv-2.3.jar
jersey-core-1.9.jar
oozie-sharelib-hive-4.2.0.2.3.2.0-2950.jar
hive-shims-0.20S-1.2.1.2.3.2.0-2950.jar
avro-1.7.5.jar
aws-java-sdk-1.7.4.jar
netty-3.6.2.Final.jar
commons-pool-1.5.4.jar
tez-runtime-internals-0.7.0.2.3.2.0-2950.jar
commons-cli-1.2.jar
curator-framework-2.6.0.jar
ant-1.9.1.jar
hadoop-azure-2.7.1.2.3.2.0-2950.jar
guava-11.0.2.jar
jackson-core-2.2.3.jar
hive-shims-common-1.2.1.2.3.2.0-2950.jar
ST4-4.0.4.jar
datanucleus-rdbms-3.2.9.jar
httpcore-4.2.4.jar
asm-tree-3.1.jar
javassist-3.18.1-GA.jar
slf4j-log4j12-1.6.6.jar
tez-api-0.7.0.2.3.2.0-2950.jar
hive-service-1.2.1.2.3.2.0-2950.jar
hive-cli-1.2.1.2.3.2.0-2950.jar
ivy-2.4.0.jar
jaxb-impl-2.2.3-1.jar
pentaho-aggdesigner-algorithm-5.1.5-jhyde.jar
commons-lang3-3.3.2.jar
mr-framework
jpam-1.1.jar
asm-commons-3.1.jar
jetty-6.1.14.jar
datanucleus-api-jdo-3.2.6.jar
asm-3.1.jar
hive-exec-1.2.1.2.3.2.0-2950.jar
calcite-linq4j-1.2.0.2.3.2.0-2950.jar
hive-serde-1.2.1.2.3.2.0-2950.jar
log4j-1.2.16.jar
datanucleus-core-3.2.10.jar
hadoop-yarn-server-web-proxy-2.7.1.2.3.2.0-2950.jar
mail-1.4.jar
refreshCurrencyList.sql
jackson-xc-1.9.13.jar
jetty-util-6.1.26.hwx.jar
commons-logging-1.1.jar
guice-servlet-3.0.jar
snappy-java-1.0.5.jar
protobuf-java-2.5.0.jar
tez-common-0.7.0.2.3.2.0-2950.jar
janino-2.7.6.jar
tez-runtime-library-0.7.0.2.3.2.0-2950.jar
tez-mapreduce-0.7.0.2.3.2.0-2950.jar
hadoop-yarn-api-2.7.1.2.3.2.0-2950.jar
oro-2.0.8.jar
fst-2.24.jar
tez-dag-0.7.0.2.3.2.0-2950.jar
jsr305-2.0.3.jar
commons-lang-2.4.jar
groovy-all-2.1.6.jar
json-20090211.jar
hive-shims-0.23-1.2.1.2.3.2.0-2950.jar
hive-common-1.2.1.2.3.2.0-2950.jar
hadoop-yarn-server-resourcemanager-2.7.1.2.3.2.0-2950.jar
calcite-core-1.2.0.2.3.2.0-2950.jar
jackson-jaxrs-1.9.13.jar
hadoop-yarn-registry-2.7.1.2.3.2.0-2950.jar
geronimo-annotation_1.0_spec-1.1.1.jar
hadoop-aws-2.7.1.2.3.2.0-2950.jar
apache-curator-2.6.0.pom
oozie-sharelib-oozie-4.2.0.2.3.2.0-2950.jar
hadoop-yarn-server-applicationhistoryservice-2.7.1.2.3.2.0-2950.jar
eigenbase-properties-1.1.5.jar
hadoop-yarn-server-common-2.7.1.2.3.2.0-2950.jar
zookeeper-3.4.6.2.3.2.0-2950-tests.jar
geronimo-jaspic_1.0_spec-1.0.jar
commons-compiler-2.7.6.jar
hive-ant-1.2.1.2.3.2.0-2950.jar
zookeeper-3.4.6.2.3.2.0-2950.jar
tez-yarn-timeline-history-0.7.0.2.3.2.0-2950.jar
jettison-1.3.4.jar
hive-metastore-1.2.1.2.3.2.0-2950.jar
ant-launcher-1.9.1.jar
servlet-api-2.5-6.1.14.jar
jta-1.1.jar
jackson-annotations-2.2.3.jar
commons-dbcp-1.4.jar
derby-10.10.1.1.jar
velocity-1.5.jar
antlr-2.7.7.jar
jaxb-api-2.2.2.jar
paranamer-2.3.jar
jersey-guice-1.9.jar
azure-storage-2.2.0.jar
javax.inject-1.jar
stax-api-1.0-2.jar
tez-yarn-timeline-history-with-acls-0.7.0.2.3.2.0-2950.jar
apache-log4j-extras-1.1.jar
leveldbjni-all-1.8.jar
.job.xml.crc
action.xml
.action.xml.crc
propagation-conf.xml
hive-site.xml
hive-log4j.properties
hive-exec-log4j.properties
I'm running HBase on Slider. Do you have any tips? Thank you!
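Not an answer from the thread, but a sketch of the usual checks in this situation (the Oozie host name is hypothetical; the sharelib path is the one quoted above): make sure the HBase handler jar is in the Oozie Hive sharelib and that the Oozie server has picked up the change.

```
# copy the missing jar into the hive sharelib used by the action
hdfs dfs -put -f /usr/hdp/current/hive-client/lib/hive-hbase-handler-*.jar \
  /user/oozie/share/lib/lib_20151020075400/hive/
# tell the Oozie server to rescan the sharelib without a restart
oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate
# verify what the server now sees for the hive sharelib
oozie admin -oozie http://oozie-host:11000/oozie -shareliblist hive
```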
06-08-2016
08:43 AM
Hi all, I've exported a template from a NiFi instance, and I would like to import it into another instance via the API (curl). I've looked at the nifi-api documentation, but I can't find a way to do it. Does anyone know how? Thank you!
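Not a confirmed answer, but a sketch of what this looks like on NiFi 1.x (the endpoint differs on the 0.x releases that were current at the time; host and process-group id are hypothetical):

```
# upload the exported template XML into a process group (the root group id works too)
curl -X POST \
  -F template=@my_template.xml \
  http://nifi-host:8080/nifi-api/process-groups/<process-group-id>/templates/upload
```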
05-11-2016
12:44 PM
You need to check the permissions on the application log file under /tmp/spark-events on HDFS. The file must be readable by the spark user.
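For example (a sketch; the application id is hypothetical), to check and then open up the permissions:

```
# check who owns the event log files and their current modes
hdfs dfs -ls /tmp/spark-events
# make a specific application log readable by the spark user (history server)
hdfs dfs -chmod o+r /tmp/spark-events/<application_id>
```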
04-27-2016
10:28 AM
Repo Description
Ambari service to run and manage Apache Drill. For more information about Apache Drill visit https://drill.apache.org/
Requirements:
- RHEL/CentOS 7.1
- Ambari 2.x
- HDP 2.4
- HDFS and ZooKeeper up & running on your cluster
Features:
- Allows installing Apache Drill on an Ambari-managed cluster
- Edit drill-overrides.conf and drill-env.sh via Ambari
- Integration with ZooKeeper
Repo Info
- Github Repo URL: https://github.com/dvergari/ambari-drill-service
- Github account name: dvergari
- Repo name: ambari-drill-service
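For reference, a typical install sketch for a custom Ambari service like this one (the stack path assumes HDP 2.4; check the repository README for the exact steps):

```
# place the service definition under the active stack, then restart Ambari
git clone https://github.com/dvergari/ambari-drill-service.git \
  /var/lib/ambari-server/resources/stacks/HDP/2.4/services/DRILL
ambari-server restart
# then add the Drill service from Ambari's "Add Service" wizard
```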
04-27-2016
07:48 AM
2 Kudos
Yes, you just need passwordless SSH from the Ambari host to all other hosts; you don't need root equivalence between the slaves. Could you paste the errors you get when starting the components?
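For example (a sketch run on the Ambari host; the host names are hypothetical):

```
# create a key on the Ambari host if one doesn't exist yet, then push it to every node
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in node01 node02 node03; do ssh-copy-id root@"$h"; done
```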
04-21-2016
02:50 PM
Hi Terry, could you check the following properties in hive-site.xml?
- hive.merge.tezfiles
- hive.merge.mapredfiles
- hive.merge.orcfile.stripe.level
- hive.merge.size.per.task
- hive.merge.smallfiles.avgsize
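For example (the JDBC URL is hypothetical), you can dump the current values from a beeline session:

```
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "
SET hive.merge.tezfiles;
SET hive.merge.mapredfiles;
SET hive.merge.orcfile.stripe.level;
SET hive.merge.size.per.task;
SET hive.merge.smallfiles.avgsize;"
```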
04-21-2016
02:24 PM
If you are using VirtualBox, you can reach Ambari simply by opening the browser on your host and pointing it to localhost:8080. If that doesn't work, you have to set up port forwarding from Machine -> Settings -> Network -> Port Forwarding.
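Equivalently, from the command line (a sketch; the VM name is whatever your sandbox is called in VirtualBox):

```
# add a NAT port-forwarding rule for Ambari while the VM is powered off
VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "ambari,tcp,,8080,,8080"
# (use: VBoxManage controlvm "Hortonworks Sandbox" natpf1 "ambari,tcp,,8080,,8080"  while it is running)
```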
03-15-2016
08:33 AM
Hi Tamás, I see you are using openjdk 1.7. Try using openjdk 1.8 instead. Davide
03-10-2016
10:46 AM
1 Kudo
Hi Ryan, could you check if you have the right permissions on the local directory?
[hawqadmin@hdpmaster01 ~]$ ls -ld /data01/hawq/masterdd/
drwx------ 16 hawqadmin hadoop 4096 Mar 1 09:19 /data01/hawq/masterdd/
[hawqadmin@hdpmaster01 ~]$ ls -l /data01/hawq/masterdd/
total 40
drwx------ 5 hawqadmin hawqadmin 38 Feb 29 15:38 base
drwx------ 2 hawqadmin hawqadmin 4096 Mar 1 09:19 global
drwx------ 2 hawqadmin hawqadmin 6 Feb 29 15:38 pg_changetracking
drwx------ 2 hawqadmin hawqadmin 17 Feb 29 15:38 pg_clog
drwx------ 2 hawqadmin hawqadmin 6 Feb 29 15:38 pg_distributedlog
drwx------ 2 hawqadmin hawqadmin 6 Feb 29 15:38 pg_distributedxidmap
-rw-rw-r-- 1 hawqadmin hawqadmin 4021 Feb 29 15:38 pg_hba.conf
-rw------- 1 hawqadmin hawqadmin 1636 Feb 29 15:38 pg_ident.conf
drwx------ 2 hawqadmin hawqadmin 156 Mar 1 00:00 pg_log
drwx------ 4 hawqadmin hawqadmin 34 Feb 29 15:38 pg_multixact
drwx------ 2 hawqadmin hawqadmin 6 Mar 1 09:19 pg_stat_tmp
drwx------ 2 hawqadmin hawqadmin 17 Feb 29 15:38 pg_subtrans
drwx------ 2 hawqadmin hawqadmin 6 Feb 29 15:38 pg_tblspc
drwx------ 2 hawqadmin hawqadmin 6 Feb 29 15:38 pg_twophase
drwx------ 2 hawqadmin hawqadmin 6 Feb 29 15:38 pg_utilitymodedtmredo
-rw------- 1 hawqadmin hawqadmin 4 Feb 29 15:38 PG_VERSION
drwx------ 3 hawqadmin hawqadmin 58 Feb 29 15:38 pg_xlog
-rw------- 1 hawqadmin hawqadmin 18393 Feb 29 15:38 postgresql.conf
-rw------- 1 hawqadmin hawqadmin 104 Feb 29 15:40 postmaster.opts
[hawqadmin@hdpmaster01 ~]$
Also, what are the permissions on the directory on HDFS?
[hawqadmin@hdpmaster01 ~]$ hdfs dfs -ls -d /hawq_default
drwxr-xr-x - hawqadmin hdfs 0 2016-02-29 15:38 /hawq_default
[hawqadmin@hdpmaster01 ~]$ hdfs dfs -ls -R /hawq_default
drwx------ - hawqadmin hdfs 0 2016-02-29 15:47 /hawq_default/16385
drwx------ - hawqadmin hdfs 0 2016-03-01 08:54 /hawq_default/16385/16387
drwx------ - hawqadmin hdfs 0 2016-03-01 08:55 /hawq_default/16385/16387/16513
-rw------- 3 hawqadmin hdfs 48 2016-03-01 08:55 /hawq_default/16385/16387/16513/1
-rw------- 3 hawqadmin hdfs 4 2016-02-29 15:47 /hawq_default/16385/16387/PG_VERSION
[hawqadmin@hdpmaster01 ~]$