Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1788 | 09-11-2019 10:19 AM |
| | 9427 | 11-26-2018 07:04 PM |
| | 2560 | 11-14-2018 12:10 PM |
| | 5563 | 11-14-2018 12:09 PM |
| | 3244 | 11-12-2018 01:19 PM |
06-28-2018
03:11 PM
@Sami Ahmad Yes, it's possible to read HBase data using the REST API. You need to start the REST API server first:

```shell
$ hbase rest start
```

By default this listens on port 8080. The URL should look similar to this:

```
http://<myhost>:8080/<mytable>/<rowkey1>/<cf:q>/
```

Documentation: https://hbase.apache.org/1.2/apidocs/org/apache/hadoop/hbase/rest/package-summary.html

HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
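Note that the REST gateway base64-encodes row keys, column names, and cell values in its JSON responses, so a client has to decode them. A minimal Python sketch, using a made-up sample response (the row key, column, and value here are illustrative only):

```python
import base64
import json

# Hypothetical JSON body returned by the REST gateway for
# GET http://<myhost>:8080/<mytable>/<rowkey1>/<cf:q>/ with "Accept: application/json".
sample = ('{"Row":[{"key":"cm93MQ==","Cell":['
          '{"column":"Y2Y6cQ==","timestamp":1528000000000,"$":"aGVsbG8="}]}]}')

def decode_cells(body):
    """Return (rowkey, column, value) tuples; the gateway base64-encodes all three."""
    out = []
    for row in json.loads(body)["Row"]:
        rowkey = base64.b64decode(row["key"]).decode("utf-8")
        for cell in row["Cell"]:
            column = base64.b64decode(cell["column"]).decode("utf-8")
            value = base64.b64decode(cell["$"]).decode("utf-8")
            out.append((rowkey, column, value))
    return out

print(decode_cells(sample))  # [('row1', 'cf:q', 'hello')]
```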
06-21-2018
04:28 PM
@Robert Cornell Try this:

```shell
--conf spark.executor.extraJavaOptions='-Dlog4j.configuration=log4j.properties' \
--driver-java-options -Dlog4j.configuration=config/log4j.properties \
--files config/log4j.properties
```

I just removed the directory from the executor option. HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
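For reference, the log4j.properties shipped via --files could be as small as this (a generic sample, not specific to any HDP version; adjust the level and pattern to taste):

```properties
# Send everything to the console at INFO; spark-submit --files makes this
# file available in each container's working directory.
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```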
06-21-2018
12:05 PM
@Robert Cornell I see you are using a path in the executor's extraJavaOptions. This won't work. Please copy my example exactly: use a path only where I use a path, and reference the file name alone, without a path, where I also did so. HTH
06-20-2018
03:56 PM
2 Kudos
This article covers, step by step, how to configure HDP Search Solr with the Ranger plugin.

Step 1: Download and install the 2.2.9+ mpack. FYI: previous mpack versions don't support integration of HDP Search Solr with Ranger. Mpack 2.2.9 adds a configurable solr-security section in Ambari that lets you add the authorization information.

```shell
wget 'http://public-repo-1.hortonworks.com/HDP-SOLR/hdp-solr-ambari-mp/solr-service-mpack-2.2.9.tar.gz' -O /tmp/solr-service-mpack-2.2.9.tar.gz
ambari-server install-mpack --mpack=/tmp/solr-service-mpack-2.2.9.tar.gz
```

Step 2: On the HDP Search Solr host run:

```shell
yum install ranger-solr-plugin.noarch
cd /usr/hdp/2.6.2.0-205/ranger-solr-plugin
```

Edit install.properties and make sure that at least the following settings are properly configured:

```
POLICY_MGR_URL=http://<ranger-host>:6080
SQL_CONNECTOR_JAR=/usr/share/java/mysql-connector-java.jar
```

Edit solr-plugin-install.properties and set the correct value for the install dir:

```
COMPONENT_INSTALL_DIR_NAME=/opt/lucidworks-hdpsearch/solr/server
```

Next, source the environment and enable the plugin:

```shell
source /etc/hadoop/hadoop-env.sh
./enable-solr-plugin.sh
```

Step 3: Update the security znode with the Ranger authorization class:

```shell
kinit -kt solr.service.keytab solr/<host>@REALM.COM
/opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -z '<zookeeper>:2181' -cmd put /solr/security.json '{"authentication":{"class": "org.apache.solr.security.KerberosPlugin"},"authorization":{"class": "org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer"}}'
```

Also, in Ambari -> Solr -> Configs -> Advanced solr-security, set:

```json
{
  "authentication":{"class": "org.apache.solr.security.KerberosPlugin"},
  "authorization":{"class": "org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer"}
}
```

Save and restart. In the operation's start output you should see:

```
- call['/opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper1>:2181,<zookeeper2>:2181,<zookeeper3>:2181 -cmd get /solr/security.json'] {'timeout': 60, 'env': {'JAVA_HOME': u'/usr/jdk64/jdk1.8.0_112'}}
- call returned (0, '{\"authentication\":{\"class\": \"org.apache.solr.security.KerberosPlugin\"},\"authorization\":{\"class\": \"org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer\"}}')
- Solr Security Json was found, it will not be overridden
```

Step 4: Fix the cluster name for the Solr plugin:

```shell
cd /opt/lucidworks-hdpsearch/solr/server/solr-webapp/webapp/WEB-INF/classes/
```

Edit ranger-solr-audit.xml and add the following property:

```xml
<property>
  <name>ranger.plugin.solr.ambari.cluster.name</name>
  <value>YOUR_CLUSTER_NAME</value>
</property>
```

Restart Solr for the changes to take effect.

Step 5: Open the Ranger Admin UI, edit the Solr repository, and add the following entries under "Add New Configurations":

```
tag.download.auth.users = solr
policy.download.auth.users = solr
ambari.service.check.user = ambari-qa
```
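As a side note, the security.json payload used in Step 3 can be generated and sanity-checked programmatically rather than hand-edited inline. A minimal Python sketch (the class names are the ones used in this article; nothing else is cluster-specific):

```python
import json

# The two plugin classes configured in this article.
security = {
    "authentication": {"class": "org.apache.solr.security.KerberosPlugin"},
    "authorization": {
        "class": "org.apache.ranger.authorization.solr.authorizer.RangerSolrAuthorizer"
    },
}

# Serialize once, then round-trip to confirm the string is valid JSON
# before passing it to zkcli.sh -cmd put /solr/security.json '<payload>'.
payload = json.dumps(security)
assert json.loads(payload) == security
print(payload)
```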
06-20-2018
12:57 PM
1 Kudo
@Mahesh From the same node where you run the spark-submit command, check whether you can connect to ZooKeeper using zkCli.sh:

```shell
cd /usr/hdp/current/zookeeper-client/bin/
./zkCli.sh -server <HDF-cluster-ip-address>:2181
```
06-20-2018
12:27 PM
@Ilia K Try switching to the yarn user and running:

```shell
yarn rmadmin -refreshQueues
```

See if the above command executes successfully. Then run as follows (note I use iliak as the queue name only as an example):

```shell
spark-submit --num-executors 10 --executor-memory 2g --master yarn --deploy-mode cluster --queue iliak --conf spark.yarn.submit.waitAppCompletion=false --files run.py
```

Another thing you could try is to switch the ordering policy to Empty, save, and test again. HTH
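A note on why the whitespace between flags matters: without a space between a flag's value and the next flag, the shell passes them to spark-submit as a single unparseable token. A quick illustration with Python's shlex (the argument strings are just examples):

```python
import shlex

# Missing space: "10--executor-memory" arrives as one argument.
bad = shlex.split("--num-executors 10--executor-memory 2g")
# Correct spacing: each flag and value is its own token.
good = shlex.split("--num-executors 10 --executor-memory 2g")

print(bad)   # ['--num-executors', '10--executor-memory', '2g']
print(good)  # ['--num-executors', '10', '--executor-memory', '2g']
```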
06-20-2018
11:59 AM
@Sriram I have exactly the same values as you listed above. Could you share a screenshot of the drop-down menu next to your username > Interpreter > Filter by spark? Also, could you confirm whether plain %spark works?
06-19-2018
02:35 PM
@Sriram Please make sure R is installed, following the steps described here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_spark-component-guide/content/ch_spark-r.html

You can read more about the SparkR interpreter here: https://zeppelin.apache.org/docs/0.6.2/interpreter/r.html

As for the "spark.r interpreter not found" error, make sure you have the Spark interpreter installed and configured, and that you are using the correct %<name>. For example, in my environment I only use spark2, so I need to use the %spark2.r interpreter instead of %spark.r (which fails with the same error as yours). You can check by clicking the top-right drop-down menu next to your username > Interpreter > Filter by spark. Finally, review the Spark interpreter configuration settings for R.

HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.
06-19-2018
02:02 PM
@Anpan K Follow these steps to disable the HDFS audit log: https://community.hortonworks.com/questions/101082/disable-log4j-logging-for-hdfs-audit-log.html Please remember to mark the answer.
06-19-2018
04:31 AM
@Anpan K The HDFS audit log is different from the Ranger audit; they are complementary. The benefit of using Ranger to check the audit is that everything is in one centralized location, whereas with the HDFS audit you have to check the local disk on each node. HTH

*** If you found this answer addressed your question, please take a moment to login and click the "accept" link on the answer.