Member since: 02-02-2017
Posts: 43
Kudos Received: 24
Solutions: 13

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 9607 | 09-22-2018 04:48 AM |
|  | 2330 | 09-19-2018 09:15 PM |
|  | 1146 | 09-17-2018 06:52 PM |
|  | 959 | 09-14-2018 09:19 AM |
|  | 3661 | 09-07-2018 10:36 AM |
08-25-2019
12:11 PM
1 Kudo
Do you see any errors in /var/log/das/das-webapp.log? This log file is on the das-webapp host. If possible, please attach the file to the thread.
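If the file is large, a quick scan for recent errors can narrow things down; a minimal sketch (the grep pattern is just a suggestion):

# Show the 50 most recent ERROR/Exception lines from the DAS webapp log
grep -iE "error|exception" /var/log/das/das-webapp.log | tail -n 50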
09-22-2018
04:57 AM
1 Kudo
@Simon Waligo What version of Python are you using (i.e., what is the default Python version for the OS)? As of Ambari 2.7.1, Python 2.7 is supported, ref: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.1.0/bk_ambari-installation/content/mmsr_software_reqs.html If you are on 2.7, you can try the command below to see if it helps: pip install importlib
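To confirm which interpreter is the OS default, a quick check (the paths are typical Linux defaults):

python --version
ls -l /usr/bin/python*    # see which versions are installed and where the symlink points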
09-22-2018
04:48 AM
2 Kudos
@Sami Ahmad The steps below will help you set up ODBC.

1. Check which mode HiveServer2 is running in: from Ambari -> Hive -> Configs, check the value of the property hive.server2.transport.mode. If it is set to binary, HiveServer2 is running in binary mode, which based on your configs should be port 10000; in the ODBC configuration, set Thrift Transport mode to the default, SASL. If it is set to http, HiveServer2 is running on port 10001; set Thrift Transport mode to HTTP and HTTP Path to cliservice.
2. Test connectivity between the end client and the HiveServer2 host using ping or telnet on the port from step 1 (a minimal example follows below). I would check the FQDN of the HiveServer2 host from the end client and set the same as Host in the ODBC configuration.
3. Check the value of the property hive.server2.authentication. If it is NONE, set Authentication Mechanism in the ODBC configuration to User Name and pass the end user; if it is KERBEROS, set it to Kerberos; in the case of LDAP, set it to User Name and Password and pass the username/password.
4. Test the connection to HiveServer2.

Let me know if it helps.
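A minimal connectivity check for step 2 (the hostname is a placeholder; nc works as well if telnet is not installed):

ping -c 3 hs2.example.com
telnet hs2.example.com 10000    # use 10001 if transport mode is http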
09-19-2018
09:19 PM
@Gopal Mehakare The exception posted is very generic. Can you attach or check the YARN application logs for the query run?
09-19-2018
09:15 PM
@Swati Sahoo You can download the Hortonworks JDBC Driver for Apache Hive from https://hortonworks.com/downloads/ -> HDP Add-Ons. It includes all the jars required to connect to Hive via JDBC, along with documentation on how to use the Simba Hive JDBC Driver. Please let me know if it helps.
09-19-2018
06:09 AM
@Anurag Mishra By default, Ambari should take care of this while starting the Atlas service; it runs an ACL script to grant the access below. If the script was not run, you can re-run the commands manually:
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_HOOK --allow-principal "User:*" --producer
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_HOOK --allow-principal User:atlas --consumer --group atlas
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_ENTITIES --allow-principal User:atlas --producer
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_ENTITIES --allow-principal User:rangertagsync --consumer --group ranger_entities_consumer
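To verify the ACLs were applied, you can list them per topic afterwards (kafka-acls.sh's --list option; run as the kafka user):

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --list --topic ATLAS_HOOK
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --list --topic ATLAS_ENTITIES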
09-19-2018
05:58 AM
@Bala Kolla You can follow the article https://community.hortonworks.com/articles/149899/investigating-when-llap-doesnt-start.html Let me know if it helps narrow down the problem.
09-18-2018
05:28 AM
@Hariprasanth Madhavan Please mark the answer as accepted if it resolved your issue 🙂
09-17-2018
09:26 PM
1 Kudo
Here is sample code which may help you.

<html>
<body>
<?php
// Connect to the Hive ODBC DSN defined in odbc.ini below
$conn = odbc_connect('TestODBC', 'hive', '');
if (!$conn) {
    exit("Connection to the database failed: " . odbc_errormsg());
}
$sql = "show databases";
$resultSet = odbc_exec($conn, $sql);
if (!$resultSet) {
    exit("Error!");
}
echo "<table><tr><th>Database Name</th></tr>";
while (odbc_fetch_row($resultSet)) {
    $dbName = odbc_result($resultSet, "database_name");
    echo "<tr><td>$dbName</td></tr>";
}
echo "</table>";
odbc_close($conn);
?>
</body>
</html>

Note: I do not have any authentication method set for Hive, so I have used AuthMech=2 in my odbc.ini file as below.

[TestODBC]
Description=Hortonworks Hive
Driver=/usr/lib/hive/lib/native/Linux-amd64-64/libhortonworkshiveodbc64.so
Host=HiveServer2Hostname
PORT=10000
Schema=default
HiveServerType=2
AuthMech=2
ThriftTransport=0
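Before wiring this into PHP, you can test the DSN from the shell. A minimal sketch, assuming unixODBC's isql utility is installed:

# Should drop you at a SQL> prompt if the DSN, driver path, and host/port are correct
isql TestODBC hive ''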
09-17-2018
06:52 PM
1 Kudo
@Matt Andruff Accessing Ambari, Atlas, or Zeppelin through Knox gives anonymous access only to the UIs; authentication is still managed by each component individually by default, meaning a user can reach the UIs using Knox but will have to provide the logins configured for each component. From Zeppelin 0.8 onwards, which ships with HDP 3.0, there is support for KnoxSSO, which can be used to log in to the Zeppelin UI. Ref: https://zeppelin.apache.org/docs/0.8.0/setup/security/shiro_authentication.html#knox-sso https://issues.apache.org/jira/browse/ZEPPELIN-3090 https://knox.apache.org/books/knox-0-13-0/dev-guide.html#KnoxSSO+Integration HDP releases up to and including 2.6.5 do not support Zeppelin KnoxSSO integration.
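For reference, the Zeppelin documentation linked above enables KnoxSSO through a realm in shiro.ini; a sketch along those lines, with the provider URL and public key path as placeholders for your own Knox gateway:

[main]
knoxJwtRealm = org.apache.zeppelin.realm.jwt.KnoxJwtRealm
knoxJwtRealm.providerUrl = https://knox.example.com:8443/
knoxJwtRealm.login = gateway/knoxsso/knoxauth/login.html
knoxJwtRealm.publicKeyPath = /etc/knox/conf/knoxsso.pem
knoxJwtRealm.cookieName = hadoop-jwt
authc = org.apache.zeppelin.realm.jwt.KnoxAuthenticationFilter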
09-17-2018
06:46 AM
@Anurag Mishra The problem could be due to Kafka topic permissions. You may want to check the permissions for the Kafka topic ATLAS_HOOK. If you are using Ranger, create the following Kafka policies:

topic=ATLAS_HOOK
  permission=publish, create; group=public
  permission=consume, create; user=atlas (for non-kerberized environments, set group=public)

topic=ATLAS_ENTITIES
  permission=publish, create; user=atlas (for non-kerberized environments, set group=public)
  permission=consume, create; group=public

If Ranger is not in use, you may want to run the commands below as the kafka user to grant permissions:

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --add --group "*" --allow-principal "User:*" --operation All --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --add --topic ATLAS_ENTITIES --allow-principal "User:*" --operation All --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --add --topic ATLAS_HOOK --allow-principal "User:*" --operation All --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
09-14-2018
09:41 AM
1 Kudo
You're welcome. I am not sure which JIRA the change was made under; I believe it is https://issues.apache.org/jira/browse/RANGER-2004 or https://issues.apache.org/jira/browse/ATLAS-2459. That said, I have tested it in my lab environment with HDP 3.0 and it works 🙂
09-14-2018
09:19 AM
2 Kudos
As of HDP 2.6.5, Ranger supports only the wildcard * for Atlas resources. Granular access control is an improvement that arrives with Ranger 1.1, which ships with HDP 3.0 onwards.
09-07-2018
10:36 AM
3 Kudos
@Hariprasanth Madhavan You can connect via ODBC to query HiveServer2. The ODBC driver can be downloaded from https://hortonworks.com/downloads/ -> HDP Add-Ons.
09-03-2018
11:24 AM
@Anurag Mishra You may have another process listening on ports 9083 and 10000. Can you check whether there is another process listening on them?

netstat -tualnp | grep :9083
netstat -tualnp | grep :10000

If you find a stale process, you can kill it and try starting the service again.
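A minimal sketch of the cleanup (the PID comes from the last column of the netstat output above):

kill <pid>        # ask the stale process to exit
kill -9 <pid>     # only if it ignores the plain kill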
08-29-2018
07:05 AM
@Praveen Kumar Can you check the Oozie launcher job, get the Sqoop MapReduce application ID, and then check those logs? Or you can attach the log to the thread.
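For example, once you have the application ID from the Oozie launcher, the aggregated logs can be pulled with the command below (the ID is a placeholder; requires YARN log aggregation to be enabled):

yarn logs -applicationId application_1234567890123_0001 > /tmp/sqoop_launcher.log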
08-27-2018
04:49 AM
3 Kudos
@Lian Jiang It looks like the default Python version is 3.x; you might want to change it to 2.7.x. Please refer to the supported Python releases: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-installation/content/mmsr_software_reqs.html
08-24-2018
06:32 PM
What exception do you see when running the query? You can check the Hive View logs on the Ambari Server host. For Hive View 1.5, check /var/log/ambari-server/hive-next-view/hive-view.log; for Hive View 1.0, check /var/log/ambari-server/ambari-server.log. You may also check the hiveserver2.log file on the HiveServer2 host while creating the table.
08-24-2018
06:48 AM
@Sai Krishna Makineni Somehow the query does not work on Hive 1.2.1. Your query looks good; can you check the data format in the ts column? I had it in the format YYYY-mm-dd HH:MM:SS
08-20-2018
06:57 PM
1 Kudo
The query below may help with your use case:

select * from test where data>(select FROM_UNIXTIME(UNIX_TIMESTAMP()-86400))

where the data column holds data in timestamp format -> YYYY-mm-dd HH:MM:SS. To export using beeline, the command below can be used:

beeline --outputformat=csv2 -u "JDBC_CONNECT_STRING" -n USERNAME -e "select * from test where data>(select FROM_UNIXTIME(UNIX_TIMESTAMP()-86400))" > /tmp/output.txt

Or, if you want to exclude headers, you can run:

beeline --showheader=false --outputformat=csv2 -u "JDBC_CONNECT_STRING" -n USERNAME -e "select * from test where data>(select FROM_UNIXTIME(UNIX_TIMESTAMP()-86400))" > /tmp/output.txt
08-16-2018
05:25 PM
@Gourav Gupta Please let me know if the above helps. If it does, you can mark the answer as accepted.
08-16-2018
05:25 PM
@Aaron Bossert Please let me know if the above helps.
08-16-2018
05:23 PM
It looks like the YARN job is running as the end user (the Kerberos ticket user). The user needs to exist on all NodeManager hosts with the same UID. If you are using an AD/LDAP user for the Kerberos ticket, you may want to sync users to all the NodeManagers via SSSD.
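A quick way to verify is to compare the UID on every NodeManager host; a minimal check (the username is a placeholder):

# Run on each NodeManager host; uid/gid should be identical everywhere
id jdoe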
08-16-2018
11:23 AM
1 Kudo
You're welcome. I would start with RangerAdminRESTClient.java
08-16-2018
10:06 AM
It is a REST call to Ranger Admin. The property ranger.plugin.<plugin_name>.policy.rest.url is used to communicate with Ranger Admin, e.g. ranger.plugin.hive.policy.rest.url for Hive. By default the plugin checks with Ranger Admin every 30 seconds for changes to the currently cached policy, and if there are any, it downloads the new policy and caches it. The default policy cache location is /etc/ranger/<CLUSTER_NAME>_<PLUGIN_COMPONENT_NAME>/policycache on the host where the service is running, e.g. /etc/ranger/hdptest_hive/policycache on HiveServer2 for my cluster.
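To see what a plugin has actually cached, you can inspect the JSON in that directory on the component host. A sketch using the example path above (the exact file name depends on your service repo name):

ls -l /etc/ranger/hdptest_hive/policycache
# Pretty-print the first part of the cached policy file
python -m json.tool /etc/ranger/hdptest_hive/policycache/<policy_file>.json | head -n 40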
08-16-2018
09:26 AM
1 Kudo
@Shashank V C All plugins that use Ranger as an authorization module cache the policy locally in a file and use it for authorization. Below is an excerpt from the Apache Ranger overview:

Plugins are lightweight Java programs which embed within processes of each cluster component. For example, the Apache Ranger plugin for Apache Hive is embedded within Hiveserver2. These plugins pull in policies from a central server and store them locally in a file. When a user request comes through the component, these plugins intercept the request and evaluate it against the security policy. Plugins also collect data from the user request and follow a separate thread to send this data back to the audit server.

Reference: https://hortonworks.com/apache/ranger/#section_2

PS: Please mark the answer if you find it correct 🙂
08-13-2018
08:26 PM
You can use the command below to install Hive and use beeline:

brew install hive

Or you can collect all the jars required by the beeline command: launch beeline in your HDP cluster, run lsof -p <beeline_client_pid> to find the jars loaded, and copy them to your local Mac. A better solution would be to use a JDBC client tool such as DBeaver or DbVisualizer for Mac.
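Once beeline is available locally, a connection looks like the command below (host, port, and user are placeholders; add the transport/SSL options your cluster requires):

beeline -u "jdbc:hive2://hs2.example.com:10000/default" -n jdoe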
08-13-2018
07:03 PM
1 Kudo
Was the upgrade performed from a version earlier than HDP 2.6.3? If so, you might want to follow the document below to remove duplicates from the Ranger DB. HDP 2.6.3 introduced unique constraints on a few tables in the Ranger DB. Depending on your environment, the Ranger DB may contain duplicate data in these tables prior to the upgrade. To make the upgrade faster, you can manually delete this duplicate data in the Ranger DB before performing the upgrade. These steps are optional but recommended, and only needed for Ranger users. They should be performed after registering and installing the target HDP version but before actually performing the upgrade. Ref: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-upgrade/content/upgrade_remove_duplicate_ranger_entries.html