Member since: 01-05-2016
Posts: 56
Kudos Received: 23
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1871 | 09-27-2017 06:11 AM |
| | 1978 | 09-21-2017 06:36 PM |
| | 1039 | 06-15-2017 01:28 PM |
| | 1949 | 12-09-2016 08:39 PM |
| | 2030 | 09-06-2016 04:57 PM |
09-27-2017
06:11 AM
1 Kudo
Deleting the Kafka topics for Atlas and restarting Atlas fixed the issue.
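For reference, a minimal sketch of that cleanup on an HDP-style layout; zkhost:2181 is a placeholder for your ZooKeeper quorum, and ATLAS_HOOK / ATLAS_ENTITIES are the standard Atlas notification topics (delete.topic.enable must be true on the brokers):
# delete the Atlas notification topics (replace zkhost:2181 with your ZooKeeper quorum)
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zkhost:2181 --delete --topic ATLAS_HOOK
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper zkhost:2181 --delete --topic ATLAS_ENTITIES
# then restart Atlas (e.g. from Ambari); the topics are typically recreated on startup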
09-21-2017
06:39 PM
1 Kudo
Ah, OK. Try %jdbc(hive) instead. You might also need to check that the Hive configuration is up to date in the jdbc interpreter settings.
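For example, a Zeppelin paragraph using the jdbc interpreter's hive prefix would look like this (the query itself is just an illustration):
%jdbc(hive)
show databases;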
09-21-2017
06:36 PM
1 Kudo
@Sudheer Velagapudi
Here is the list of interpreters supported by Zeppelin:
https://zeppelin.apache.org/supported_interpreters.html
There is no dedicated SQL interpreter, though. What exactly are you trying to do here?
09-21-2017
06:32 PM
This is a kerberized cluster on HDP 2.6.1. import_hive.sh is failing with the errors below:
2017-09-21 10:38:53,105 ERROR - [pool-2-thread-10 - f15b1a2e-6904-49bb-8ed5-b832632d4339:atlas:POST/api/atlas/entities/bedf80dd-deb1-42a9-81cc-40336a3d4546] ~ Unable to update entity by GUID bedf80dd-deb1-42a9-81cc-40336a3d4546
{
  "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Reference",
  "id":{
    "jsonClass":"org.apache.atlas.typesystem.json.InstanceSerialization$_Id",
    "id":"bedf80dd-deb1-42a9-81cc-40336a3d4546",
    "version":0,
    "typeName":"hive_db",
    "state":"ACTIVE"
  },
  "typeName":"hive_db",
  "values":{
    "name":"default",
    "location":"hdfs://CLUSTERNAME/apps/hive/warehouse",
Using Hive configuration directory [/etc/hive/conf]
Log file for import is /usr/hdp/current/atlas-server/logs/import-hive.log
Exception in thread "main" org.apache.atlas.hook.AtlasHookException: HiveMetaStoreBridge.main() failed.
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:650)
Caused by: org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasBaseClient$APIInfo@69c6161d failed with status 500 (Internal Server Error) Response Body ({"error":"Failed to notify for change PARTIAL_UPDATE"})
    at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:337)
    at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:287)
    at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:429)
    at org.apache.atlas.AtlasClient.callAPIWithBodyAndParams(AtlasClient.java:1006)
    at org.apache.atlas.AtlasClient.updateEntity(AtlasClient.java:583)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.updateInstance(HiveMetaStoreBridge.java:526)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.registerDatabase(HiveMetaStoreBridge.java:175)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importDatabases(HiveMetaStoreBridge.java:140)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.importHiveMetadata(HiveMetaStoreBridge.java:134)
    at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:647)
Failed to import Hive Data Model!!!
Labels:
- Apache Atlas
06-15-2017
01:28 PM
1 Kudo
@Manish Gupta After the installation, use the following steps to replace the default service account with a custom service account. For Zeppelin (a worked example with hypothetical values follows the list):
1. Stop the Zeppelin service from Ambari.
2. Change the Zeppelin user with configs.sh (this command is only available on the ambari-server host):
# /var/lib/ambari-server/resources/scripts/configs.sh -u <AmbariAdminUser> -p <AmbariAdminUserPassword> set localhost <Cluster-name> zeppelin-env zeppelin_user <ZEP-USER>
3. Set the ownership on the Zeppelin log and run directories:
# chown -R <ZEP-USER>:hadoop /var/log/zeppelin
# chown -R <ZEP-USER>:hadoop /var/run/zeppelin
4. Start the Zeppelin service from Ambari.
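As a worked example with hypothetical values (cluster mycluster, custom user zepsvc, admin credentials admin/admin), steps 2 and 3 would be:
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost mycluster zeppelin-env zeppelin_user zepsvc
chown -R zepsvc:hadoop /var/log/zeppelin
chown -R zepsvc:hadoop /var/run/zeppelin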
03-07-2017
05:07 PM
3 Kudos
Assumption: HDP 2.5.3 or later and a Kerberized cluster. Create an hplsql-site.xml from the template below.
<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hive2conn</value>
<description>The default connection profile</description>
</property>
<property>
<name>hplsql.conn.hiveconn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://</value>
<description>HiveServer2 JDBC connection (embedded mode)</description>
</property>
<property>
<name>hplsql.conn.init.hiveconn</name>
<value>
set hive.execution.engine=mr;
use default;
</value>
<description>Statements to execute after connecting to the database</description>
</property>
<property>
<name>hplsql.conn.convert.hiveconn</name>
<value>true</value>
<description>Convert SQL statements before execution</description>
</property>
<property>
<name>hplsql.conn.hive1conn</name>
<value>org.apache.hadoop.hive.jdbc.HiveDriver;jdbc:hive://</value>
<description>Hive embedded JDBC (not requiring HiveServer)</description>
</property>
<property>
<name>hplsql.conn.hive2conn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://node1.field.hortonworks.com:10500/default;principal=hive/node1.field.hortonworks.com@REALM</value>
<description>HiveServer2 JDBC connection</description>
</property>
<property>
<name>hplsql.conn.init.hive2conn</name>
<value>
set hive.execution.engine=tez;
use default;
</value>
<description>Statements to execute after connecting to the database</description>
</property>
<property>
<name>hplsql.conn.convert.hive2conn</name>
<value>true</value>
<description>Convert SQL statements before execution</description>
</property>
<property>
<name>hplsql.conn.db2conn</name>
<value>com.ibm.db2.jcc.DB2Driver;jdbc:db2://localhost:50001/dbname;user;password</value>
<description>IBM DB2 connection</description>
</property>
<property>
<name>hplsql.conn.tdconn</name>
<value>com.teradata.jdbc.TeraDriver;jdbc:teradata://localhost/database=dbname,logmech=ldap;user;password</value>
<description>Teradata connection</description>
</property>
<property>
<name>hplsql.conn.mysqlconn</name>
<value>com.mysql.jdbc.Driver;jdbc:mysql://localhost/test;user;password</value>
<description>MySQL connection</description>
</property>
<property>
<name>hplsql.dual.table</name>
<value>default.dual</value>
<description>Single row, single column table for internal operations</description>
</property>
<property>
<name>hplsql.insert.values</name>
<value>native</value>
<description>How to execute INSERT VALUES statement: native (default) and select</description>
</property>
<property>
<name>hplsql.onerror</name>
<value>exception</value>
<description>Error handling behavior: exception (default), seterror and stop</description>
</property>
<property>
<name>hplsql.temp.tables</name>
<value>native</value>
<description>Temporary tables: native (default) and managed</description>
</property>
<property>
<name>hplsql.temp.tables.schema</name>
<value></value>
<description>Schema for managed temporary tables</description>
</property>
<property>
<name>hplsql.temp.tables.location</name>
<value>/tmp/plhql</value>
<description>Location for managed temporary tables in HDFS</description>
</property>
</configuration>
Modify the LLAP hostname and the Hive principal in the following section to match your cluster environment. Note: this assumes a Kerberized cluster.
<property>
<name>hplsql.conn.hive2conn</name>
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://<<LLAP_HOSTNAME>>:10500/default;principal=hive/<<LLAP_HOSTNAME>>@<<KERBEROS_REALM>></value>
<description>HiveServer2 JDBC connection</description>
</property>
<property>
<name>hplsql.conn.init.hive2conn</name>
<value>
set hive.execution.engine=tez;
use default;
</value>
<description>Statements to execute after connecting to the database</description>
</property>
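Optionally, sanity-check the edited file before packaging it (this assumes xmllint is available on the host):
xmllint --noout hplsql-site.xml && echo "hplsql-site.xml is well-formed"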
Update the hive-hplsql jar file with the modified hplsql-site.xml:
cd /usr/hdp/current/hive-server2-hive2/lib
/usr/jdk64/jdk1.8.0_77/bin/jar uf hive-hplsql-2.1.0.XXX.jar hplsql-site.xml
Note: adjust the jar path above to match your JDK version.
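Optionally, confirm that the file actually landed inside the jar:
/usr/jdk64/jdk1.8.0_77/bin/jar tf hive-hplsql-2.1.0.XXX.jar | grep hplsql-site.xml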
Authenticate the user with the KDC
kinit <user principal>
Execute the HPLSQL code as below
./hplsql -f /root/myhpl.sql
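Here /root/myhpl.sql can be any HPL/SQL script; a minimal hypothetical one with two statements (matching the two log entries below) might be:
CREATE TABLE IF NOT EXISTS default.hplsql_test (id INT);
INSERT INTO default.hplsql_test VALUES (1);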
If successful, you should see log output like the following:
Starting SQL statement
SQL statement executed successfully (128 ms)
Starting SQL statement
SQL statement executed successfully (145 ms)
12-12-2016
03:20 AM
@Nube Technologies Good to hear that.
12-09-2016
08:39 PM
2 Kudos
@Nube Technologies For Hive: you can use Sqoop with the --target-dir parameter set to a directory inside the Hive encryption zone. Specify the -D option right after sqoop import:
sqoop import \
  -D sqoop.test.import.rootDir=<root-directory> \
  --target-dir <directory-inside-encryption-zone> \
  <additional-arguments>
For append or incremental imports, make sure the sqoop.test.import.rootDir property points to the encryption zone specified in the --target-dir argument. For HCatalog, no configuration is required. For more information on HDP services for HDFS encryption, see: HDP services for HDFS encryption. Let me know if you have any other questions.
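For instance, with a hypothetical encryption zone /apps/enczone and a MySQL source database, the import might look like this (all names are placeholders):
sqoop import \
  -D sqoop.test.import.rootDir=/apps/enczone/staging \
  --connect jdbc:mysql://dbhost/sales \
  --username sqoop_user -P \
  --table orders \
  --target-dir /apps/enczone/orders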
12-08-2016
02:16 PM
1 Kudo
@Huahua Wei I don't think so. Failover to the standby NameNode does not take much time. Whichever NameNode is started first becomes active, so you can start the cluster in a specific order to bring your preferred node up first. What problem exactly are you facing? Is the failover, or NameNode startup, taking a long time?
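If you want to check or steer which NameNode is active, a minimal sketch; nn1 and nn2 are the NameNode IDs defined in your hdfs-site.xml, so adjust them to your nameservice:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# initiate a failover from nn2 to nn1, so that nn1 becomes active
hdfs haadmin -failover nn2 nn1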
12-07-2016
02:41 PM
@Baruch AMOUSSOU DJANGBAN Assuming the ambari2 user can stop the services, why don't you try ambari-agent stop instead of systemctl stop ambari-agent.service?
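A minimal sketch, assuming the ambari2 user has the required sudo rights:
sudo ambari-agent stop
# confirm the agent is down
sudo ambari-agent status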