Member since
05-02-2017
88
Posts
173
Kudos Received
15
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3666 | 09-27-2017 04:21 PM
 | 1541 | 08-17-2017 06:20 PM
 | 1562 | 08-17-2017 05:18 PM
 | 1337 | 08-11-2017 04:12 PM
 | 2119 | 08-08-2017 12:43 AM
03-29-2019
04:25 PM
1 Kudo
@Bhushan Kandalkar I guess these files are present, but LLAP is still not able to pick up the aux jars. Could you please add these jars to the AUX jars list and try starting LLAP again? You can add them like below (a quick check to confirm the files exist is sketched below):
- Add the below 2 files in Advanced hive-interactive-env -> Auxillary JAR list:
/usr/hdp/2.6.5.0-292/tez_hive2/hadoop-shim-hdp-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/hadoop-shim-0.8.4.2.6.5.0-292.jar
Thanks, Nitin Shelke
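As a quick check, you can confirm the shim jars are actually present on the LLAP host before adding them (paths as in the example above; adjust for your HDP version):
# ls -l /usr/hdp/2.6.5.0-292/tez_hive2/hadoop-shim*.jar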
11-29-2018
11:48 AM
@Bhushan Kandalkar - Did you try a manual kinit of the same principal on the ambari-server machine, like:
# kinit admin/admin@REALM
Password: *******
- Check if the above works fine. If it does, try adding the credential using an API call, as temporary or permanent (a rough example is below): https://community.hortonworks.com/articles/42927/adding-kdc-administrator-credentials-to-the-ambari.html
- If it is still failing, you would need to share the ambari-server.log for the time of the failure. Hope this helps!
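For reference, a rough sketch of the credential API call the linked article describes (the Ambari host, cluster name, principal and password below are placeholders you must replace):
# curl -H "X-Requested-By: ambari" -u admin:admin -X POST \
  -d '{"Credential": {"principal": "admin/admin@REALM", "key": "KDC_ADMIN_PASSWORD", "type": "temporary"}}' \
  http://AMBARI_SERVER:8080/api/v1/clusters/CLUSTER_NAME/credentials/kdc.admin.credential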
12-01-2017
09:44 PM
10 Kudos
Short Description:
How to configure KNOX for Hive1 and Hive2 (LLAP) in parallel.
By default KNOX is configured for Hive1; this article will help you configure KNOX for Hive2 (LLAP).
Step 1. Before configuring KNOX for Hive2, you have to configure Hive2 for HTTP mode:
Go to Ambari -> Services -> Hive -> Configs -> Custom hive-interactive-site -> add the below properties:
hive.server2.thrift.http.path=cliservice
hive.server2.transport.mode=http
Restart the Hive service.
Step 2. Now configure KNOX for Hive2 (LLAP):
1) Go to the below location on your KNOX server machine:
# cd /usr/hdp/<HDP VERSION>/knox/data/services
2) Copy the hive directory present in that location and rename it to llap:
# cp -rp hive llap
3) Edit the service.xml and rewrite.xml as below:
# cd llap/0.13.0/
# vim service.xml
------------
<service role="LLAP" name="llap" version="0.13.0">
<routes>
<route path="/llap"/>
</routes>
<dispatch classname="org.apache.hadoop.gateway.hive.HiveDispatch" ha-classname="org.apache.hadoop.gateway.hive.HiveHaDispatch"/>
</service>
# vim rewrite.xml
------------
<rules>
<rule dir="IN" name="LLAP/llap/inbound" pattern="*://*:*/**/llap">
<rewrite template="{$serviceUrl[LLAP]}"/>
</rule>
</rules>
4) Go to Ambari -> KNOX -> Configs -> edit the Advanced topology of your KNOX service and add the LLAP service as:
<service>
    <role>LLAP</role>
    <url>http://<LLAP server hostname>:<HTTP PORT NUMBER>/{{hive_http_path}}</url>
</service>
Example: <url>http://abcd.example.com:10501/cliservice</url>
5) Restart the Knox service.
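To verify the new service end to end, you can point a Hive JDBC client at the Knox gateway; a rough Beeline example, assuming the topology is named "default", Knox listens on 8443, and the gateway truststore path shown here (all placeholders to adjust for your setup):
beeline -u "jdbc:hive2://<KNOX HOST>:8443/;ssl=true;sslTrustStore=/usr/hdp/current/knox-server/data/security/keystores/gateway.jks;trustStorePassword=<PASSWORD>;transportMode=http;httpPath=gateway/default/llap" -n <user> -p <password>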
Tags:
- hive-interactive
- How-ToTutorial
- knox-gateway
- llap
- Sandbox & Learning
10-03-2017
03:12 PM
@Ashnee Sharma You have to install this driver on the client side and use it to connect to Hive with all the details. Also check this link: https://community.hortonworks.com/questions/15667/windows-hive-connection-issue-through-odbc-using-h.html
09-27-2017
04:21 PM
@Ashnee Sharma You can find the links for the ODBC drivers at https://hortonworks.com/downloads/
09-26-2017
04:02 PM
1 Kudo
@Ashnee Sharma What version of the Hive ODBC driver are you using? Are you using the built-in Hortonworks Hive Hadoop driver or an installed DSN driver? The default built-in Hortonworks Hive Hadoop driver in Microsoft will not work. You have to use Other ODBC sources -> Hive DSN driver.
09-19-2017
11:35 AM
1 Kudo
Hello, I am using an Ambari blueprint to change the Kafka property security.inter.broker.protocol. I am able to change other properties during deployment, but it seems that this property always defaults to PLAINTEXTSASL. Actual configuration:
{
  "kafka-broker": {
    "properties": {
      "listeners": "PLAINTEXTSASL://localhost:6668",
      "security.inter.broker.protocol": "SASL_SSL"
    }
  }
}
Is there a bug in Ambari 2.2.2? If yes, in which version is this fixed?
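As a side note, one way to compare the blueprint value with what Ambari actually deployed is to dump the live kafka-broker config; a rough sketch using the configs.sh helper bundled with ambari-server (admin credentials and cluster name are placeholders):
# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost CLUSTER_NAME kafka-broker | grep security.inter.broker.protocol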
Labels:
- Apache Ambari
- Apache Kafka
08-17-2017
06:20 PM
2 Kudos
@arjun more
Could you please check these properties:
- User object class*: try changing it from person to user.
- Group member attribute*: try changing it from memberof to memberid.
- Distinguished name attribute*: try changing it from dn to distinguishedName.
These parameters depend on your LDAP environment. Please check these values once again and try the sync again.
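For reference, the same fields map to keys in ambari.properties; a sketch using the example values above (adjust them to your LDAP):
authentication.ldap.userObjectClass=user
authentication.ldap.groupMembershipAttr=memberid
authentication.ldap.dnAttribute=distinguishedName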
08-17-2017
05:18 PM
2 Kudos
@arjun more
Please check the below URL, which addresses similar concerns: https://community.hortonworks.com/questions/106430/is-there-any-way-to-get-the-list-of-user-who-submi.html
08-16-2017
06:40 PM
2 Kudos
@Sami Ahmad Check the below article for this: https://community.hortonworks.com/articles/56704/secure-kafka-java-producer-with-kerberos.html
08-16-2017
06:33 PM
1 Kudo
@Sami Ahmad Please check the below URLs:
https://community.hortonworks.com/questions/78843/problems-with-kafka-scripts-after-enabled-kerberos.html
https://community.hortonworks.com/content/supportkb/49422/running-kafka-client-bin-scripts-in-secure-envrion.html
Also check if you have a valid Kerberos ticket. If you use kinit, use this configuration:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true;
};
If you use a keytab, use this configuration:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
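To make the Kafka command-line tools pick up this JAAS file, you can export it through KAFKA_OPTS before running them; a rough console-producer example (the JAAS path, broker host, topic and the security.protocol value are assumptions; security.protocol must match your broker listener, e.g. PLAINTEXTSASL in HDP's naming):
# export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_client_jaas.conf"
# echo "security.protocol=PLAINTEXTSASL" > client.properties
# /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list broker1.example.com:6667 --topic test --producer.config client.properties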
08-14-2017
02:27 PM
1 Kudo
@koteswararao kasarla First check the "exectype" in the pig.properties file. If it is "exectype=tez", then set the below property in tez-site.xml:
<property>
    <name>tez.queue.name</name>
    <value>myqueue</value>
</property>
Check the Pig execution by running a service check.
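Alternatively, the queue can be set per script instead of cluster-wide; a sketch using Pig's set command inside the script (the queue name and input path are examples):
set tez.queue.name 'myqueue';
A = LOAD '/tmp/input' USING PigStorage(',');
DUMP A;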
08-11-2017
04:12 PM
3 Kudos
@arjun more You don't need to edit this column. As the error itself says "FOREIGN KEY (`upgrade_id`)", this will be set as @Jay SenSharma suggested. Please check the type of the column you are trying to edit:
mysql> desc clusters;
+-----------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------------------+--------------+------+-----+---------+-------+
| cluster_id | bigint(20) | NO | PRI | NULL | |
| resource_id | bigint(20) | NO | MUL | NULL | |
| upgrade_id | bigint(20) | YES | MUL | NULL | |
| cluster_info | varchar(255) | NO | | NULL | |
| cluster_name | varchar(100) | NO | UNI | NULL | |
| provisioning_state | varchar(255) | NO | | INIT | |
| security_type | varchar(32) | NO | | NONE | |
| desired_cluster_state | varchar(255) | NO | | NULL | |
| desired_stack_id | bigint(20) | NO | MUL | NULL | |
As the column is of type bigint(20), its default value is NULL. The blank field shown in the PostgreSQL DB is because of the type of the upgrade_id column.
08-09-2017
12:41 AM
@pv poreddy
Check the IDs in the hoststate table as well. If the ambari-agent on a host is running, it will show HEALTHY for that ID. Check whether it is for the correct host: you have to look up the ID in the hosts table and match it with hoststate (a sketch of the queries is below). Let me know if you need anything.
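A rough sketch of those lookups, assuming the default Ambari PostgreSQL schema (column names can differ slightly between Ambari versions):
ambari=> select host_id, host_name from hosts;
ambari=> select host_id, health_status from hoststate;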
08-08-2017
12:50 AM
3 Kudos
@pv poreddy You can log in to the Ambari DB and check whether you have entries in the database or not:
psql -U ambari
Password: ******
You can get the password from:
# cat /etc/ambari-server/conf/ambari.properties | grep password
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
# cat /etc/ambari-server/conf/password.dat
Execute the below command to get the host list from the DB:
ambari=> select * from hosts;
It should give you a list of hosts. Check whether you have it or not. If not, check the previous backups of the Ambari DB for the same tables.
08-08-2017
12:43 AM
1 Kudo
I have solved this issue by adding the hbase-site.xml and core-site.xml files to the Phoenix jar. Squirrel-SQL doesn't take hbase-site.xml and core-site.xml directly on the classpath; it tries to unzip them like normal jar files. So I extracted the Phoenix jar, added the hbase-site.xml and core-site.xml files to it, and created a new jar with the same name (a sketch of one way to do this is below). I added it to the Squirrel-SQL lib directory and restarted Squirrel-SQL. After this I am able to connect to Phoenix using Squirrel-SQL. Thank you very much for your help, @Sergey Soldatov and @Josh Elser
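A rough sketch of one way to repack the jar (jar uf adds the files to an existing jar in place; the jar name and config paths are examples to adjust for your environment):
# cd /tmp
# cp /etc/hbase/conf/hbase-site.xml /etc/hadoop/conf/core-site.xml .
# jar uf phoenix-4.7.0-HBase-1.1-client.jar hbase-site.xml core-site.xml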
08-07-2017
09:25 PM
3 Kudos
@Albert Stark You can configure the fencing method to avoid the split-brain scenario: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_hadoop-ha/content/ha-nn-config-cluster.html Check the link and configure the HDFS fencing method.
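For reference, a minimal sketch of the relevant hdfs-site.xml properties (the SSH key path is an example; follow the linked doc for the full setup):
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hdfs/.ssh/id_rsa</value>
</property>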
08-04-2017
06:39 PM
@Enis Thanks for the quick response. After setting the TTL on the table, we have to run a major compaction to delete the older-than-TTL data, right? How do we do this?
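For reference, a major compaction can be triggered manually from the HBase shell (the table name is an example):
hbase(main):001:0> major_compact 'mytable'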
08-03-2017
11:35 PM
3 Kudos
I am getting the below error while doing the initial sync for Ambari LDAP:
ambari-server sync-ldap --users /home/centos/users.txt
Using python /usr/bin/python
Syncing with LDAP...
Enter Ambari Admin login: admin
Enter Ambari Admin password:
Syncing specified users and groups.
ERROR: Exiting with exit code 1.
REASON: Sync event creation failed.
Error details: HTTP Error 502: Bad Gateway
I am using an internal proxy server, so I set up some configuration in ambari-env.sh for this:
export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Djava.security.auth.login.config=$ROOT/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Dhttp.proxyHost=FQDN -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts="FQDN|localhost|127.0.0.1"'
An ldapsearch command works fine. I have added the same configs in ambari.properties. After setting this, I am still getting the 502: Bad Gateway error.
Labels:
- Apache Ambari
08-03-2017
11:27 PM
3 Kudos
I have added the below block in the Knox topology:
<service>
<role>HIVE2</role>
<url>http://FQDN_LLAP_SERVER:10501/cliservice</url>
</service>
I also created the directory "$KNOX_HOME/data/services/hive2" with the service.xml and rewrite.xml files, and enabled the below properties in the Hiveserver2-Interactive-site.xml file:
hive.server2.thrift.http.path=cliservice
hive.server2.transport.mode=http
service.xml:
<service role="HIVE2" name="hive2" version="0.13.0">
<routes>
<route path="/hive2"/>
</routes>
<dispatch classname="org.apache.hadoop.gateway.hive.HiveDispatch" ha-classname="org.apache.hadoop.gateway.hive.HiveHaDispatch"/>
</service>
rewrite.xml:
<rules>
<rule dir="IN" name="HIVE2/hive2/inbound" pattern="*://*:*/**/hive2">
<rewrite template="{$serviceUrl[HIVE2]}"/>
</rule>
</rules>
I am getting the below error in Knox while accessing this path from the ODBC driver:
hadoop.gateway Failed to match path /hive2
Labels:
- Apache Hive
- Apache Knox
07-28-2017
07:25 PM
@Sergey Soldatov Thanks for the response. Which directories do you want me to add to the SQL lib? I guess it should be /usr/hdp/current/hbase-master/lib/hbase*.jar and /usr/hdp/current/hadoop-client/hadoop*.jar. Please confirm.
07-28-2017
04:26 PM
@Josh Elser I tried both and am getting the same error.
jdbc:phoenix:zk-host-1,zk-host-2,zk-host-3:2181:/hbase-secure:user1@EXAMPLE.COM:/Users/user1/user1.headless.keytab
07-28-2017
02:20 AM
1 Kudo
Getting the below error:
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I am using the below URL for the connection:
jdbc:phoenix:zk-host-1,zk-host-2,zk-host-3:2181:/hbase-secure:/Users/user1/user1.headless.keytab:user1@EXAMPLE.COM
Please help.
Labels:
- Apache HBase
- Apache Phoenix
07-25-2017
05:11 PM
1 Kudo
@Avani alamut Please check the below link which contains the same error, https://community.hortonworks.com/questions/86903/hive-metastore-can-not-start-on-ambari-24-and-hdp.html
07-25-2017
03:25 PM
@Avani alamut You can go to the Ambari UI and start the Metastore process if it is not running. If you are not using Ambari, use the below method instead:
hive --service metastore
Let me know if this helps.
07-25-2017
11:30 AM
3 Kudos
@Anurag Mishra Could you please paste the stack trace it gives before failing? It will be helpful to debug the issue.
07-25-2017
11:29 AM
5 Kudos
@ed day Hey, you don't need to worry about the admin user stuff, as I can see you have the "/user/admin" directory present in HDFS with owner "admin". Just log in as the hdfs user and change the ownership of the directory for 'ed' as:
# su hdfs
# hdfs dfs -chown ed:hdfs /user/ed
Let me know if this helps.