Member since: 05-22-2017
Posts: 126
Kudos Received: 16
Solutions: 14
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1583 | 02-07-2019 11:03 AM |
| | 3927 | 08-09-2018 05:08 AM |
| | 697 | 07-06-2018 07:51 AM |
| | 2078 | 06-22-2018 02:28 PM |
| | 1763 | 05-29-2018 01:14 PM |
09-17-2020
12:49 AM
You can use the public Hortonworks repo: https://repo.hortonworks.com/content/groups/public/ You may not find the exact version you mentioned, but you can browse the repo and pick dependencies that match your cluster version. You can try the dependency below:
<dependency>
  <groupId>com.hortonworks.shc</groupId>
  <artifactId>shc-core</artifactId>
  <version>1.1.0.3.1.5.0-152</version>
</dependency>
Let me know if it works; it should be compatible.
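For reference, here is a minimal Java read sketch using the SHC data source; the SparkSession setup and the table name "test", column family "cf1" and column names in the catalog are hypothetical placeholders, so adjust them to your own schema:

import java.util.HashMap;
import java.util.Map;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ShcReadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("shc-read").getOrCreate();
        // SHC catalog mapping a hypothetical HBase table "test" to a DataFrame schema
        String catalog = "{\"table\":{\"namespace\":\"default\", \"name\":\"test\"},"
            + "\"rowkey\":\"key\","
            + "\"columns\":{"
            + "\"id\":{\"cf\":\"rowkey\", \"col\":\"key\", \"type\":\"string\"},"
            + "\"name\":{\"cf\":\"cf1\", \"col\":\"name\", \"type\":\"string\"}}}";
        Map<String, String> options = new HashMap<>();
        options.put("catalog", catalog); // key expected by shc-core (HBaseTableCatalog.tableCatalog)
        // read the HBase table through the SHC data source
        Dataset<Row> df = spark.read()
            .options(options)
            .format("org.apache.spark.sql.execution.datasources.hbase")
            .load();
        df.show();
        spark.stop();
    }
}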
02-07-2019
11:03 AM
You can copy hbase-site.xml into your own directory and make the changes in that copy. Then export the property below and launch sqlline:
export HBASE_CONF_DIR=<new directory where you have copied hbase-site.xml>
08-09-2018
05:59 AM
Below are the high-level requirements for connecting to a secure HBase cluster:
- hbase-client
- HBase config files
- Kerberos config files and a keytab for the user

The POM file and sample code are given below.

Java class (change the paths for the config files and the Kerberos-related parameters):

package com.hortonworks.hbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.security.UserGroupInformation;

public class HBaseConnection {
    public static void main(String ar[]) throws IOException {
        // System properties (change paths/properties according to env)
        // copy krb5.conf from the cluster
        System.setProperty("java.security.krb5.conf", "/Users/schhabra/krb5.conf");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "true");
        // Configuration (change paths/properties according to env)
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hadoop.security.authentication", "Kerberos");
        // copy hbase-site.xml and hdfs-site.xml from the cluster and set their paths
        configuration.addResource(new Path("file:///Users/schhabra/hbase-site.xml"));
        configuration.addResource(new Path("file:///Users/schhabra/hdfs-site.xml"));
        UserGroupInformation.setConfiguration(configuration);
        // User information (change paths/properties according to env)
        UserGroupInformation.loginUserFromKeytab("ambari-qa-c1201@HWX.COM",
            "/Users/schhabra/smokeuser.headless.keytab");
        // Connection
        Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create(configuration));
        System.out.println(connection.getAdmin().isTableAvailable(TableName.valueOf("SYSTEM.STATS")));
        Scan scan1 = new Scan();
        Table table = connection.getTable(TableName.valueOf("test"));
        ResultScanner scanner = table.getScanner(scan1);
    }
}

POM (dependencies):

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.hortonworks</groupId>
  <artifactId>hbase</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>hbase</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <repositories>
    <repository>
      <id>HDP</id>
      <name>HDP Releases</name>
      <!--url>http://repo.hortonworks.com/content/repositories/releases/</url-->
      <url>http://repo.hortonworks.com/content/groups/public</url>
    </repository>
  </repositories>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>1.1.2.2.5.0.0-1245</version>
    </dependency>
  </dependencies>
</project>
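To complete the sample above, which opens the scanner but never reads from it, here is a short follow-on sketch; it reuses the scanner, table and connection variables from the class and assumes two extra imports (org.apache.hadoop.hbase.client.Result and org.apache.hadoop.hbase.util.Bytes):

// iterate the rows returned by the scan and print each row key
for (Result result : scanner) {
    System.out.println("Row key: " + Bytes.toString(result.getRow()));
}
// release client resources when done
scanner.close();
table.close();
connection.close();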
Tags:
- connection
- Data Science & Advanced Analytics
- FAQ
- HBase
- How-ToTutorial
- Kerberos
- Security
08-09-2018
05:16 AM
Try the sample code below. pom.xml:
<repositories>
<repository>
<id>HDP</id>
<name>HDP Releases</name>
<!--url>http://repo.hortonworks.com/content/repositories/releases/</url-->
<url>http://repo.hortonworks.com/content/groups/public</url>
</repository>
</repositories>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>1.1.2.2.5.0.0-1245</version>
</dependency>
</dependencies>
Java code:
package com.hortonworks.hbase;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.security.UserGroupInformation;
public class HBaseConnection {
public static void main(String ar[]) throws IOException {
//System Properties (Change Path/Properties according to env)
//copy krb5.conf from cluster
System.setProperty("java.security.krb5.conf", "/Users/schhabra/krb5.conf");
System.setProperty("javax.security.auth.useSubjectCredsOnly", "true");
//Configuration (Change Path/Properties according to env)
Configuration configuration = HBaseConfiguration.create();
configuration.set("hadoop.security.authentication", "Kerberos");
//copy hbase-site.xml and hdfs-site.xml from cluster and set paths
configuration.addResource(new Path("file:///Users/schhabra/hbase-site.xml"));
configuration.addResource(new Path("file:///Users/schhabra/hdfs-site.xml"));
UserGroupInformation.setConfiguration(configuration);
//User information (Change Path/Properties according to env)
UserGroupInformation.loginUserFromKeytab("ambari-qa-c1201@HWX.COM",
"/Users/schhabra/smokeuser.headless.keytab");
//Connection
Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create(configuration));
System.out.println(connection.getAdmin().isTableAvailable(TableName.valueOf("SYSTEM.STATS")));
Scan scan1 = new Scan();
Table table = connection.getTable(TableName.valueOf("test"));
ResultScanner scanner = table.getScanner(scan1);
}
}
08-09-2018
05:08 AM
@Michael Graml It is not possible; if the coordinator is killed, its workflows will be killed as well.
08-09-2018
04:57 AM
Can you try something like:
spark-shell --master=yarn --jars /home/<>/spark-sql-kafka-0-10_2.11-2.1.1.jar,<>/libs/kafka-clients-0.10.1.2.6.2.0-205.jar
07-30-2018
08:43 PM
Check whether you are able to telnet to the RM host on port 8050, and also check the netstat output on the RM machine to see whether there are any connections from the node on which the service check is running.
07-30-2018
08:33 PM
1 Kudo
Ensure that the Phoenix Query Server has the updated hbase-site.xml with phoenix.schema.isNamespaceMappingEnabled=true; PQS and HBase should be restarted after making the change.
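If you want to double-check the setting from the client side, here is a minimal Java JDBC sketch using the thick Phoenix driver; the ZooKeeper quorum and znode below are hypothetical, and the client-side property must match the server/PQS value or the connection typically fails with an inconsistent namespace mapping error:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class PhoenixNamespaceCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // must agree with phoenix.schema.isNamespaceMappingEnabled on the server side
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        // hypothetical ZooKeeper quorum and znode; replace with your cluster's values
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:phoenix:zk1.example.com,zk2.example.com:2181:/hbase-unsecure", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")) {
            System.out.println("Connected; namespace mapping is consistent: " + rs.next());
        }
    }
}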
07-27-2018
08:15 AM
You can check https://community.hortonworks.com/articles/19016/connect-to-phoenix-hbase-using-dbvisualizer.html
07-06-2018
08:31 AM
You can refer to this code: https://github.com/apache/spark/tree/master/examples. You can import it into IntelliJ and try connecting.
07-06-2018
07:51 AM
1 Kudo
I have created an article for this: https://community.hortonworks.com/articles/201959/override-log4j-property-file-via-oozie-workflow-fo.html Please refer to it.
07-06-2018
07:50 AM
Put the log4j.properties file on an HDFS path and then reference that HDFS path in the workflow to override the defaults. Sample log4j file:
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. See accompanying LICENSE file.
#
# Define some default values that can be overridden by system properties
hadoop.root.logger=DEBUG,CLA
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=ALL
#
# ContainerLog Appender
#
#Default values
yarn.app.container.log.dir=null
yarn.app.container.log.filesize=100
log4j.appender.CLA=org.apache.hadoop.yarn.ContainerLogAppender
log4j.appender.CLA.containerLogDir=${yarn.app.container.log.dir}
log4j.appender.CLA.totalLogFileSize=${yarn.app.container.log.filesize}
log4j.appender.CLA.layout=org.apache.log4j.PatternLayout
log4j.appender.CLA.layout.ConversionPattern=%d{ISO8601} %p [%t] %c: %m%n
log4j.appender.CRLA=org.apache.hadoop.yarn.ContainerRollingLogAppender
log4j.appender.CRLA.containerLogDir=${yarn.app.container.log.dir}
log4j.appender.CRLA.maximumFileSize=${yarn.app.container.log.filesize}
log4j.appender.CRLA.maxBackupIndex=${yarn.app.container.log.backups}
log4j.appender.CRLA.layout=org.apache.log4j.PatternLayout
log4j.appender.CRLA.layout.ConversionPattern=%d{ISO8601} %p [%t] %c: %m%n
#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
Sample workflow.xml:
<workflow-app name="javaaction" xmlns="uri:oozie:workflow:0.5">
<global>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
</global>
<start to="java-action"/>
<kill name="kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="java-action">
<java>
<configuration>
<property>
<name>oozie.launcher.mapreduce.task.classpath.user.precedence</name>
<value>true</value>
</property>
<property>
<name>oozie.launcher.mapreduce.user.classpath.first</name>
<value>true</value>
</property>
<property>
<name>oozie.launcher.mapred.job.name</name>
<value>test</value>
</property>
<property>
<name>oozie.launcher.mapreduce.job.log4j-properties-file</name>
<value>${nameNode}/tmp/log4j.properties</value>
</property>
</configuration>
<main-class>WordCount2</main-class>
<arg>${nameNode}/tmp/input</arg>
<arg>${nameNode}/tmp/output2</arg>
</java>
<ok to="end"/>
<error to="kill"/>
</action>
<end name="end"/>
</workflow-app>
Tags:
- How-ToTutorial
- logging
- Oozie
- oozie-java
- Sandbox & Learning
07-06-2018
07:34 AM
1 Kudo
Yes. It will fail.
06-23-2018
04:43 PM
Ensure the existing cluster into which you are importing policies does not already contain duplicate policies.
06-23-2018
07:30 AM
Check whether the standby NameNode becomes active when you restart the currently active NameNode. Can you also share the region server logs?
06-23-2018
07:27 AM
Looks like a problem with the DataNodes. Check whether all DataNodes are up. Once HDFS and the DataNodes are healthy, you will be able to start HBase.
06-22-2018
08:01 PM
Please check whether the service accounts are set up properly: https://community.hortonworks.com/content/supportkb/49449/how-to-rename-service-account-users-in-ambari.html
06-22-2018
02:45 PM
Can you check whether HDFS is healthy? Do you see any missing blocks reported by the NameNode?
2018-06-15 14:12:52,958 FATAL [vmbdsiwbdn2:16000.activeMasterManager] master.HMaster: Failed to become active master org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1307428289-10.0.0.4-
06-22-2018
02:39 PM
Can you attach the logs? The error you posted is generic.
06-22-2018
02:28 PM
Remove the "oozie.coord.application.path" parameter from the job.xml config file when you rerun it. Also ensure the "oozie.wf.rerun.skip.nodes" property is added while rerunning the workflow. Command:
oozie job -rerun <wf-id> -config /tmp/job.xml
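The same rerun can also be driven from the Oozie Java client API if that is more convenient; this is a minimal sketch in which the Oozie URL, workflow id, application path and node names are hypothetical placeholders:

import java.util.Properties;
import org.apache.oozie.client.OozieClient;

public class RerunWorkflow {
    public static void main(String[] args) throws Exception {
        // hypothetical Oozie server URL
        OozieClient client = new OozieClient("http://oozie-host.example.com:11000/oozie");
        // start from the original job properties (jobTracker, nameNode, etc.),
        // leave out oozie.coord.application.path, and add the skip-nodes list
        Properties conf = client.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs:///user/oozie/apps/my-workflow");
        conf.setProperty(OozieClient.RERUN_SKIP_NODES, "node-a,node-b");
        // hypothetical workflow id of the run being rerun
        client.reRun("0000123-180622000000000-oozie-oozi-W", conf);
    }
}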
06-22-2018
02:23 PM
"Server not found in Kerberos database" means the client is not able to resolve the service principal correctly.
--> Ensure you are using FQDNs for the ZooKeeper hosts in the connection string.
--> Ensure that the local machine where you are running DBVisualizer is able to resolve (forward/reverse) the IPs/hostnames (FQDN) correctly for the ZooKeeper hosts, region servers and HBase masters.
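A quick way to sanity-check forward/reverse resolution from the machine running DBVisualizer is a small Java probe; the hostnames below are hypothetical, so substitute your ZooKeeper, region server and HBase master FQDNs:

import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        String[] hosts = {"zk1.example.com", "regionserver1.example.com", "hbasemaster1.example.com"};
        for (String host : hosts) {
            InetAddress addr = InetAddress.getByName(host);   // forward lookup (hostname -> IP)
            String reverse = addr.getCanonicalHostName();     // reverse lookup (IP -> FQDN)
            System.out.println(host + " -> " + addr.getHostAddress() + " -> " + reverse);
        }
    }
}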
06-22-2018
02:19 PM
Can you check whether phoenix.schema.isNamespaceMappingEnabled is true? If it is false in your environment, then the Phoenix query creates a new table under the default namespace of HBase instead of mapping it. If it is already true, then execute the following to map the table:
CREATE TABLE PROD."MYTAB1" (ID VARCHAR PRIMARY KEY, "CF1"."NAME" VARCHAR, "CF1"."DEPT" VARCHAR, "CF2".SALARY VARCHAR, "CF2".DESIGNATION VARCHAR);
If it is false, you have to set it to true and then try mapping the table.
06-22-2018
02:11 PM
1 Kudo
The queue has reached its maximum number of applications; try again once there is capacity in the queue:
org.apache.hadoop.security.AccessControlException: Queue root.default already has 10000 applications, cannot accept submission of application: application_1519070798024_136600
06-17-2018
08:55 AM
https://issues.apache.org/jira/browse/OOZIE-2787 - this is the bug you are hitting. To get rid of this error, ensure that duplicate jar files are not present under the oozie.libpath, Oozie share lib and Spark share lib directories.
06-12-2018
07:43 PM
This is not the right way to execute Spark from Oozie; it is failing because it cannot find the spark-submit command in the cache directory while launching. Instead, use the Spark action: https://gist.github.com/rajkrrsingh/71f43afaac098428dc614d50ca0293ac
06-12-2018
07:22 PM
It says: Error: Cluster deploy mode is not compatible with master.
For <master>${master}</master> and <mode>${mode}</mode>, the correct parameters would be:
master=yarn
mode=cluster