Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 590 | 06-04-2025 11:36 PM |
| | 1142 | 03-23-2025 05:23 AM |
| | 572 | 03-17-2025 10:18 AM |
| | 2155 | 03-05-2025 01:34 PM |
| | 1351 | 03-03-2025 01:09 PM |
07-21-2017
07:48 PM
1 Kudo
@Hardik Dave
All client software should be able to launch a job on the cluster to be processed, and you can view it through the RM UI! For Hive tables, the data is usually in HDFS, under /user/hive/warehouse or /apps/hive/warehouse on HDP, something like this:
hdfs dfs -ls /apps/hive/warehouse/prodtest.db
Found 4 items
drwxrwxrwx   - hive hdfs          0 2016-10-29 00:54 /apps/hive/warehouse/prodtest.db/t1
drwxrwxrwx   - hive hdfs          0 2016-10-29 00:54 /apps/hive/warehouse/prodtest.db/t2
drwxrwxrwx   - hive hdfs          0 2016-10-29 00:54 /apps/hive/warehouse/prodtest.db/t3
drwxrwxrwx   - hive hdfs          0 2016-10-29 00:54 /apps/hive/warehouse/prodtest.db/t4
You should be able to create a table on the fly after your processing:
create database if not exists prodtest;
use prodtest;
--no LOCATION
create table t1 (i int);
create EXTERNAL table t2(i int);
create table t3(i int) PARTITIONED by(b int);
create EXTERNAL table t4(i int) PARTITIONED by(b int);
--with LOCATION
create table t5 (i int) LOCATION '/tmp/tables/t5';
create EXTERNAL table t6(i int) LOCATION '/tmp/tables/t6';
Just examples.
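If you want to double-check where each table actually landed, a quick sketch (reusing the prodtest database and tables from the examples above) is to ask Hive for the table metadata and look at the Location: field:
# shows the storage location Hive recorded for each table;
# look for the "Location:" line in the output
hive -e "DESCRIBE FORMATTED prodtest.t1;"
hive -e "DESCRIBE FORMATTED prodtest.t5;"
Tables created without a LOCATION (t1-t4) land under the warehouse directory, while t5 and t6 should point at the /tmp/tables paths you supplied.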
07-21-2017
07:36 PM
@manoj kumar I can see the error: you forgot the last bit after the hdp.repo ......
wget -nv http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0/hdp.repo -O /etc/yum.repos.d/hdp.repo
Once you have run this command, your command should succeed; let me know, as the repos will be visible 🙂
07-11-2017
05:18 PM
@Hardik Dave
1. The edge node usually has all the client software, like the Spark client and Hive client, installed to interact with the cluster (YARN, NameNode, DataNodes, etc.). The edge node has the client configs distributed during cluster setup, so for Hive and Spark you connect through those; e.g. to connect to the Hive database over JDBC, your client uses the local hive-site.xml, which holds the Hive database configuration.
2. HDFS is a fault-tolerant file system; that is part of why it's called a distributed file system. In a production environment you will need at minimum 3 datanodes. (Although 3x replication is quite common, the actual optimal value depends on the cost of N-way replication, the cost of failure, and the relative probability of failure.) The reason for having at least 3 datanodes is to avoid data loss.
To launch a Spark application in cluster mode:
$ ./bin/spark-submit --class path.to.your.Class --master yarn --deploy-mode cluster [options] <app jar> [app options]
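For instance, a filled-in invocation could look like the sketch below; the class name, jar, and executor sizing are hypothetical placeholders, not values from this thread:
# class, jar name, and sizing below are hypothetical placeholders
$ ./bin/spark-submit \
    --class com.example.MyApp \
    --master yarn \
    --deploy-mode cluster \
    --num-executors 3 \
    --executor-memory 2g \
    my-app.jar hdfs:///input hdfs:///output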
07-10-2017
01:16 PM
@manoj kumar Please could you try the below:
yum repolist all
sudo yum-config-manager --enable epel
Then rerun:
sudo yum install atlas-metadata_2_3_*
07-10-2017
01:03 PM
@Hardik Dave In a real-world situation you connect to the edge node, which has all the client libraries and configs; i.e. in a simple 6-node cluster you have 2 namenodes (HA setup), 3 datanodes (replication factor of 3), and 1 edge node where the client software (Hive, Flume, Sqoop, HDFS clients) is installed, and connections to the cluster should be restricted to go only through the edge node. During deployment of the cluster the jar files are copied to the sharedlib; if not, it can be done after. You should be able to invoke hive from the edge node. As the root user on the edge node, switch to a service user:
[root@myserver 0]# su - hdfs
[hdfs@myserver ~]$ hive
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.5.0.0-817/0/hive-log4j.properties
hive> show databases;
.....
hive> create database olum;
OK
Time taken: 11.821 seconds
hive> use olum;
OK
Time taken: 5.593 seconds
hive>
hive> CREATE TABLE olum (surname string, name string, age INT);
OK
Time taken: 8.024 seconds
hive> INSERT INTO olum VALUES ('James', 'Bond', 22), ('Peter','Welsh', 33);
Query ID = hdfs_20161220082127_de77b9f0-953d-4442-a280-aa93dcc30d9c
Total jobs = 1
Launching Job 1 out of 1
Tez session was closed. Reopening...
Session re-established.
You should see something like this.
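Once the session is re-established and the job finishes, a quick sanity check (a sketch reusing the olum table created above) is to select the rows back:
hive> SELECT * FROM olum;
OK
James   Bond    22
Peter   Welsh   33
If you get back the two rows you inserted, the edge node, Hive, and YARN/Tez are all wired up correctly.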
07-10-2017
12:34 PM
@subash sharma Is the Hive hook enabled? Check that the hooks are enabled in the Ranger config.
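One quick check from the Hive side (a sketch; this assumes the hook is wired in through the standard hive.exec.post.hooks property):
# prints the hook classes currently configured; an empty value means
# no post-execution hook is registered
hive -e "SET hive.exec.post.hooks;"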
06-03-2017
08:53 PM
@Satish Sarapuri How did you look for the db folder? Usually you run the below commands:
# su - hdfs
$ hdfs dfs -ls /
This should give you the answer; most probably:
hdfs dfs -ls /apps/hive/warehouse/
And also check the below property in the hive-site.xml file:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of the warehouse directory</description>
</property>
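You can also ask Hive directly for the effective value (a minimal sketch; your_db is a placeholder for your database name):
# prints the warehouse directory Hive is actually using
hive -e "SET hive.metastore.warehouse.dir;"
# then list the database folder there (replace your_db)
hdfs dfs -ls /apps/hive/warehouse/your_db.db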
06-02-2017
07:31 PM
@Karan Alang I think there is no better answer than this page; I was just reading it some hours ago on the train, and it does exactly that: kafka not in ISR
06-02-2017
09:53 AM
@Joshua Adeleke
HDP 2.6 uses the InnoDB engine instead of the MyISAM engine used by earlier versions of HDP up to 2.5.3, if you are running MySQL. You will need to change the storage engine to InnoDB, so run the below statements while logged on as the MySQL root user.
Converting an existing table: first check the table engine:
SELECT `ENGINE` FROM `information_schema`.`TABLES` WHERE `TABLE_SCHEMA` = 'your_database_name' AND `TABLE_NAME` = 'your_table_name';
or:
SHOW TABLE STATUS WHERE Name = 'xxx';
Then change the engine:
ALTER TABLE my_table ENGINE = InnoDB;
Then you can restart your database and the errors shouldn't show. Hope that helps.
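If many tables are affected, one hedged shortcut (assuming the default SQL mode, with your_database_name as a placeholder) is to generate the ALTER statements from information_schema and then run the emitted lines:
# emits one "ALTER TABLE ... ENGINE = InnoDB;" line per MyISAM table
mysql -u root -p -N -e 'SELECT CONCAT("ALTER TABLE `", TABLE_SCHEMA, "`.`", TABLE_NAME, "` ENGINE=InnoDB;") FROM information_schema.TABLES WHERE TABLE_SCHEMA = "your_database_name" AND ENGINE = "MyISAM";'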
05-31-2017
02:05 PM
@priyanshu hasija Try the below steps:
$ klist -kt /etc/security/keytabs/hbase.service.keytab
Keytab name: FILE:/etc/security/keytabs/hbase.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
   1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
Then, using the principal shown above, do this:
$ kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/FQDN@EXAMPLE.COM
Now run your job and let me know.
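To confirm the ticket was actually obtained (a minimal check; the cache location varies by system), run klist with no arguments:
# after a successful kinit, the default principal shown should be
# hbase/FQDN@EXAMPLE.COM
$ klist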