Member since: 09-17-2016
Posts: 29
Kudos Received: 0
Solutions: 1

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 1689 | 10-09-2019 06:15 PM |
04-22-2020 02:59 PM
Hi, can someone please share the link to download the Quickstart VM?
Tags:
- download
- QuickStart
Labels:
- Cloudera Data Platform (CDP)
10-09-2019 06:15 PM
I have solved the issue. My NameNode was in safe mode, so I turned safe mode off with:
hadoop dfsadmin -safemode leave
10-09-2019 05:37 PM
10-09-2019 05:32 PM
Yes, I didn't notice that it had a space, so I used trim() for it.
10-09-2019 05:16 PM
I am using the Cloudera VirtualBox VM and trying to log in to Beeline. I tried the username and password (empty), but it's not working:

!connect jdbc:hive2://localhost:10000/
Connecting to jdbc:hive2://localhost:10000/
Enter username for jdbc:hive2://localhost:10000/:
Enter password for jdbc:hive2://localhost:10000/:
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/: java.net.ConnectException: Connection refused (state=08S01,code=0)
0: jdbc:hive2://localhost:10000/ (closed)>

[cloudera@quickstart ~]$ beeline
2019-10-09 17:06:46,217 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Beeline version 1.1.0-cdh5.7.0 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/
scan complete in 5ms
Connecting to jdbc:hive2://localhost:10000/
Enter username for jdbc:hive2://localhost:10000/: hadoop
Enter password for jdbc:hive2://localhost:10000/:
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/: java.net.ConnectException: Connection refused (state=08S01,code=0)
0: jdbc:hive2://localhost:10000/ (closed)>

Does anyone know the username and password to log in to Beeline? I have checked the Hive Metastore and HiveServer2; both are running:

[cloudera@quickstart ~]$ sudo service hive-metastore status
Hive Metastore is running
[cloudera@quickstart ~]$ sudo service hive-server2 status
Hive Server2 is running [ OK ]
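A note on the error above: `java.net.ConnectException: Connection refused` is reported before any credentials are checked, so no username/password combination will help until something is actually listening on port 10000 (a wrong password would produce an authentication failure instead). A quick way to confirm whether the port is open is a plain TCP probe; this is a minimal sketch, with the host and port taken from the post:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeout, host unreachable, ...
        return False

# If this prints False, HiveServer2 is not listening yet and Beeline will
# keep failing with ConnectException regardless of the credentials used.
print(port_open("localhost", 10000))
```

HiveServer2 can take a while to open its port even after the init script reports it as running, so it may be worth retrying this check a minute or two after startup.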
Labels:
- Apache Hadoop
- Apache Hive
10-06-2019 12:26 AM
I am using the Cloudera VirtualBox VM. While creating dynamic partitions, Hive is creating all the partitions whether they are unique or not:

create table product_order1(id int, user_id int, amount int, product string, city string, txn_date string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
LOAD DATA LOCAL INPATH 'txn' INTO TABLE product_order1;
Loading data to table oct19.product_order1
Table oct19.product_order1 stats: [numFiles=1, totalSize=303]
OK
Time taken: 0.426 seconds
hive> set hive.exec.dynamic.partition = true;
hive> set hive.exec.dynamic.partition.mode = true;
hive> create table dyn_part(id int, user_id int, amount int, product string, city string) PARTITIONED BY(txn_date string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
OK
Time taken: 0.14 seconds
hive> INSERT OVERWRITE TABLE dyn_part PARTITION(txn_date) select id,user_id,amount,product,city,txn_date from product_order1;

The result which I received:

Loading data to table oct19.dyn_part partition (txn_date=null)
Time taken for load dynamic partitions : 944
Loading partition {txn_date=04-02-2015}
Loading partition {txn_date= 03-04-2015}
Loading partition {txn_date=01-02-2015}
Loading partition {txn_date=03-04-2015}
Loading partition {txn_date= 01-01-2015}
Loading partition {txn_date=01-01-2015}
Loading partition {txn_date= 01-02-2015}
Time taken for adding to write entity : 5
Partition oct19.dyn_part{txn_date= 01-01-2015} stats: [numFiles=1, numRows=1, totalSize=25, rawDataSize=24]
Partition oct19.dyn_part{txn_date= 01-02-2015} stats: [numFiles=1, numRows=1, totalSize=25, rawDataSize=24]
Partition oct19.dyn_part{txn_date= 03-04-2015} stats: [numFiles=1, numRows=2, totalSize=50, rawDataSize=48]
Partition oct19.dyn_part{txn_date=01-01-2015} stats: [numFiles=1, numRows=1, totalSize=26, rawDataSize=25]
Partition oct19.dyn_part{txn_date=01-02-2015} stats: [numFiles=1, numRows=1, totalSize=26, rawDataSize=25]
Partition oct19.dyn_part{txn_date=03-04-2015} stats: [numFiles=1, numRows=1, totalSize=26, rawDataSize=25]
Partition oct19.dyn_part{txn_date=04-02-2015} stats: [numFiles=1, numRows=1, totalSize=25, rawDataSize=24]
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Cumulative CPU: 4.03 sec  HDFS Read: 4166  HDFS Write: 614  SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 30 msec
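The log above shows pairs of partitions that differ only by a leading space in txn_date, e.g. {txn_date=01-01-2015} and {txn_date= 01-01-2015}. Hive compares partition key values as exact strings, so the two spellings become distinct partitions; cleaning the key in the INSERT ... SELECT (for example selecting trim(txn_date) instead of txn_date) should collapse them. A small Python sketch of the same effect, using the key values from the log:

```python
# Partition keys exactly as they appear in the load log; note the
# leading space in three of them.
raw_dates = ["04-02-2015", " 03-04-2015", "01-02-2015", "03-04-2015",
             " 01-01-2015", "01-01-2015", " 01-02-2015"]

# Exact string comparison (what Hive does with partition key values):
print(len(set(raw_dates)))                      # 7 distinct partitions

# After trimming whitespace, only the genuinely unique dates remain:
print(len({d.strip() for d in raw_dates}))      # 4 distinct partitions
```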
Labels:
- Apache Hive
11-18-2017 11:22 PM
I tried using a main method instead of extends App, but I am still getting the same error.
11-18-2017 10:20 PM
Hi, I am new to Spark, working in standalone mode. I have installed Eclipse and JDK 8 on my local machine and written a word count program in Scala:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
object WordCount extends App {
val conf = new SparkConf().setAppName("My Word Count Job")
val sc = new SparkContext(conf)
val file = sc.textFile("/spark/fruits.txt")
val words = file.flatMap(x => x.split(","))
val wordsPair = words.map(x => (x, 1))
val wordsCount = wordsPair.reduceByKey(_ + _)
val sorted = wordsCount.sortBy(x => x._2, false)
sorted.saveAsTextFile("/spark/eclipse_out")
}

Getting an error on execution:

spark-submit --master local --class WordCount wcspark.jar
Exception in thread "main" java.lang.NoSuchMethodException: WordCount.main([Ljava.lang.String;)
at java.lang.Class.getMethod(Class.java:1665)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:716)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Kindly help me out.
Labels:
- Apache Spark
11-18-2017 02:13 PM
Hi, I am new to Spark, working on Cloudera standalone. I have installed Eclipse and JDK 8 on my local machine and written a word count program in Scala:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
object WordCount extends App {
val conf = new SparkConf().setAppName("My Word Count Job")
val sc = new SparkContext(conf)
val file = sc.textFile("/spark/fruits.txt")
val words = file.flatMap(x => x.split(","))
val wordsPair = words.map(x => (x, 1))
val wordsCount = wordsPair.reduceByKey(_ + _)
val sorted = wordsCount.sortBy(x => x._2, false)
sorted.saveAsTextFile("/spark/eclipse_out")
}
Getting an error on execution:

[cloudera@quickstart ~]$ spark-submit --master local --class WordCount wcspark.jar
Exception in thread "main" java.lang.NoSuchMethodException: WordCount.main([Ljava.lang.String;)
at java.lang.Class.getMethod(Class.java:1665)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:716)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
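The stack trace shows spark-submit looking up the entry point by reflection (Class.getMethod("main", ...)) and not finding a static main(String[]) on WordCount. A common workaround when extends App misbehaves is to declare the main method explicitly; this is a hedged sketch of the same word count with an explicit entry point (untested here, same file paths as in the post):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("My Word Count Job")
    val sc = new SparkContext(conf)
    val counts = sc.textFile("/spark/fruits.txt")
      .flatMap(_.split(","))       // split each line into words
      .map((_, 1))                 // pair each word with a count of 1
      .reduceByKey(_ + _)          // sum counts per word
      .sortBy(_._2, ascending = false)
    counts.saveAsTextFile("/spark/eclipse_out")
    sc.stop()
  }
}
```

If the same exception persists even with an explicit main, it is worth checking with `jar tf wcspark.jar` that the compiled WordCount.class actually made it into the jar under the expected package, since the reflection lookup fails identically when the class name passed to --class does not match what is in the jar.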
Kindly help me out
Labels:
- Apache Spark
- Quickstart VM
05-29-2017 07:16 AM
I am facing some issues with HPL/SQL. I am using Hive 1.2. Can you please help me out?
1. TRIM in PL/SQL supports trimming a pattern from a string, but TRIM in HPL/SQL removes only spaces and does not support removing a pattern.
2. UDFs created in Hive cannot be used in an HPL/SQL script, as its ANTLR4 parser does not have them in its lexicon.
05-08-2017 08:05 AM
Hi, I have installed HPL/SQL and created a stored procedure that drops a table in Hive, but I am getting an error when I pass the table name as an argument:

CREATE OR REPLACE PROCEDURE xmCleanupMatcher(IN name STRING)
AS
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE ' || name;
DBMS_OUTPUT.PUT_LINE('Welcome to stored procedure');
END;
/
EXEC xmCleanupMatcher('f_in21');

Here I am passing the table name f_in21, but I am getting an error:

1:11 cannot recognize input near '<EOF>' '<EOF>' '<EOF>' in table name

This table is already present in Hive. If I don't pass an argument and specify the table name at the start instead, I do not get an error. Kindly help me out.
05-08-2017 07:58 AM
09-30-2016 02:32 PM
Hi, the property is already true. As I understand it, I can pass arguments in two ways:

1. Passing the value through the CLI:
hive -hiveconf current_date=01-01-2015 -f argument.hql
Here my script is:
select * from glvc.product where date = '${hiveconf:current_date}';
This command executes fine and I get the result.

2. Setting the value inside the script. In this case I have already set the value in my script file and I don't want to pass it through the CLI. If I run the command hive -hiveconf:current_date -f argument.hql, I don't get the result; that's why I had used a variable earlier. Script:
set current_date = 01-01-2015;
select * from glvc.product where date = '${hiveconf:current_date}';
I don't know how to use hiveconf in this case where the value is already set. Kindly solve my problem for the case of passing arguments this way.
09-26-2016 05:34 PM
Hi, I want to pass parameters to a Hive script. Note: I don't want to execute this script using command-line arguments, so I don't want to give any arguments at run time.

set current_date = 01-01-2015;
select * from glvc.product where date = '${hiveconf:start_date}';

When I execute the script, I don't get any result:

[cloudera@quickstart ~]$ hive -hiveconf start_date=current_date -f argument_script.hql
2016-09-26 17:30:18,460 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
OK
Time taken: 8.393 seconds
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
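One mismatch to note in the invocation above: the script sets current_date, but the query references ${hiveconf:start_date}, and the CLI passes start_date=current_date, so the literal string "current_date" (not the date value) ends up in the WHERE clause and matches no rows. Hive's ${hiveconf:...} substitution is plain text replacement; this hypothetical re-implementation in Python makes the mismatch visible (the substitution function is an illustration, not Hive's actual code):

```python
import re

def substitute_hiveconf(query: str, conf: dict) -> str:
    """Plain-text ${hiveconf:name} substitution; unknown names are
    left untouched here, purely for illustration."""
    def repl(m):
        return conf.get(m.group(1), m.group(0))
    return re.sub(r"\$\{hiveconf:(\w+)\}", repl, query)

query = "select * from glvc.product where date = '${hiveconf:start_date}'"

# What the post's CLI call effectively does: start_date is set to the
# literal string "current_date", so the query compares against that string.
print(substitute_hiveconf(query, {"start_date": "current_date"}))

# Matching the variable name to a real value produces the intended query.
print(substitute_hiveconf(query, {"start_date": "01-01-2015"}))
```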
Tags:
- Hive
Labels:
- Apache HBase
- Apache Hive
- MapReduce
09-22-2016 02:12 PM
Thanks
09-22-2016 02:08 PM
Thanks. I want to know something: is it necessary to put the file in the /tmp location? Can't it work from any other location in HDFS? Suppose I have a file at /hello/employee.txt. Can't I use this employee.txt file from that path to load the data?
09-21-2016 01:54 AM
Hi, I am not getting any error, but the data loaded into the table shows NULL values:

load data inpath '/priyanka/txn' into table transaction;
No rows affected (0.568 seconds)
select * from transaction;
+-----------------+-----------------+---------------------+----------------------+-------------------+-------------------+--+
| transaction.sr | transaction.id | transaction.amount | transaction.product | transaction.city | transaction.date |
+-----------------+-----------------+---------------------+----------------------+-------------------+-------------------+--+
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
+-----------------+-----------------+---------------------+----------------------+-------------------+-------------------+--+

While placing the file in /tmp, it again shows all the values as NULL. @Benjamin Leonhardi
09-20-2016 06:21 PM
Hi, loading the data from HDFS or from local succeeds, but it shows NULL values in every column instead of the data. Kindly help.

1) While loading the data from HDFS:

0: jdbc:hive2://localhost:10000> create table transaction(sr int,id int,amount int,product string,city string,date string);
No rows affected (0.114 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+---------------+--+
| tab_name |
+---------------+--+
| transaction |
| transaction1 |
+---------------+--+
2 rows selected (0.046 seconds)
0: jdbc:hive2://localhost:10000> load data inpath '/priyanka/txn' into table transaction;
No rows affected (0.568 seconds)
0: jdbc:hive2://localhost:10000> select * from transaction;
+-----------------+-----------------+---------------------+----------------------+-------------------+-------------------+--+
| transaction.sr | transaction.id | transaction.amount | transaction.product | transaction.city | transaction.date |
+-----------------+-----------------+---------------------+----------------------+-------------------+-------------------+--+
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
+-----------------+-----------------+---------------------+----------------------+-------------------+-------------------+--+
7 rows selected (0.154 seconds)

2) While loading the data from local:

0: jdbc:hive2://localhost:10000> LOAD DATA LOCAL INPATH 'home/cloudera/txn' INTO table transaction1;
No rows affected (2.394 seconds)
0: jdbc:hive2://localhost:10000> select * from transaction1;
+------------------+------------------+----------------------+-----------------------+--------------------+--------------------+--+
| transaction1.sr | transaction1.id | transaction1.amount | transaction1.product | transaction1.city | transaction1.date |
+------------------+------------------+----------------------+-----------------------+--------------------+--------------------+--+
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
| NULL | NULL | NULL | NULL | NULL | NULL |
+------------------+------------------+----------------------+-----------------------+--------------------+--------------------+--+
7 rows selected (1.279 seconds)
Tags:
- beeline
Labels:
- HDFS
09-19-2016 03:28 PM
I am new to Hadoop. I have Cloudera (pseudo-distributed mode) running in VirtualBox on my system, and I am unable to connect to Beeline. The error I am getting:
[cloudera@quickstart ~]$ beeline
2016-09-18 20:10:18,995 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it. Beeline version 1.1.0-cdh5.7.0 by Apache Hive
beeline> show databases;
No current connection
beeline> !connect jdbc:hive2//hostname:10000
scan complete in 5ms
scan complete in 6272ms
No known driver to handle "jdbc:hive2//hostname:10000"
beeline>
From this, I think I need to download the JDBC driver, but when using Sqoop with Hive I am able to use a JDBC connection. Also, can you please tell me how I can check whether HiveServer2 is running or not? I am working in the VirtualBox Cloudera VM, so I don't know how to download things.
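One concrete problem in the transcript above: the connect string jdbc:hive2//hostname:10000 is missing the colon after hive2. The scheme must be jdbc:hive2://host:port, which is why Beeline reports "No known driver" -- the driver is registered for the jdbc:hive2: subprotocol, and the malformed string does not match it. A small sketch that checks the URL shape before connecting (the regular expression is an illustration, not the driver's actual parser):

```python
import re

# Accept jdbc:hive2://host:port with an optional /database suffix.
HIVE2_URL = re.compile(r"^jdbc:hive2://[^/:\s]+:\d+(/\w*)?$")

def looks_like_hive2_url(url: str) -> bool:
    """Rough shape check for a HiveServer2 JDBC connect string."""
    return HIVE2_URL.match(url) is not None

print(looks_like_hive2_url("jdbc:hive2//hostname:10000"))   # missing ':' -> False
print(looks_like_hive2_url("jdbc:hive2://hostname:10000"))  # well-formed -> True
```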
Tags:
- beeline
Labels:
- Apache Hive
- MapReduce
- Quickstart VM
09-17-2016 04:04 PM
Please help me with these questions:
Q1. What is HiveServer2 in Hive?
Q2. What is the use of JDBC or ODBC with HiveServer2? For what purpose is HiveServer2 used with JDBC or ODBC?
Q3. If I want to connect to HiveServer2 via JDBC or ODBC, how can I connect? Can I connect on my Cloudera setup, which is a single node? Guide me on how to connect.
Q4. How do I connect with Beeline in Cloudera? Are the Beeline commands the same, or is there any difference? How do I connect Beeline with JDBC and ODBC?
Tags:
- Hive
Labels:
- Apache Hive