Member since: 08-15-2017
Posts: 31
Kudos Received: 29
Solutions: 3

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3065 | 01-30-2018 07:47 PM
 | 1204 | 08-31-2017 02:05 AM
 | 1060 | 08-25-2017 05:35 PM
08-19-2022
04:26 AM
Hello @ssubhas, the above worked. However, when we try the same with LazySimpleSerDe, it is able to escape the delimiter but loads a few NULL values at the end. PFB a snippet of the statement I used:

CREATE TABLE test1(5columns string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES(
  'separatorChar'='|',
  'escapeChar'='\\'
)
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';

NOTE: I also tried field.delim=| and format.serialization=|. It works when the SerDe properties are not mentioned and we use the ESCAPED BY clause as you suggested. Is there any way to make it work with LazySimpleSerDe as well? (The data is pipe delimited and may also contain pipes within the data.) Please suggest and help.
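For reference, a minimal sketch of the ESCAPED BY form that does work (col1..col5 are placeholder column names, since the real column list isn't shown above). ROW FORMAT DELIMITED is backed by LazySimpleSerDe under the hood; as far as I know, its direct SERDEPROPERTIES equivalents are 'field.delim' and 'escape.delim', whereas 'separatorChar'/'escapeChar' are usually OpenCSVSerde properties:

-- Sketch only: col1..col5 are placeholder column names
CREATE TABLE test1 (col1 string, col2 string, col3 string, col4 string, col5 string)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '|'
  ESCAPED BY '\\'
STORED AS TEXTFILE;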
04-15-2018
07:54 AM
@Ramya Jayathirtha Okay, the way it works is: in simple terms, the main thread of your code launches another thread in which your streaming-query logic runs. Meanwhile, your main code is blocked by initDF.awaitTermination().

sparkSession.sql("select * from initDF").show() => this code runs on the main thread, and it only reaches there the first time.

So update your code to:

StreamingQuery initDF = df.writeStream()
    .outputMode("append")
    .format("memory")
    .queryName("initDF")
    .trigger(Trigger.ProcessingTime(1000))
    .start();

while (initDF.isActive()) {
    Thread.sleep(10000); // the enclosing method needs to declare or handle InterruptedException
    sparkSession.sql("select * from initDF").show();
}

Now the main thread of your code will go through the loop over and over again and query the table.
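A slightly tighter variant of the same loop, as a sketch that assumes the same initDF and sparkSession as above: StreamingQuery.awaitTermination(timeoutMs) returns false while the query is still running, so it can replace the unconditional sleep.

// Polls every 10 seconds until the query stops; awaitTermination throws
// StreamingQueryException, so declare or catch it in the enclosing method.
while (!initDF.awaitTermination(10000)) {
    sparkSession.sql("select * from initDF").show();
}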
11-13-2017
05:53 PM
@Dinesh Chitlangia That helped. Thank you..!!
11-30-2018
02:17 PM
Hello folks, please help me with the following query in Hive. There are two tables, T1 and T2. If a customer buys all the products, find the total price he has to pay after the discount.

Table : T1
================================
ProductID | ProductName | Price
--------------------------------
1 | p1 | 1000
2 | p2 | 2000
3 | p3 | 3000
4 | p4 | 4000
5 | p5 | 5000

Table : T2
=======================
ProductID | Discount %
-----------------------
1 | 10
2 | 15
3 | 10
4 | 15
5 | 20
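One possible way to write this in Hive, as a sketch: it assumes the tables are created as t1 and t2 with the columns laid out above, and that the discount column stores the percentage as an integer.

-- Total payable across all products after applying each product's discount
SELECT SUM(t1.price - (t1.price * t2.discount / 100)) AS total_after_discount
FROM t1
JOIN t2
  ON t1.productid = t2.productid;

With the sample rows above this works out to 12700 (900 + 1700 + 2700 + 3400 + 4000).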
10-27-2017
03:58 AM
2 Kudos
The article you mentioned only talks about the non-HA scenario. For an HA scenario:

1. You must add one line per NameNode. For example, if you have 2 NameNodes nn1 and nn2 and dfs.internal.nameservices=nnha, then:
dfs.namenode.servicerpc-address.<dfs.internal.nameservices>.nn1=<namenodehost1>:8021
dfs.namenode.servicerpc-address.<dfs.internal.nameservices>.nn2=<namenodehost2>:8021
2. Stop all ZKFCs and then, from any NameNode host, run the command: hdfs zkfc -formatZK
3. Restart ZKFC and HDFS.
4. You will now be able to see the metrics in Grafana after a few minutes.
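As an illustration of how those templates resolve, here is a sketch of the corresponding hdfs-site.xml entries for the example above (the nameservice nnha, hosts namenodehost1/namenodehost2, and port 8021 are placeholders taken from the example, not recommendations):

<!-- Sketch: placeholder values from the example above -->
<property>
  <name>dfs.namenode.servicerpc-address.nnha.nn1</name>
  <value>namenodehost1:8021</value>
</property>
<property>
  <name>dfs.namenode.servicerpc-address.nnha.nn2</name>
  <value>namenodehost2:8021</value>
</property>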
10-16-2017
10:45 PM
Thank you @Dinesh Chitlangia. This solved the issue.
03-06-2018
05:50 PM
@Shu How is the number of mappers/reducers for a given query decided at runtime? Does it depend on how many join, group by, or order by clauses are used in the query? If yes, then please let me know how many mappers and reducers are launched for the below query:

select name, count(*) as cnt from test group by name order by name;
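For context, a rough sketch of how this can be inspected and influenced (the values below are illustrative only): the number of mappers generally follows the number of input splits, the number of reducers is derived from the data volume per reducer, and a global ORDER BY typically runs its final sort on a single reducer.

-- Illustrative settings (example values, not recommendations)
set mapreduce.input.fileinputformat.split.maxsize=256000000;  -- smaller splits => more mappers
set hive.exec.reducers.bytes.per.reducer=256000000;           -- less data per reducer => more reducers
set hive.exec.reducers.max=1009;                              -- upper bound on reducer count

-- EXPLAIN shows the stages the query compiles to; on MapReduce this query
-- usually becomes two jobs: one for the GROUP BY aggregation and one for the
-- global ORDER BY, which sorts on a single reducer.
explain select name, count(*) as cnt from test group by name order by name;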
09-15-2017
09:50 PM
Thank you @Dinesh Chitlangia !! That helped.
08-24-2017
10:12 PM
1 Kudo
Thank you..!! It worked.
08-28-2017
07:49 PM
Thank you @Nandish B Naidu..!! The solution worked.