Member since: 11-27-2017
Posts: 18
Kudos Received: 0
Solutions: 0
03-23-2018
03:20 AM
Thank you, that works!!
03-22-2018
12:18 AM
I am trying to build a pipeline using StreamSets that pushes records into HDFS, after which a Hive query executes. I am running the query load data inpath '/path/in/hdfs' into table table_name and getting this exception:

Failed to execute queries. Details: Failed to execute query 'load data inpath '/tmp/out/*' into table smsc_data_main_nw_success'. Reason: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.hadoop.hive.ql.metadata.HiveException: Access denied: Unable to move source hdfs://hostname:8020/tmp/out/file_name.done to destination hdfs://hostname:8020/user/hive/warehouse/test.db/table_name: Permission denied by sticky bit: user=anonymous, path="/tmp/out/file_name.done":amank:supergroup:-rw-rw-r--, parent="/tmp/out":amank:supergroup:drwxrwxrwt

I tried this step:

hadoop fs -chmod g+w /user/hive/warehouse/test.db

but I am still facing the same issue.
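For context, the sticky bit in the error message is the key detail: on a drwxrwxrwt directory, only a file's owner (or the superuser) may move or delete a file, and LOAD DATA INPATH moves the source files rather than copying them. Here the files belong to amank while the Hive session runs as anonymous, which is why chmod g+w on the warehouse directory cannot help. A quick local sketch of the same semantics (plain shell on a local filesystem, not HDFS):

```shell
# Sticky-bit semantics demo on the local FS; the HDFS behaviour is analogous.
demo=$(mktemp -d)
chmod 1777 "$demo"           # world-writable with the sticky bit set, like /tmp/out
ls -ld "$demo" | cut -c1-10  # drwxrwxrwt: the trailing 't' is the sticky bit
# In such a directory, only a file's owner (or the superuser) may move or
# delete a file, regardless of group-write permission elsewhere.
chmod o-t "$demo"            # clearing the bit restores ordinary rwx rules
ls -ld "$demo" | cut -c1-10  # drwxrwxrwx
```

On HDFS, the analogous fixes would be chowning the staged files to the user the Hive session runs as (e.g. hdfs dfs -chown -R anonymous /tmp/out), or connecting to HiveServer2 as amank; both are assumptions to verify against your setup.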
Labels:
- Apache Hive
- HDFS
03-12-2018
06:25 AM
After updating the result set, I am still getting the same error: java.sql.SQLFeatureNotSupportedException: [Simba][JDBC](10220) Driver not capable. This is the code:

public String list() throws SQLException, ClassNotFoundException {
    List<Biler> bilers = new ArrayList<Biler>();
    final String DRIVER_CLASS = "com.cloudera.impala.jdbc4.Driver";
    final String CONNECTION_URL = "jdbc:impala://host_name:21050;AuthMech=0;";
    Class.forName(DRIVER_CLASS);
    Biler biler = null;
    Connection connection = DriverManager.getConnection(CONNECTION_URL);
    Statement stmt = connection.createStatement(ResultSet.TYPE_FORWARD_ONLY,
            ResultSet.CONCUR_READ_ONLY);
    ResultSet rs = stmt.executeQuery("select senderid, procdate, count(*) as"
            + " count1 from smsc.smsc_data_par group by senderid, procdate");
    try {
        while (rs.next()) {
            biler = new Biler();
            biler.setId(rs.getString("senderid"));
            biler.setName(rs.getString("procdate"));
            biler.setValue(rs.getString("count1"));
            bilers.add(biler);
        }
    } finally {
        if (rs != null) {
            try {
                rs.close();
            } catch (SQLException e) {}
        }
        if (stmt != null) {
            try {
                stmt.close();
            } catch (SQLException e) {}
        }
        if (connection != null) {
            try {
                connection.close();
            } catch (SQLException e) {}
        }
    }
    Gson gson = new Gson();
    return gson.toJson(bilers);
}

Any suggestions on how to resolve this issue?
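As an aside on the cleanup code: the finally block in the snippet closes the Connection before the Statement. A try-with-resources block handles this automatically, closing resources in reverse order of declaration. A minimal sketch with stand-in resources (the JDBC objects are replaced by a hypothetical Res class, since no Impala server is assumed here; Connection, Statement, and ResultSet are all AutoCloseable and would slot in the same way):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    static final List<String> closed = new ArrayList<>();

    // Stand-in for Connection/Statement/ResultSet, which are all AutoCloseable.
    static class Res implements AutoCloseable {
        final String name;
        Res(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Res connection = new Res("connection");
             Res stmt = new Res("stmt");
             Res rs = new Res("rs")) {
            // ... iterate over rs and build the JSON here ...
        }
        // try-with-resources closes in reverse declaration order.
        System.out.println(closed);  // prints [rs, stmt, connection]
    }
}
```

This removes the nested try/catch boilerplate entirely and guarantees the ResultSet is closed before its Statement, and the Statement before its Connection.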
03-11-2018
11:53 PM
I tried to use only a forward-only and read-only ResultSet, and I am still getting the same error. This is what I have tried:

@GET
@Produces(MediaType.APPLICATION_JSON)
public String list() throws SQLException, ClassNotFoundException {
    List<Biler> bilers = new ArrayList<Biler>();
    final String DRIVER_CLASS = "com.cloudera.impala.jdbc4.Driver";
    final String CONNECTION_URL = "jdbc:impala://SERVER_ADDRESS:21050;AuthMech=0;";
    Class.forName(DRIVER_CLASS);
    Biler biler = null;
    Connection connection = DriverManager.getConnection(CONNECTION_URL);
    // Statement stmt = connection.createStatement();
    Statement stmt = connection.createStatement(ResultSet.CONCUR_READ_ONLY,
            ResultSet.TYPE_FORWARD_ONLY);
    ResultSet rs = stmt.executeQuery("select senderid, procdate, count(*) as count1 from smsc.smsc_data_par group by senderid, procdate");
    try {
        while (rs.next()) {
            biler = new Biler();
            biler.setId(rs.getString("senderid"));
            biler.setName(rs.getString("procdate"));
            biler.setValue(rs.getString("count1"));
            bilers.add(biler);
        }
    } finally {
        if (rs != null) {
            try {
                rs.close();
            } catch (SQLException e) {}
        }
        if (stmt != null) {
            try {
                stmt.close();
            } catch (SQLException e) {}
        }
        if (connection != null) {
            try {
                connection.close();
            } catch (SQLException e) {}
        }
    }
    Gson gson = new Gson();
    return gson.toJson(bilers);
}
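One thing worth double-checking in the snippet above: java.sql.Connection.createStatement(int resultSetType, int resultSetConcurrency) takes the result-set type first and the concurrency second, and the call above passes them in the opposite order. That hands the driver CONCUR_READ_ONLY (1007) where a type constant (1003-1005) is expected, which a driver can legitimately reject as "not capable". A small sketch of the constants involved:

```java
import java.sql.ResultSet;

public class CreateStatementArgs {
    public static void main(String[] args) {
        // The JDBC specification fixes these constant values:
        System.out.println(ResultSet.TYPE_FORWARD_ONLY);  // 1003 - a result-set *type*
        System.out.println(ResultSet.CONCUR_READ_ONLY);   // 1007 - a *concurrency* mode
        // Correct call shape (connection is hypothetical here, no server assumed):
        // Statement stmt = connection.createStatement(
        //         ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    }
}
```

Note also that a plain connection.createStatement() with no arguments already yields a forward-only, read-only statement, so the two-argument form may not be needed at all.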
02-07-2018
12:05 AM
I am trying to load data into a Hive table where the delimiter is "||-||", and to store it in Parquet file format. The first step, loading the data, is done; in the next stage, when I tried to convert to Parquet, I got the error "Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found". This is what I have tried:

create table sms_testInit (
sMessageId String ,
sResellerName String ,
sDistributorName String ,
sUserName String ,
sSender String ,
sAltSender String ,
sMessage String ,
sType String ,
iLength INT,
sMobileno String ,
iDCostPerSms decimal(10,5) ,
iDCreditsDeducted decimal(10,5) ,
iRCostPerSms decimal(10,5) ,
iRCreditsDeducted decimal(10,5),
iCostPerSms decimal(10,5) ,
iCreditsDeducted decimal(10,5),
iBatchId INT
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES ("field.delim"="||-||") ;
load data inpath '/path/to_hdfs/raw_data' into table sms_testInit;

At this stage I did not get any error. While converting this file format to Parquet, I received the error:

create table sms_test (
sMessageId String ,
sResellerName String ,
sDistributorName String ,
sUserName String ,
sSender String ,
sAltSender String ,
sMessage String ,
sType String ,
iLength INT,
sMobileno String ,
iDCostPerSms decimal(10,5) ,
iDCreditsDeducted decimal(10,5) ,
iRCostPerSms decimal(10,5) ,
iRCreditsDeducted decimal(10,5),
iCostPerSms decimal(10,5) ,
iCreditsDeducted decimal(10,5),
iBatchId INT
) stored as PARQUET;
insert into table sms_test select * from sms_testInit;

Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found

I tried to load hive-contrib-0.14.0.jar but was unable to find the classpath.
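A common cause of this pattern (the CREATE TABLE succeeds but the INSERT ... SELECT fails) is that hive-contrib is visible to HiveServer2's parser but not shipped to the tasks that execute the query. One workaround is to register the jar explicitly in the session before the insert; the path below is an assumption, so locate the actual hive-contrib jar on your HiveServer2 host first:

```sql
-- Path is hypothetical; find the real one with e.g.
--   find / -name 'hive-contrib*.jar' 2>/dev/null
ADD JAR /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar;
insert into table sms_test select * from sms_testInit;
```

A more permanent alternative some setups use is pointing the hive.aux.jars.path configuration (or the HiveServer2 auxiliary-jars setting) at the directory containing the jar so every session picks it up; verify the exact mechanism against your distribution's documentation.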
Labels:
- Apache Hive
02-02-2018
06:23 AM
We are trying to build a web application using Spark, i.e. one that returns results in JSON which are processed by the Spark engine or Impala.
Labels:
- Apache Hadoop
- Apache Spark
11-29-2017
02:02 AM
Can you please mention the steps?