Member since: 07-25-2016
Posts: 40
Kudos Received: 5
Solutions: 0
12-14-2016
09:49 AM
Thanks for your brilliant, detailed answer.
12-14-2016
09:48 AM
The source is a SQL Server table and the destination is a Hive table. I haven't configured any permissions in Hadoop yet, so my problem may be due to PolyBase limiting the number of inserted rows. Thanks for your help.
12-13-2016
02:09 PM
2 Kudos
When we use PolyBase (a SQL Server 2016 feature) and create an external table that maps to a table in Hive, inserting data into the external table inserts that data into the associated Hive table.
My question is: is there a limit on the maximum number of records that can be inserted into an external table?
I mean, when I insert data into the external table from another SQL Server table that has more than 30,000 records, I encounter this error:
Cannot execute the query "Remote Query" against OLE DB provider "SQLNCLI11" for linked server "SQLNCLI11". 110802; An internal DMS error occurred that caused this operation to fail. Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Common.ExternalAccess.HdfsAccessException, Message: Java exception raised on call to HdfsBridge_DestroyRecordWriter: Error [0
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6379)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6344)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:822)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:971)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
] occurred while accessing external file.
Inserting fewer than 30,000 records works fine and the data is inserted into Hive.
Could this error be caused by one of the following?
1- Is there a limit on the number of records that can be inserted into an external table?
2- Is there a limit in the PolyBase configuration?
3- Is there some other problem in Hive?
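For reference, a minimal sketch of the setup described above, assuming a default PolyBase installation against a Hadoop cluster; every object name, the NameNode host/port, and the HDFS path are hypothetical:

-- PolyBase export must be enabled once per instance before inserting into external tables.
EXEC sp_configure 'allow polybase export', 1;
RECONFIGURE;

-- External data source pointing at the Hadoop cluster (host and port assumed).
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://namenode-host:8020'
);

-- File format matching the Hive table's storage; delimited text is assumed here.
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- External table mapped to the HDFS directory backing the Hive table (path assumed).
CREATE EXTERNAL TABLE dbo.SalesExternal (
    Id INT,
    Amount DECIMAL(18, 2)
)
WITH (
    LOCATION = '/apps/hive/warehouse/sales',
    DATA_SOURCE = HadoopCluster,
    FILE_FORMAT = TextFileFormat
);

-- The insert in question: rows flow from a local SQL Server table through
-- PolyBase into files under the Hive table's directory.
INSERT INTO dbo.SalesExternal (Id, Amount)
SELECT Id, Amount
FROM dbo.SalesLocal;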
Labels: Apache Hadoop, Apache Hive
12-08-2016
10:55 AM
Another question, please: what is the benefit of installing Hive Server on a node other than the name node? If we choose to install Hive Server on another node rather than the name node, will Hive commands be handled from that node, or must we install Hive on the name node first?
12-08-2016
08:57 AM
I don't know a lot about Hive.
I have explored many tutorials about Hive, and all of them talk about Hive command syntax, but I want to understand the ecosystem.
Suppose we have a cluster, install the Hive service on the name node only, create a table in Hive, and then insert 10 records into it:
Is the Hive table going to be replicated across all the cluster's data nodes when the replication factor includes all data nodes, or will it exist only on the name node with no replication?
Should Hive be installed on all cluster nodes?
Is automatic replication only for HDFS files and not for Hive?
Is a Hive table equivalent to an HDFS file?
How is a Hive table represented, and how do we find it in HDFS if we didn't specify a location in its creation statement? (See the sketch below.)
Can the blocks that store a Hive table be explored and understood the way HDFS files can?
Can you give me links, please?
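To make the scenario concrete, a minimal HiveQL sketch; the table name and values are hypothetical, and the warehouse path in the comment is only the common default:

-- A managed table created without an explicit LOCATION clause.
CREATE TABLE demo_table (id INT, name STRING);

-- Insert a record the way described above.
INSERT INTO demo_table VALUES (1, 'a');

-- Ask Hive where the table's data actually lives in HDFS. The output's
-- Location field typically points somewhere like
-- hdfs://<namenode>/apps/hive/warehouse/demo_table (the default varies by distribution).
DESCRIBE FORMATTED demo_table;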
Labels: Apache Hive
12-08-2016
08:51 AM
We know that Hadoop's main purpose is to increase performance by adding more data nodes, but my question is: if we want only to retrieve the data, without any need to process or analyze it, will adding more data nodes be useful, or does it not increase performance at all because we have only retrieve operations, without any computations or MapReduce jobs?
Labels: Apache Hadoop
07-30-2016
11:49 AM
In my research I always encounter these tools: Hive, Hue, and Sqoop. Each one has specific installation requirements, a specific operating system version, a specific Hadoop version, and a specific environment to work with, such as Cloudera or Ambari, but I am still not able to understand the relation between these tools. I mean, is Hive part of Hue, or can it be a standalone tool for importing data from and exporting data to SQL Server? Is Sqoop another tool for data processing, and can it be a standalone tool? I would like someone to explain this Hadoop infrastructure and the relation between these tools, which I can't find directly while searching Google. What is the best tool for importing data from and exporting data to SQL Server?
Labels: Apache Hadoop