Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4082 | 10-18-2017 10:19 PM |
| | 4336 | 10-18-2017 09:51 PM |
| | 14836 | 09-21-2017 01:35 PM |
| | 1838 | 08-04-2017 02:00 PM |
| | 2417 | 07-31-2017 03:02 PM |
08-15-2016 06:07 PM
Oh, I see. If it's delivered to you by Teradata, then it's an appliance. If you bought your own hardware from a vendor like HP, Dell, or IBM, then it's commodity hardware, although the term "commodity hardware" isn't used anymore because people confuse it with personal laptops; a better term is "industry standard" hardware. If you purchase a turnkey solution where everything comes installed from the vendor, like Teradata, then it's an appliance. If you bought your own hardware and then did the install and whole setup yourself, then it's industry standard hardware (aka commodity hardware).
08-15-2016 05:14 PM
1 Kudo
@sankar rao I'm not sure I understand your question. 4 data nodes, 2 masters, and 1 edge node looks fine, and the data node specs look fine too. The namenodes, though, seem to have a lot of cores and memory: what else is running on them that requires 32 cores and 300 GB RAM? You didn't mention the edge node configuration, but that can be very flexible. Normally I recommend 3 masters: 2 namenodes, plus redundancy for everything including the Hive Metastore and HiveServer2, 3 ZooKeepers (with their own dedicated disks on each node), and 3 quorum journal nodes for namenode shared edits and failover (2x500 GB SAS disks in RAID 1). So, ideally you want three masters. Is that what your question is?
08-15-2016 04:45 PM
Like @Timothy Spann says, check whether you set it up under "User DSN" or "System DSN". If it's under System, try it under User DSN.
08-15-2016 03:52 PM
2 Kudos
@Rajinder Kaur When you set up a new ODBC connection in Windows, it allows you to "test connection". Were you able to do that? It appears that your ODBC setup on Windows is not complete; otherwise it would show up here. Another way to test it is to use some other SQL tool (like WinSQL) instead of Excel and check from there whether it works.
08-15-2016 03:03 PM
2 Kudos
@William Bolton When you have Namenode HA enabled, you have what's called a "nameservice". You specify the nameservice and let the Hadoop configuration take care of connecting to whichever namenode is active; you don't have to worry about which namenode is active in your client code. By the way, you should use client-side configuration files to connect to the cluster. When you enable HA, you would specify the following in your hdfs-site.xml so you have a nameservice (see the sketch below):

- dfs.nameservices
- dfs.ha.namenodes.[nameservice ID]
- dfs.namenode.rpc-address.[nameservice ID].[name node ID]
- dfs.namenode.http-address.[nameservice ID].[name node ID]

Check the following link: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html
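A minimal hdfs-site.xml sketch of what those properties look like. The nameservice name "mycluster", the namenode IDs "nn1"/"nn2", and the hostnames are placeholder assumptions, not values from this thread:

```xml
<!-- Hypothetical example; substitute your own nameservice, namenode IDs, and hosts -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>master1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>master2.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>master1.example.com:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>master2.example.com:50070</value>
</property>
```

With dfs.client.failover.proxy.provider.[nameservice ID] also configured, clients can simply use hdfs://mycluster and HDFS resolves whichever namenode is active.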
08-15-2016 02:55 PM
@Kaliyug Antagonist Is this an incremental or a one-time import? If it's incremental, is it possible that the timestamp on some records is getting updated in the source in a way you are not accounting for in your count?
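For reference, a minimal Sqoop incremental import in lastmodified mode, which is the variant where updated source rows get re-pulled and can change counts between runs. The connection string, table, and column names are hypothetical, not from this thread:

```sh
# Hypothetical sketch: re-imports rows whose "updated_at" timestamp advanced
# past the last recorded value, merging them on the primary key, so that
# source-side updates are reflected (and can shift your record counts).
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl_user -P \
  --table orders \
  --incremental lastmodified \
  --check-column updated_at \
  --last-value "2016-08-14 00:00:00" \
  --merge-key order_id \
  --target-dir /data/orders
```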
08-15-2016 05:28 AM
@Narasimhan Kazhiyur So the jars you are supplying include com.sg.ae.bp.BreakpointPredictionDriver? It could be a dependency issue. You can ship one uber jar that has everything, including all dependencies. I think the issue is with shipping a jar that contains all the classes.
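As an illustration, one common way to build such an uber jar is the Maven Shade plugin. A minimal pom.xml sketch, assuming a Maven build (the main class is the one from this thread; everything else is a generic assumption):

```xml
<!-- Hypothetical sketch: bundles all dependencies into one shaded (uber) jar at package time -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- Sets Main-Class in the jar manifest; class name taken from the thread -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.sg.ae.bp.BreakpointPredictionDriver</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Running `mvn package` then produces a single jar containing the driver class and all its dependencies, which is the easiest thing to ship to the cluster.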
08-15-2016 05:19 AM
@venkat v What type of instances are these? It seems like a simple connection issue, which might just be because of the lower-end instances being used. Is this the data node that's down? http://ip-172-31-9-98.ec2.internal:6188/ws/v1/timeline/metrics
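A quick way to check that kind of connection from a shell on a node in the same network. The host and port are copied from the URL above; having curl and nc installed is an assumption:

```sh
# Does the metrics endpoint on port 6188 answer HTTP at all?
curl -sv http://ip-172-31-9-98.ec2.internal:6188/ws/v1/timeline/metrics
# Or just test whether the TCP port is reachable, without speaking HTTP
nc -zv ip-172-31-9-98.ec2.internal 6188
```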
08-15-2016 05:14 AM
@Narasimhan Kazhiyur When you run Spark in yarn-cluster mode, your application jar needs to be shipped to the cluster. When you run in cluster mode, do you specify your application jar using the application-jar option? If not, please check the following link to understand how cluster mode works: http://spark.apache.org/docs/latest/submitting-applications.html
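For illustration, a minimal spark-submit invocation in yarn-cluster mode. The jar path and application arguments are placeholders; the main class is the one from this thread:

```sh
# The application jar is the positional argument after the options;
# in cluster mode spark-submit ships it to the cluster for you.
spark-submit \
  --class com.sg.ae.bp.BreakpointPredictionDriver \
  --master yarn \
  --deploy-mode cluster \
  /path/to/app-with-dependencies.jar \
  arg1 arg2
```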
08-15-2016 04:46 AM
1 Kudo
Hi @Constantin Stanca, @Tech Guy I am pretty sure the jar file is supposed to be on the local file system, unless you are aware of this having changed in newer versions. Please see the following link: http://stackoverflow.com/questions/20333135/how-to-execute-hadoop-jar-from-hdfs-filesystem
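For clarity, the usual form of the command with a local jar. The paths and class name here are hypothetical placeholders:

```sh
# "hadoop jar" reads the application jar from the local file system, not from HDFS;
# only the input/output paths refer to HDFS.
hadoop jar /home/user/myapp.jar com.example.MyDriver /hdfs/input /hdfs/output
```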