Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2626 | 12-25-2018 10:42 PM |
 | 12055 | 10-09-2018 03:52 AM |
 | 4164 | 02-23-2018 11:46 PM |
 | 1837 | 09-02-2017 01:49 AM |
 | 2166 | 06-21-2017 12:06 AM |
05-12-2017 08:01 AM
Something is wrong with your repositories, or with your Internet connection if you use repos on the Internet. Try to install just one of those packages directly from the command line; apt-get will show you a more detailed error than Ambari. Then fix whatever is needed until apt-get install works, for example:

$ apt-get install hadoop-2-6-0-3-8-client

Prepend "sudo" if needed. Also try to install a package unrelated to Hadoop, like "tree": "sudo apt-get install tree". Once both commands work, you can go back to Ambari and retry the cluster install.
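A minimal diagnostic sequence along those lines (the "apt-get update" step is an extra sanity check, not from the post, since broken repo definitions usually surface there first):

$ sudo apt-get update
$ sudo apt-get install tree
$ sudo apt-get install hadoop-2-6-0-3-8-client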
04-18-2017 02:28 AM
There are no guarantees about region placement on region servers, even if all your RSs are running as they were before the "truncate". If some of them are not available, the HBase master will place regions on the available ones.
04-17-2017 05:42 PM
1 Kudo
"Truncate" alone will remove all information about the region boundaries or what you call "spread of data", including pre-split information if any was provided. However, "import" will recreate the regions exactly as they were at the time of "export", thus effectively preserving region boundaries and the number of regions.
04-16-2017 04:52 PM
1 Kudo
You can use one of the following:

regexp_replace(s, "\\[\\d*\\]", "");
regexp_replace(s, "\\[.*\\]", "");

The former matches only digits inside the brackets, the latter any text. The escapes are required because both square brackets ARE special characters in regular expressions. For example:

hive> select regexp_replace("7 September 2015[456]", "\\[\\d*\\]", "");
7 September 2015
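For the second pattern the call is the same, only the regex changes (the input string here is just an illustration):

hive> select regexp_replace("7 September 2015[note 4]", "\\[.*\\]", "");
7 September 2015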
04-12-2017 04:05 PM
What was your query?
04-11-2017 02:27 AM
Your comment helped us too! However, on nodes running only ambari-agent the symlink should point to /usr/lib/ambari-agent/lib/resource_management.
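For example (the location of the symlink itself is an assumption here, since the comment being replied to is not shown; adjust it to wherever yours lives):

$ ln -sfn /usr/lib/ambari-agent/lib/resource_management /usr/lib/python2.6/site-packages/resource_management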
04-08-2017 04:59 AM
Yeah, that will be a lot of work, though maybe Ambari provides some automation to create the required paths based on the service name, like /etc/hbase2/conf, /var/run/hbase2, /var/log/hbase2, etc. Still, it doesn't sound like the best way to scale out services running on identical binaries (/usr/hdp/current/hbase2-client). Or maybe additional config files can be provided by /etc/hbase/conf2 pointing to /etc/hbase/HDP-VERSION/1? And by the way, we have this system of reputation points, upvoting or accepting helpful replies. Can you please consider using it on this post of mine? Thanks.
04-07-2017 07:16 AM
1 Kudo
Not sure about Spark, but IMO you can do that when you configure HDFS: put the SSD nodes in a separate Ambari config group and set the SSD space not to be used by HDFS.
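One way to express "not used by HDFS" in that config group is to simply leave the SSD mount points out of the DataNode data directories (the mount points below are illustrative):

dfs.datanode.data.dir=/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data

Here only the HDD mounts are listed, so an SSD mount like /ssd/0 stays free for other consumers.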
04-07-2017 05:41 AM
Okay, now I understand what you mean by "caching". Yes, you can remove RAID-1 on the SSDs, and then you can experiment with the One_SSD and All_SSD policies; either way there are multiple replicas, so no need for RAID. And by the way, there is no storage policy for the NN; if possible, it would be good to move the 2x400G SSDs from the NN to worker nodes.
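Storage policies are set per HDFS path, for example (the path is illustrative):

$ hdfs storagepolicies -setStoragePolicy -path /apps/hbase -policy ONE_SSD
$ hdfs storagepolicies -getStoragePolicy -path /apps/hbase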
04-07-2017 04:09 AM
Change the namenode port in your job.properties; it should be:

namenode=hdfs://sandbox.hortonworks.com:8020

You may have other errors too. By the way, there are many simple but good examples of Oozie actions in /usr/hdp/current/oozie-client/doc/examples on every Oozie client node. Copy that to your home directory and customize/test the actions for your applications. Then copy "examples" to HDFS and test your actions. No need to write the actions all by yourself.
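A sketch of that round trip (the HDFS destination and the chosen example app are illustrative):

$ cp -r /usr/hdp/current/oozie-client/doc/examples ~/
$ hdfs dfs -put ~/examples /user/$USER/
$ oozie job -oozie http://localhost:11000/oozie -config ~/examples/apps/map-reduce/job.properties -run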