Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2664 | 12-25-2018 10:42 PM
| 12213 | 10-09-2018 03:52 AM
| 4206 | 02-23-2018 11:46 PM
| 1891 | 09-02-2017 01:49 AM
| 2213 | 06-21-2017 12:06 AM
04-05-2016
04:26 AM
Hi Artem, PHD-3.0 is equivalent to HDP-2.2.4, so a script referring to the correct paths should work. What do you mean by "hdp memory script" on Yarn? Edit: phd-configuration-utils.py exists on PHD-3.0, here is a link, scroll down to "PHD Utility Script". Example:
python phd-configuration-utils.py -c 16 -m 64 -d 4 -k True
where "-c" is the number of cores, "-m" is the RAM on worker nodes, "-d" is the number of disks, and the final Boolean ("-k") indicates whether HBase is installed.
04-04-2016
03:34 AM
Hi @Rajendra Vechalapu, glad to hear that it worked! If so, could you consider upvoting and/or accepting my answer? By the way, HCC works similarly to Stack Overflow: it serves as a platform for users to ask and answer questions related to HDP and Hadoop in general, and, through membership and active participation, to vote questions and answers up or down, accept answers, and edit questions and answers. In addition to Q&A there are also articles, ideas, and repos sections. Thanks!
04-04-2016
02:10 AM
Hi @suno bella, from the root command-line prompt try switching to the hdfs user by running "su - hdfs"; no password is needed. Then try to run your "cluster" command. For the slave's address, use the response from "hostname -f" (run as root).
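As a sketch, the sequence might look like this (the cluster command and the hostname shown are only placeholders):
su - hdfs                 # from the root prompt; switches to the hdfs user, no password needed
<your cluster command>    # run it as the hdfs user
exit                      # back to root
hostname -f               # as root on the slave; prints the fully qualified name to use for its address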
04-03-2016
02:29 PM
1 Kudo
@Rajendra Vechalapu, I just realized that not all the details are explained anywhere. On Windows you need an ssh client; you can download PuTTY, for example. Then in PuTTY enter "localhost" as the hostname and 2222 as the port, and try to log in. You can also try the web ssh client at http://localhost:4200.
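If you prefer a command-line ssh client instead of PuTTY (for example on a Mac or Linux host), the equivalent connection is:
ssh -p2222 root@localhost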
04-03-2016
01:58 PM
1 Kudo
Hi @Rohit Sureka, I just tried (HDP-2.3.2 sandbox) and for me the timestamp works per the documentation, with up to 9 decimal places (nanoseconds). Can you check your input data and the delimiters of your fields, and confirm they are as expected? Here is my test. My table:
hive> create table ts(id int, t timestamp) row format delimited fields terminated by ',' stored as textfile location '/user/it1/hive/ts';
A few lines of my input file:
11,2015-11-01 21:10:00
12,2015-11-01 21:10:00.1
15,2015-11-01 21:10:00.123
And a select/order by command:
hive> select * from ts order by t;
OK
11 2015-11-01 21:10:00
12 2015-11-01 21:10:00.1
25 2015-11-01 21:10:00.1190011
37 2015-11-01 21:10:00.12
15 2015-11-01 21:10:00.123
31 2015-11-01 21:10:00.1234
17 2015-11-01 21:10:00.12345
19 2015-11-01 21:10:00.123456789
21 2015-11-01 21:10:00.490155
57 2015-11-01 21:10:00.60015589
Time taken: 2.34 seconds, Fetched: 10 row(s)
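In case it helps to reproduce the test, the input file can be copied under the table's location like this (the local file name "ts.txt" is just an example):
hdfs dfs -mkdir -p /user/it1/hive/ts
hdfs dfs -put ts.txt /user/it1/hive/ts/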
04-03-2016
01:55 PM
Log in to your sandbox using "ssh -p2222 root@localhost" and then run the wget and yum commands from the command line, as well as all other commands from the tutorial.
04-03-2016
01:36 PM
Hi @Rajendra Vechalapu, you need to log in as root to your Sandbox, or to the machine where you want to install Spark, and run the wget command there. It also assumes that your OS is CentOS or RHEL 6. I just tried it and it works (the URL is still valid). After that run "yum install" and follow all the other steps on that page before trying Spark from the command line (test the command line first) or from Zeppelin.
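As a rough outline of those steps (the repo URL and package name below are placeholders; use the exact ones given on the tutorial page):
ssh -p2222 root@localhost
wget <repo-url-from-the-tutorial> -O /etc/yum.repos.d/spark.repo
yum install <spark-package-from-the-tutorial>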
04-03-2016
06:05 AM
There is no video; it's an animated GIF, located here: http://i.giphy.com/l4Ki1Ng3uxdTnUTra.gif
04-03-2016
05:23 AM
@suno bella, no problem. If you found my answers useful, please consider accepting and/or up-voting my first answer above. Thanks!
04-03-2016
05:07 AM
No problem, in that case your Hortonworks VM will be your edge node. Just create a new folder, under /usr/share for example.
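For example (the folder name is just an illustration):
mkdir -p /usr/share/myfiles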