Member since
09-24-2015
527
Posts
136
Kudos Received
19
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2316 | 06-30-2017 03:15 PM
 | 3359 | 10-14-2016 10:08 AM
 | 8619 | 09-07-2016 06:04 AM
 | 10414 | 08-26-2016 11:27 AM
 | 1559 | 08-23-2016 02:09 PM
03-04-2016
06:54 AM
2 Kudos
Hi:
I am trying to bring up 5 H2O Hadoop nodes, but I am receiving this error; with just 1 node it works fine.
[root@lnxbig01 hadoop]# hadoop jar h2odriver_hdp2.1.jar water.hadoop.h2odriver -libjars ../h2o.jar -mapperXmx 2g -nodes 2 -output /tmp/h2o
WARNING: Use "yarn jar" to launch YARN applications.
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 10.1.246.15]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 10.1.246.15:36832
(You can override these with -driverif and -driverport.)
Driver program compiled with MapReduce V1 (Classic)
Memory Settings:
mapred.child.java.opts: -Xms2g -Xmx2g
mapred.map.child.java.opts: -Xms2g -Xmx2g
Extra memory percent: 10
mapreduce.map.memory.mb: 2252
16/03/04 07:53:32 INFO impl.TimelineClientImpl: Timeline service address: http://lnxbig06.cajarural.gcr:8188/ws/v1/timeline/
16/03/04 07:53:32 INFO client.RMProxy: Connecting to ResourceManager at lnxbig05.cajarural.gcr/10.1.246.19:8050
16/03/04 07:53:34 INFO mapreduce.JobSubmitter: number of splits:2
16/03/04 07:53:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456130415890_1037
16/03/04 07:53:34 INFO impl.YarnClientImpl: Submitted application application_1456130415890_1037
16/03/04 07:53:34 INFO mapreduce.Job: The url to track the job: http://lnxbig05.cajarural.gcr:8088/proxy/application_1456130415890_1037/
Job name 'H2O_64985' submitted
JobTracker job ID is 'job_1456130415890_1037'
For YARN users, logs command is 'yarn logs -applicationId application_1456130415890_1037'
Waiting for H2O cluster to come up...
H2O node 10.1.246.18:54323 requested flatfile
H2O node 10.1.246.15:54321 requested flatfile
Sending flatfiles to nodes...
[Sending flatfile to node 10.1.246.18:54323]
[Sending flatfile to node 10.1.246.15:54321]
H2O node 10.1.246.18:54323 reports H2O cluster size 1
H2O node 10.1.246.15:54321 reports H2O cluster size 1
H2O node 10.1.246.15:54321 on host 10.1.246.15 exited with status -1
ERROR: At least one node failed to come up during cluster formation
ERROR: H2O cluster failed to come up
Attempting to clean up hadoop job...
16/03/04 07:53:51 INFO impl.YarnClientImpl: Killed application application_1456130415890_1037
Killed.
Any suggestions as to why it fails with more than one node? Many thanks.
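A common cause of multi-node H2O cluster formation failures (an assumption here, not something the log confirms) is a firewall blocking the H2O node ports or the driver callback port between hosts. A minimal Python sketch for checking TCP reachability between nodes; the host/port values are illustrative, taken from the log above:

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical addresses/ports from the log; adjust to your cluster.
for host, port in [("10.1.246.18", 54321), ("10.1.246.15", 54321)]:
    print(host, port, "reachable" if port_open(host, port) else "blocked")
```

Run this from each node toward every other node; if any pair reports "blocked", the cluster cannot form even though each single node starts fine.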
Labels:
- Apache Hadoop
03-03-2016
02:34 PM
2 Kudos
Hi. I am building a word cloud, but I don't know if there is a function to remove junk words like "aa", "bb", "wf", or something like that. Thanks
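As far as I know there is no built-in junk-word filter for this; a common approach is to filter the word list against your own stop list (and a minimum length) before building the cloud. A Python sketch, with an illustrative stop list:

```python
STOPWORDS = {"aa", "bb", "wf"}  # illustrative junk-word list

def clean_words(words, min_len=3):
    """Drop words that are too short or on the stop list."""
    return [w for w in words
            if len(w) >= min_len and w.lower() not in STOPWORDS]

print(clean_words(["hadoop", "aa", "pig", "wf", "spark"]))
# -> ['hadoop', 'pig', 'spark']
```

The same idea translates to a FILTER clause in Pig against a stop-word relation.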
Labels:
- Apache Pig
03-02-2016
04:02 PM
Hi: Currently I have HDP installed here:
[root@a01hop01 usr]# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 12G 39G 23% /
but I also have another filesystem:
[root@a01hop01 usr]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 12G 39G 23% /
/dev/mapper/centos-home 147G 1.9G 145G 2% /home
Is it possible to create a symbolic link or something to move the installation? Every day there is less free space. Thanks
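In general, yes: with all services stopped first, the install directory can be moved to the larger filesystem and a symlink left at the old path. A Python sketch of the mechanics, using throwaway temporary directories as stand-ins (the real paths, e.g. something under /usr/hdp moved to /home, are hypothetical placeholders):

```python
import os
import shutil
import tempfile

# Demo with temp dirs; in reality, stop all HDP services before moving.
base = tempfile.mkdtemp()
src = os.path.join(base, "usr_hdp")   # stands in for the cramped partition
dst = os.path.join(base, "home_hdp")  # stands in for the roomy partition
os.makedirs(src)
with open(os.path.join(src, "marker"), "w") as f:
    f.write("data")

shutil.move(src, dst)  # move the install to the bigger filesystem
os.symlink(dst, src)   # leave a symlink at the old location

# Reads through the old path still work:
print(open(os.path.join(src, "marker")).read())  # -> data
```

Caveat: some components hard-code paths in their configs, so verify each service after the move rather than relying on the symlink alone.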
Labels:
- Hortonworks Data Platform (HDP)
03-01-2016
08:13 PM
1 Kudo
Hi: I have a question about data layers on HDFS. If I need to build derived products, is it better to process the raw files with Pig, Spark, or R, convert them into transformed files, and load those into Hive, or is it better to query the raw files directly from an analytics tool? Thanks
Labels:
- Apache Hadoop
03-01-2016
08:05 PM
3 Kudos
Hi: the last solution was fine, but this also works: orders4 = FOREACH orders3 GENERATE $0 as freq, (chararray) ((word matches '.*..*') ? SUBSTRING(word,INDEXOF(word,'.',0)+1,LAST_INDEX_OF(word,'.')) : $1) as word;
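For reference, the same transformation sketched in Python (not part of the original Pig script): if the word contains a dot, keep the substring between the first and last dot, so a domain like www.amazon.es reduces to its middle label.

```python
def middle_of_domain(word):
    """Mimic the Pig expression: if the word contains a dot,
    take the substring between the first and the last dot."""
    if "." in word:
        first = word.index(".")
        last = word.rindex(".")
        return word[first + 1:last]
    return word

print(middle_of_domain("www.amazon.es"))  # -> amazon
```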
03-01-2016
05:46 PM
3 Kudos
Hi: is it possible to split www.amazon.es in Pig? I am doing this but it doesn't work: orders4 = FOREACH orders3 GENERATE $0 as freq, STRSPLIT($1,'.') as word;
It just prints $0. Any suggestions?
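A likely cause (my reading, not stated in the thread): Pig's STRSPLIT treats the delimiter as a regular expression, so an unescaped '.' matches every character and the split yields nothing useful; escaping it as '\\.' should split on literal dots. The same behaviour demonstrated in Python:

```python
import re

s = "www.amazon.es"
print(re.split(r".", s))   # '.' matches every char -> only empty strings
print(re.split(r"\.", s))  # escaped dot splits on literal dots
# -> ['www', 'amazon', 'es']
```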
Labels:
- Apache Pig
02-25-2016
12:55 PM
1 Kudo
Hi: Yes, that is the tutorial I follow, but I think the SparkR lib doesn't exist on my server (I installed R correctly a long time ago): [root@lnxbig05 spark]# pwd
/usr/hdp/2.3.2.0-2950/spark
[root@lnxbig05 spark]# ls -lrt
total 120
-rwxr-xr-x 1 root root 3624 Sep 30 23:32 README.md
-rwxr-xr-x 1 root root 507 Sep 30 23:32 README
-rwxr-xr-x 1 root root 22559 Sep 30 23:32 NOTICE
-rwxr-xr-x 1 root root 50971 Sep 30 23:32 LICENSE
drwxr-xr-x 2 root root 4096 Oct 1 00:33 doc
-rwxr-xr-x 1 root root 61 Oct 1 00:33 RELEASE
drwxr-xr-x 3 root root 4096 Jan 27 18:10 data
drwxr-xr-x 3 root root 4096 Jan 27 18:10 examples
drwxr-xr-x 3 root root 4096 Jan 27 18:10 ec2
drwxr-xr-x 2 root root 4096 Jan 27 18:11 lib
lrwxrwxrwx 1 root root 19 Jan 27 18:11 work -> /var/run/spark/work
drwxr-xr-x 2 root root 4096 Jan 27 18:11 sbin
drwxr-xr-x 6 root root 4096 Jan 27 18:11 python
lrwxrwxrwx 1 root root 25 Jan 27 18:11 conf -> /etc/spark/2.3.2.0-2950/0
drwxr-xr-x 2 root root 4096 Feb 25 13:07 bin
and my libs:
-rwxr-xr-x 1 root root 1809447 Oct 1 00:06 datanucleus-rdbms-3.2.9.jar
-rwxr-xr-x 1 root root 1890075 Oct 1 00:06 datanucleus-core-3.2.10.jar
-rwxr-xr-x 1 root root 339666 Oct 1 00:06 datanucleus-api-jdo-3.2.6.jar
-rwxr-xr-x 1 root root 93644684 Oct 1 00:18 spark-examples-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar
-rwxr-xr-x 1 root root 4154588 Oct 1 00:22 spark-1.4.1.2.3.2.0-2950-yarn-shuffle.jar
-rwxr-xr-x 1 root root 167557539 Oct 1 00:30 spark-assembly-1.4.1.2.3.2.0-2950-hadoop2.7.1.2.3.2.0-2950.jar
So, I don't know if I need any additional lib. Thanks
02-25-2016
12:30 PM
2 Kudos
Hi: I am testing SparkR from HDP and I get this error from the command line:
./sparkR
> Sys.setenv(SPARK_HOME = "/usr/hdp/2.3.2.0-2950/spark");
> .libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
> sc <- sparkR.init()
Error: could not find function "sparkR.init"
Do I need to do something more?
Labels:
- Apache Spark
02-23-2016
03:14 PM
2 Kudos
Hi: my data scientist wants to use Spark, so he needs an IDE or something similar, and he wants to use R libraries. Any ideas? Thanks
Labels:
- Apache Spark