Member since
12-06-2016
40
Posts
5
Kudos Received
3
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1770 | 01-03-2017 02:53 PM
 | 2709 | 12-29-2016 05:02 PM
 | 8655 | 12-22-2016 06:34 PM
01-03-2017
02:53 PM
@Jay SenSharma Hi, I added the IP address of the VDS to my local hosts file and now it works 🙂 Thank you for your help.
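For reference, the hosts-file fix described above amounts to one mapping line (on Windows 10 the file is C:\Windows\System32\drivers\etc\hosts; the IP and hostname below are the ones given elsewhere in this thread):

```
# Map the remote VDS hostname to its public IP so the local browser can resolve it
197.12.8.49    vds001.databridge.tn
```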
01-03-2017
01:05 PM
@Jay SenSharma The current cluster consists of one machine. Please note that the machine is a remote VDS (virtual dedicated server) that I can access only via ssh or the web; my local machine runs Windows 10.

Output of the hostname commands:

root@vds001:~# hostname
vds001
root@vds001:~# hostname -f
vds001.databridge.tn

Output of the API call http://197.12.8.49:8080/api/v1/clusters/DataBridge/hosts/ :

{
  "href" : "http://197.12.8.49:8080/api/v1/clusters/DataBridge/hosts/",
  "items" : [
    {
      "href" : "http://197.12.8.49:8080/api/v1/clusters/DataBridge/hosts/vds001.databridge.tn",
      "Hosts" : {
        "cluster_name" : "DataBridge",
        "host_name" : "vds001.databridge.tn"
      }
    }
  ]
}

The Zeppelin UI call http://197.12.8.49:9995 works (I can't use the hostname instead of the IP address). Should I configure the hosts file on my local machine? Regards,
01-03-2017
09:44 AM
Hi, When trying to access the Zeppelin view from Ambari, I get the following error: hostname's server DNS address could not be found. I can access Zeppelin through a separate page via port 9995, but I would like to know why it's impossible from Ambari. Please find the screenshots enclosed: zeppelin-view-1.jpg, zeppelin-view-2.jpg. Regards,
Labels:
- Apache Zeppelin
12-29-2016
05:02 PM
Some errors in the lab. The Pig script must be as follows:

a = LOAD 'geolocation' USING org.apache.hive.hcatalog.pig.HCatLoader();
b = FILTER a BY event != 'normal';
c = FOREACH b GENERATE driverid, (int) '1' as occurance;
d = GROUP c BY driverid;
e = FOREACH d GENERATE group as driverid, SUM(c.occurance) as totevents;
g = LOAD 'drivermileage' using org.apache.hive.hcatalog.pig.HCatLoader();
h = join e by driverid, g by driverid;
final_data = foreach h generate $0 as driverid, $1 as totevents, $3 as totmiles, (float) $3/$1 as riskfactor;
store final_data into 'riskfactor' using org.apache.hive.hcatalog.pig.HCatStorer();

The riskfactor table in Hive must be as follows:

CREATE TABLE riskfactor (driverid string, totevents bigint, totmiles double, riskfactor float)
STORED AS ORC;
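The riskfactor expression in the script is simply total miles divided by total events per driver. As a quick sanity check of that arithmetic outside the cluster, here is a small awk sketch over hypothetical sample rows (the driver IDs and totals are made-up values, not data from the lab):

```shell
# Hypothetical per-driver totals: driverid totevents totmiles
printf 'A12 4 12000\nA13 2 9000\n' > /tmp/totals.txt
# riskfactor = totmiles / totevents, mirroring the Pig expression (float) $3/$1
awk '{ printf "%s %.1f\n", $1, $3 / $2 }' /tmp/totals.txt
# → A12 3000.0
# → A13 4500.0
```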
12-29-2016
10:51 AM
@milind pandit My problem is not linked to data types. Please find the entire log file enclosed: job-1482423183850-0021-logs.txt
12-29-2016
10:27 AM
@WeiHsiang Tseng, Hi, I'm facing the same problem. Did you resolve it? Thanks.
12-29-2016
09:42 AM
Hi, using Ambari 2.4.1.0 and HDP 2.5, I'm trying to execute the first lab instruction:

a = LOAD 'geolocation' USING org.apache.hive.hcatalog.pig.HCatLoader();

I add the following argument to let Pig know the HCatLoader() class: -useHCatalog

I get the following log. Can anyone help me fix this? Thanks.

ls: cannot access /hadoop/yarn/local/usercache/admin/appcache/application_1482423183850_0022/container_1482423183850_0022_01_000002/hive.tar.gz/hive/lib/slf4j-api-*.jar: No such file or directory
ls: cannot access /hadoop/yarn/local/usercache/admin/appcache/application_1482423183850_0022/container_1482423183850_0022_01_000002/hive.tar.gz/hive/hcatalog/lib/*hbase-storage-handler-*.jar: No such file or directory
WARNING: Use "yarn jar" to launch YARN applications.
16/12/29 10:28:37 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
16/12/29 10:28:37 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
16/12/29 10:28:37 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2016-12-29 10:28:37,537 [main] INFO org.apache.pig.Main - Apache Pig version 0.16.0.2.5.3.0-37 (rexported) compiled Nov 30 2016, 02:28:11
2016-12-29 10:28:37,537 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/yarn/local/usercache/admin/appcache/application_1482423183850_0022/container_1482423183850_0022_01_000002/pig_1483003717522.log
2016-12-29 10:28:38,970 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/yarn/.pigbootup not found
2016-12-29 10:28:39,216 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://vds002.databridge.tn:8020
2016-12-29 10:28:41,059 [main] INFO org.apache.pig.PigServer - Pig Script ID for the session: PIG-script.pig-9b551f9a-3393-4ab2-93ea-de21982a11cc
2016-12-29 10:28:42,237 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://vds002.databridge.tn:8188/ws/v1/timeline/
2016-12-29 10:28:42,704 [main] INFO org.apache.pig.backend.hadoop.PigATSClient - Created ATS Hook
2016-12-29 10:28:44,448 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
2016-12-29 10:28:44,521 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://vds002.databridge.tn:9083
2016-12-29 10:28:44,588 [main] INFO hive.metastore - Connected to metastore.
2016-12-29 10:28:45,238 [main] INFO org.apache.pig.Main - Pig script completed in 8 seconds and 278 milliseconds (8278 ms)
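The two `ls: cannot access ... No such file or directory` lines at the top of that log are launcher-script noise rather than a Pig error: when a shell glob matches no files, `ls` receives the literal pattern and fails. A minimal sketch of that behavior (using a hypothetical empty directory, not the actual container path):

```shell
# An empty directory stands in for the container's hive.tar.gz/hive/lib path
mkdir -p /tmp/empty-lib
# The glob matches no file, so ls gets the literal pattern and exits nonzero
ls /tmp/empty-lib/slf4j-api-*.jar 2>/dev/null || echo "glob matched nothing"
# → glob matched nothing
```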
Labels:
- Apache Pig
12-22-2016
06:34 PM
2 Kudos
@Jay SenSharma The real problem is the NameNode memory heap. When the History Server tries to start, the NameNode's memory usage quickly climbs past the 1 GB limit (the default configuration) and causes the service to fail. After raising the max heap to 3 GB it works fine. I previously installed Ambari 2.4.0.1 and did not see this behaviour (2.4.2.0 behaves the same as 2.4.1.0). Do you know why?
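Outside Ambari, the heap change described above would typically live in hadoop-env; a sketch of what the 3 GB setting could look like (in Ambari itself it is usually the NameNode heap size field under HDFS configs, and the exact variable name may differ by HDP version):

```
# hadoop-env.sh (sketch): raise the NameNode JVM max heap from the
# 1 GB default to 3 GB, matching the change described above
export HADOOP_NAMENODE_OPTS="-Xmx3072m ${HADOOP_NAMENODE_OPTS}"
```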
12-21-2016
07:10 PM
@Jay SenSharma
hostname -f : vds002.databridge.tn
netstat -tnlpa | grep 50070 : nothing

root@vds002:~# netstat -tnlpa | grep 50070
root@vds002:~#

How do I enable communication on this port? I tried:

firewall-cmd --add-port 50070/tcp --permanent

but it has no effect.
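Two hedged observations on the commands above: first, `firewall-cmd --permanent` changes only take effect after `firewall-cmd --reload`; second, the empty `netstat` output means nothing is listening on 50070 at all, so a client gets "Connection refused" regardless of firewall rules, and opening the port cannot help until the NameNode UI is actually bound there. A small sketch of the "nothing listening" symptom, assuming port 50070 is free on the local machine:

```shell
# With no listener on the port, the TCP connect is refused immediately;
# this failure is independent of any firewall rule
(exec 3<>/dev/tcp/127.0.0.1/50070) 2>/dev/null || echo "nothing listening on 50070"
```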
12-21-2016
03:03 PM
Hi Jay, Thanks for the help. No files were found under /var/log/hadoop-mapreduce/mapred, and the command doesn't work: curl: (7) Failed connect to vds002.databridge.tn:50070; Connection refused. The cluster is one machine and firewalld is disabled. Any idea please?