Member since: 10-04-2016
Posts: 30
Kudos Received: 3
Solutions: 0
06-04-2018
08:49 AM
Thank you, Sandeep. That was so quick! Saved my day.
06-04-2018
08:26 AM
I am trying to load an HBase table from a data file on my local path, but it fails with the error below.

hbase(main):010:0> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator="," -Dimporttsv.columns="HBASE_ROW_KEY,events:driverId,events:driverName,events:eventTime,events:eventType,events:latitudeColumn,events:longitudeColumn,events:routeId,events:routeName,events:truckId" irfan_ns:driver_dangerous_event file:///home/aziz/driver

SyntaxError: (hbase):10: syntax error, unexpected tIDENTIFIER
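A hedged note on the error above: the hbase(main):010:0> prompt shows the command was typed inside the HBase shell, which interprets input as JRuby, so a bash-style command line raises "unexpected tIDENTIFIER". ImportTsv is normally run from the operating-system shell instead. A minimal sketch of assembling that same command for the OS shell (table name and file path taken from the post; whether this resolves the load depends on the cluster):

```python
# Assemble the ImportTsv invocation as an OS-shell command (not HBase shell
# input). Adjacent string literals are concatenated into one columns argument.
cmd = [
    "hbase", "org.apache.hadoop.hbase.mapreduce.ImportTsv",
    "-Dimporttsv.separator=,",
    "-Dimporttsv.columns=HBASE_ROW_KEY,events:driverId,events:driverName,"
    "events:eventTime,events:eventType,events:latitudeColumn,"
    "events:longitudeColumn,events:routeId,events:routeName,events:truckId",
    "irfan_ns:driver_dangerous_event",
    "file:///home/aziz/driver",
]
# On the cluster host this would be run with subprocess.run(cmd, check=True);
# here we only print the command line for inspection.
print(" ".join(cmd))
```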
Labels:
- Apache HBase
10-20-2017
01:15 AM
You can search for it from the console with the command: $ locate *hive-hcatalog-core*.jar
09-13-2017
12:35 AM
Hi, one column gives an error when I try to retrieve it in QlikView from a Hive table, though it is queryable in Hive itself. I tried to cast it in different ways, but to no avail. I am not sure what the issue could be.

SQL##f - SqlState: S1000, ErrorCode: 110, ErrorMsg: [Cloudera][ImpalaODBC] (110) Error while executing a query in Impala: [HY000] : AnalysisException: Unsupported type in 't_wfm.wfm_time_step'.

SQL SELECT cast(`wfm_time_step` as DATE) FROM IMPALA.`test_fin_base`.`t_wfm`

First I kept the data type as STRING and it failed; later I changed it to TIMESTAMP, but the issue is still the same.
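A hedged aside for readers hitting the same error: Impala versions of that era had no DATE type, so cast(... as DATE) fails analysis with "Unsupported type"; one workaround is to keep the column as STRING (or TIMESTAMP) and convert on the client after fetching. A minimal sketch of that client-side conversion (the sample value and its format are assumptions, not from the post):

```python
from datetime import datetime

# Hypothetical workaround: leave wfm_time_step as a STRING in the table and
# parse it after fetching, instead of casting to DATE inside the Impala query.
raw = "2017-09-13 00:35:00"  # assumed sample value and format
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
print(parsed.date())  # → 2017-09-13
```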
Labels:
- Apache Hive
- Apache Impala
01-04-2017
12:21 PM
After setting up a Flume agent with an HTTP source and an HDFS sink, I am doing some GET and POST requests against the URL of my Flume source (http-source).

>>> import json
>>> import requests
>>> b={'a':'b'}
>>> a=json.dumps(b)
>>> a
'{"a": "b"}'
>>> res=requests.post('http://hdp.localdomain:41414',data=a)
>>> res.status_code
400
>>> res=requests.get('http://hdp.localdomain:41414')
>>> res.status_code
500
But I am not getting an OK status code for my requests. I have two questions. First, is the URL of the Flume source correct? http://hdp.localdomain:41414 is hostname:portnumber, where the port is defined in the agent configuration, and the agent log shows the agent is running OK. Second, how should I know what type of data I can pass? Above, I passed a as JSON. Because of these issues I cannot see anything on the sink. I am really stuck here and don't know where the deadlock is.
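One likely cause of the 400, offered as a hedged sketch: Flume's HTTPSource with its default JSONHandler expects a JSON *array* of events, each an object with a "headers" map and a string "body", rather than a bare JSON object like '{"a": "b"}'. Building such a payload (the header values here are illustrative assumptions):

```python
import json

# Wrap the original data in the event envelope the default JSONHandler parses:
# a list of {"headers": {...}, "body": "<string>"} objects.
body = json.dumps({"a": "b"})                      # the original payload
events = [{"headers": {"source": "test"}, "body": body}]
payload = json.dumps(events)
print(payload)
# then: requests.post('http://hdp.localdomain:41414', data=payload,
#                     headers={'Content-Type': 'application/json'})
```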
Labels:
- Apache Ambari
- Apache Flume
- Apache Hadoop
12-29-2016
11:19 AM
@Kuldeep Kulkarni @Sumesh @Michael Young
Thank you, guys. The issue was with the hosts file. Since I am using a cloud server that has two IPs, I was using the external IP, which was wrong. After adding the internal IP to the hosts file everything worked: the ResourceManager was OK and the program ran perfectly.
12-20-2016
07:55 PM
1 Kudo
I am running the following example jar file:

sudo yarn jar /usr/hdp/2.3.6.0-3796/hadoop-mapreduce/hadoop-mapreduce-examples-2.7.1.2.3.6.0-3796.jar wordcount /data/input/word.txt /data/out

But it gives a connection-failed error:

INFO client.RMProxy: Connecting to ResourceManager at hadoop.localdomain/127.0.1.1:8050 Retrying connect to server: hadoop.localdomain/127.0.1.1:8050.

Here is something from the log /var/log/hadoop-yarn/yarn/yarn-yarn-resourcemanager-hadoop.log:

INFO zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(1019)) - Opening socket connection to server hadoop.localdomain/127.0.1.1:2181.

I know I need the ResourceManager to run a MapReduce job, and I found from the Ambari GUI that my ResourceManager is indeed not working. One thing confuses me: in the yarn-site.xml file, the value for the YARN resource uses hadoop.localdomain, which points to 127.0.1.1 in the hosts file.

<name>hadoop.registry.zk.quorum</name>
<value>hadoop.localdomain:2181</value>

I am not sure hadoop.localdomain is the right hostname, since it points to 127.0.1.1, and I can only access the Ambari GUI through my browser at 192.130.3.21:8080.

ubuntu@hadoop:~$ more /etc/hosts
127.0.0.1 localhost
192.130.3.21 localhost2
127.0.1.1 hadoop.localdomain hadoop
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

I am able to access Ambari using the IP 192.130.3.21:8080. Does anyone know what the configuration should be (hadoop.localdomain or something else)? I don't know what is wrong here.
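The hosts file above suggests why the connection retries loop, offered as a hedged diagnosis: 127.0.1.1 is a loopback address, so a ResourceManager or ZooKeeper advertised at hadoop.localdomain/127.0.1.1 is only reachable from that machine itself; mapping hadoop.localdomain to the routable internal IP avoids this. A small sketch of the distinction (addresses taken from the post):

```python
import ipaddress

# Classify each address: loopback addresses (127.0.0.0/8) are unreachable
# from other hosts, while the internal IP is routable on the network.
for ip in ("127.0.1.1", "192.130.3.21"):
    kind = "loopback" if ipaddress.ip_address(ip).is_loopback else "routable"
    print(ip, kind)
```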
Labels: