Member since: 10-27-2014
Posts: 38
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2895 | 04-07-2015 01:13 AM
 | 16218 | 12-18-2014 02:21 AM
04-26-2015
09:08 PM
Thanks denloe, the command `hbase hbck -repair` solved 6 inconsistencies. The step that deletes the znodes is what made the command work, I think, because I had tried `hbase hbck -repair` before and it just got stuck. HBase is working fine now, thank you.
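For reference, a minimal shell sketch of the sequence that worked in this thread, assuming the default ZooKeeper Znode Parent of /hbase and the CDH `zookeeper-client` wrapper:

```bash
# Reconstructed from the thread; run only while HBase is stopped.
zookeeper-client rmr /hbase        # clear HBase's stale coordination state in ZooKeeper
# ...restart HBase, then repair the remaining inconsistencies:
sudo -u hbase hbase hbck -repair
```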
04-23-2015
03:20 AM
Thanks Gonzalo, I tried deleting the "/hbase" folder (just moving it to /tmp), but when I restarted HBase, the HBase Master couldn't start; as I remember, it was because "/" is owned by the hdfs user, and I don't want to chown "/" to hbase. Even after I created a new "/hbase" and chowned it to hbase:hbase, the HBase Master still wouldn't start unless I moved the old "/hbase" back. About the znode in ZooKeeper, I really don't know much about it; I just know my ZooKeeper Znode Parent is "/hbase". Do I just delete this folder, or do I have to delete something else?
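Worth spelling out, since the confusion in this post turns on it: the /hbase directory in HDFS and the /hbase znode in ZooKeeper are two different trees. A hypothetical illustration:

```bash
# HBase's data directory in HDFS (what was moved to /tmp above):
hdfs dfs -ls /hbase
# HBase's coordination state in ZooKeeper (the "ZooKeeper Znode Parent");
# this is the one to remove, and only while HBase is stopped:
zookeeper-client ls /hbase
```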
04-23-2015
02:24 AM
Thanks for the reply Gonzalo. I can't do anything with HBase right now (can't disable or drop a table, can't even view the sample table in Hue), everything is just stuck, so I manually deleted all the table data on HDFS, keeping only the default (sample) table, but it still won't work.
04-23-2015
12:51 AM
HBase keeps having a region in transition:

Region | State | RIT time (ms)
---|---|---
1588230740 | hbase:meta,,1.1588230740 state=FAILED_OPEN, ts=Thu Apr 23 12:15:49 ICT 2015 (8924s ago), server=02slave.mabu.com,60020,1429765579823 | 8924009

Total number of Regions in Transition for more than 60000 milliseconds: 1
Total number of Regions in Transition: 1

I've tried `sudo -u hbase hbase hbck -repair` and also `unassign 'hbase:meta,,1.1588230740'`, but still can't fix the problem.
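For anyone debugging the same state, a hedged diagnostic sketch; `-details`, `-fixMeta`, and `-fixAssignments` are standard hbck options, though whether they resolve this particular FAILED_OPEN depends on the root cause:

```bash
# List exactly which regions/tables hbck considers inconsistent:
sudo -u hbase hbase hbck -details
# Targeted repairs for meta and region-assignment problems:
sudo -u hbase hbase hbck -fixMeta -fixAssignments
```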
Labels: Apache HBase
04-15-2015
09:58 PM
Oh, I finally got it working. Here is my HQL: `SELECT id, part.lock, part.key FROM mytable LATERAL VIEW explode(parts) parttable AS part;` Many thanks chrisf!
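A sketch of what that query should do against the sample JSON from the original post below (the table name and tab layout are illustrative):

```bash
hive -e "SELECT id, part.lock, part.key FROM mytable LATERAL VIEW explode(parts) parttable AS part;"
# Expected: one row per element of the parts array, e.g.
#   1    One lock    single key
#   1    2 lock      2 key
```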
04-15-2015
02:54 AM
Thanks for your reply chrisf. I've been trying to use LATERAL VIEW explode for a week but still can't figure out how to use it; can you give me an example based on my first post? I also tried json-serde in HiveContext: I can create the table, but I can't query it, although the query works fine in Hive. For example, in both Hive and HiveContext, I can create the table:

```sql
CREATE EXTERNAL TABLE data(
    parts array<struct<locks:STRING, key:STRING>>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION '/user/hue/...';
```

Then in Hive, I can use this:

```sql
SELECT parts.locks FROM data;
```

but it returns an error in HiveContext. Looking forward to your reply, thanks!
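One common cause of this works-in-Hive, fails-in-HiveContext pattern, offered as an assumption rather than a confirmed diagnosis: the SerDe jar is registered with Hive but not on Spark's classpath, so HiveContext cannot load org.openx.data.jsonserde.JsonSerDe. The jar path below is a placeholder:

```bash
# Hypothetical path; point --jars at the same json-serde jar that Hive uses.
spark-shell --jars /path/to/json-serde-with-dependencies.jar
```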
04-07-2015
01:27 AM
I have a simple JSON dataset, as below. How do I query all parts.lock?

JSON:

```json
{
    "id": 1,
    "name": "A green door",
    "price": 12.50,
    "tags": ["home", "green"],
    "parts": [
        {
            "lock": "One lock",
            "key": "single key"
        },
        {
            "lock": "2 lock",
            "key": "2 key"
        }
    ]
}
```

Query:

```sql
select id, name, price, parts.lock from product
```

The point is, if I use parts[0].lock it will return one row, as below:

```
{u'price': 12.5, u'id': 1, u'.lock': {u'lock': u'One lock', u'key': u'single key'}, u'name': u'A green door'}
```

But I want to return all the locks in the parts structure. Please help me with this, thanks!
Labels: Apache Spark
04-07-2015
01:13 AM
Found my solution: I needed to add 2 files, db.hsqldb.properties and db.hsqldb.script, to the Oozie job; then the job works fine. I still don't understand why, because I didn't need these 2 files when importing.
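A hedged sketch of what that fix looks like in practice (the HDFS staging path is an assumption): stage the two HSQLDB metastore files next to the workflow, then attach them to the action with `<file>` elements, the same way the mysql-connector jar is attached in the workflow below.

```bash
# Hypothetical staging path; the files then get localized into the Oozie
# launcher's working directory via <file> elements in the workflow.
hdfs dfs -put db.hsqldb.properties db.hsqldb.script /user/hue/mabu/oozie/
```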
03-11-2015
01:58 AM
Hi. First, I tried to import sample data from MySQL to HDFS using an Oozie Sqoop workflow; everything was OK. Then I tried to export the result back to MySQL. The `sqoop export` command itself is OK, but when I use an Oozie Sqoop workflow I get the error:

```
Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
```

I've tried many things with the mysql-connector-java.....jar file:
- I uploaded it to HDFS and added it to the file path.
- I also uploaded it to /user/oozie/share/lib/lib_.../sqoop/ and to /user/oozie/share/lib/sqoop/ and chmod 777 it.
- I also copied it to /opt/cloudera/parcels/CDH-5.3.2.../lib/sqoop/lib/ and to /var/lib/sqoop/ and chmod 777 too.

Here is the job definition:

```xml
<workflow-app name="sqoop_export" xmlns="uri:oozie:workflow:0.4">
    <start to="export_potluck"/>
    <action name="export_potluck">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>sqoop export --connect jdbc:mysql://192.168.6.10/mabu --username root --password 123456 --table potluck --export-dir /user/hue/mabu/test_e</command>
            <file>/user/hue/mabu/oozie/mysql-connector-java-5.1.34-bin.jar#mysql-connector-java-5.1.34-bin.jar</file>
        </sqoop>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

And here is the job configuration:

```xml
<configuration>
    <property>
        <name>hue-id-w</name>
        <value>31</value>
    </property>
    <property>
        <name>user.name</name>
        <value>hue</value>
    </property>
    <property>
        <name>oozie.use.system.libpath</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.job.user.name</name>
        <value>hue</value>
    </property>
    <property>
        <name>oozie.wf.rerun.failnodes</name>
        <value>false</value>
    </property>
    <property>
        <name>nameNode</name>
        <value>hdfs://00master.mabu.com:8020</value>
    </property>
    <property>
        <name>jobTracker</name>
        <value>00master.mabu.com:8032</value>
    </property>
    <property>
        <name>oozie.wf.application.path</name>
        <value>hdfs://00master.mabu.com:8020/user/hue/oozie/workspaces/_hue_-oozie-31-1425982013.65</value>
    </property>
</configuration>
```

Really appreciate the help, thanks!
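One thing the attempts above skip, offered as a hedged suggestion rather than a confirmed fix: after copying a jar into the versioned sharelib directory, the Oozie server has to be told to re-read the sharelib (or be restarted). On Oozie versions that support it, that looks roughly like:

```bash
# <timestamp> stands for the actual lib_... directory name, left elided here.
sudo -u oozie hdfs dfs -put mysql-connector-java-5.1.34-bin.jar \
    /user/oozie/share/lib/lib_<timestamp>/sqoop/
oozie admin -oozie http://localhost:11000/oozie -sharelibupdate
```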
Labels: Apache Oozie
02-08-2015
07:55 PM
I've just installed the Flume service on my cluster using Cloudera Manager. 2 of my agents are working, but 1 agent has bad health: "The Cloudera Manager Agent is not able to communicate with this role's web server." This is the error log:

```
org.apache.flume.node.PollingPropertiesFileConfigurationProvider : Failed to load configuration data. Exception follows.
org.apache.flume.FlumeException: Unable to load source type: com.cloudera.flume.source.TwitterSource, class: com.cloudera.flume.source.TwitterSource
```

Can someone help me with this? I'm new to Flume. Regards, Tu Nguyen
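com.cloudera.flume.source.TwitterSource is not bundled with Flume itself; it comes from Cloudera's cdh-twitter-example project, so the failing agent's classpath must contain that jar. A hedged sketch, with the jar name and plugin directory as assumptions:

```bash
# Run on the host with the failing agent; restart the agent from Cloudera
# Manager afterwards so the class is picked up.
mkdir -p /var/lib/flume-ng/plugins.d/twitter-streaming/lib
cp flume-sources-1.0-SNAPSHOT.jar /var/lib/flume-ng/plugins.d/twitter-streaming/lib/
```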
Labels: Apache Flume, Cloudera Manager