Member since: 04-10-2014
Posts: 18
Kudos Received: 2
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
| 5199 | 05-14-2014 01:04 AM
10-09-2014
09:47 AM
Hello, I agree with Jaraxal. I put the Banana war file in /opt/cloudera/parcels/CDH/lib/solr/webapp and it was working properly. However, I didn't test whether it is wiped when deploying a new client configuration in Solr, but I doubt it.
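For reference, the deployment step described above is just a copy into the parcel's Solr webapp directory. A minimal sketch, using temp-file stand-ins so it runs anywhere (the real path is the one from the post; the war file name is an assumption):

```shell
# Stand-in demo of deploying the Banana war into the Solr webapp dir.
SOLR_WEBAPP="$(mktemp -d)"   # real: /opt/cloudera/parcels/CDH/lib/solr/webapp
touch banana.war              # stand-in for the downloaded Banana war
cp banana.war "$SOLR_WEBAPP/"
ls -l "$SOLR_WEBAPP/banana.war"
```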
07-02-2014
08:18 AM
How do you put the data in the table? When I use org.apache.flume.sink.solr.morphline.TwitterSource as the type, I get some strange data starting with "Objj Avro"... So the data is not in JSON format but in Avro, I guess. So how do I create an external table in Hive that can parse the Avro format?
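The leading "Obj" bytes mentioned above are the Avro container-file magic, which supports the guess that the sink wrote Avro rather than JSON. One hedged way to confirm is to dump a record as JSON with the avro-tools CLI; the file name is hypothetical, and the snippet is guarded so it is a no-op on machines without the tool:

```shell
# Guarded sketch: inspect a hypothetical Avro file written by the sink.
if command -v avro-tools >/dev/null 2>&1; then
  avro-tools tojson tweets.avro | head -n 1
  RESULT=inspected
else
  RESULT=skipped   # avro-tools not installed on this machine
fi
echo "$RESULT"
```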
07-01-2014
02:18 AM
We just need to use Hive to create the Impala table...
06-27-2014
12:32 AM
Yep, the keytab that was created didn't have the correct permissions; I had forgotten to set them!
06-26-2014
08:00 AM
1 Kudo
I changed the path first, and it didn't work; then I changed the permissions (even for directories and subdirectories), and then it worked. Thanks.
06-23-2014
07:44 AM
Hello, when I try to link the external table to Impala from HBase I get:

CREATE EXTERNAL TABLE HB_IMPALA_TWEETS (
  id int,
  id_str string,
  text string,
  created_at timestamp,
  geo_latitude double,
  geo_longitude double,
  user_screen_name string,
  user_location string,
  user_followers_count string,
  user_profile_image_url string
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key,tweet:id_str,tweet:text,tweet:created_at,tweet:geo_latitude,tweet:geo_longitude, user:screen_name,user:location,user:followers_count,user:profile_image_url"
)
TBLPROPERTIES("hbase.table.name" = "tweets");

ERROR: AnalysisException: Syntax error in line 1:
...image_url string ) STORED BY 'org.apache.hadoop.hive.h...
                              ^
Encountered: BY
Expected: AS
CAUSED BY: Exception: Syntax error

Any idea why it is not working? Do I need to add a JAR?
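The 07-01 reply earlier in this history points at the resolution: Impala's DDL did not accept STORED BY, so the HBase-backed table has to be created through Hive and then queried from Impala. A guarded sketch (the hive CLI and the DDL file name are assumptions; it is a no-op on machines without Hive):

```shell
# Guarded sketch: run the CREATE EXTERNAL TABLE statement through the
# Hive CLI instead of impala-shell.
if command -v hive >/dev/null 2>&1; then
  hive -f create_hb_impala_tweets.hql   # hypothetical file holding the DDL above
  RESULT=created
else
  RESULT=skipped   # hive not installed on this machine
fi
echo "$RESULT"
```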
06-19-2014
08:12 AM
Hello, I am trying to configure my Twitter agent for Flume on a kerberized cluster. I followed the security manual, adding:

agentName.sinks.sinkName.hdfs.kerberosPrincipal = flume/fully.qualified.domain.name@YOUR-REALM.COM
agentName.sinks.sinkName.hdfs.kerberosKeytab = /etc/flume-ng/conf/flume.keytab

with my own values. As Kerberos principals I created both flume@HADOOP.COM and flume/_HOST@HADOOP.COM:

kadmin.local: ktadd -k /etc/flume-ng/conf/flume.keytab flume/evl2400469.eu.verio.net@HADOOP.COM
Entry for principal flume/evl2400469.eu.verio.net@HADOOP.COM with kvno 2, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/flume-ng/conf/flume.keytab.
Entry for principal flume/evl2400469.eu.verio.net@HADOOP.COM with kvno 2, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/etc/flume-ng/conf/flume.keytab

[root@evl2400469 ~]# kinit -p flume/evl2400469.eu.verio.net@HADOOP.COM
Password for flume/evl2400469.eu.verio.net@HADOOP.COM:
[root@evl2400469 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: flume/evl2400469.eu.verio.net@HADOOP.COM
[root@evl2400469 ~]# ls -l /etc/flume-ng/conf/
total 16
-rw-r--r-- 1 root root    0 Mar 28 08:14 flume.conf
-rw-r--r-- 1 root root 1661 Mar 28 08:14 flume-conf.properties.template
-rw-r--r-- 1 root root 1197 Mar 28 08:14 flume-env.sh.template
-rw-r----- 1 root root  234 Jun 19 16:18 flume.keytab
-rw-r--r-- 1 root root 3074 Mar 28 08:14 log4j.properties

Did I miss something in the configuration? I get this error:

Sink HDFS has been removed due to an error during configuration
java.lang.IllegalArgumentException: The keyTab file: /etc/flume-ng/conf/flume.keytab is nonexistent or can't read. Please specify a readable keytab file for Kerberos auth.
at org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:542)
at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:247)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:418)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

Thanks for helping me 🙂
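The 06-26 and 06-27 replies earlier in this history trace this error to keytab permissions: in the ls -l output above, flume.keytab is -rw-r----- root root, so the flume user cannot read it. A stand-in sketch of the fix (user and group names are assumptions; a temp file replaces the real keytab so the commands run anywhere):

```shell
# Stand-in demo: the keytab must be readable by the user running the
# agent. On the cluster this would be /etc/flume-ng/conf/flume.keytab,
# fixed with something like: chown flume:flume <keytab> (requires root
# and an existing flume user).
KEYTAB="$(mktemp)"        # stand-in for /etc/flume-ng/conf/flume.keytab
chmod 600 "$KEYTAB"       # root-only, like the post's -rw-r----- root root
chmod 400 "$KEYTAB"       # after chown, owner-readable is sufficient
stat -c '%a' "$KEYTAB"
```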
Labels:
- Apache Flume
05-14-2014
01:04 AM
It seems that the base pattern is mandatory, even though it is not specified in the documentation 🙂 So I added the base pattern "dc=example,dc=com" and it worked.
05-12-2014
02:17 AM
Hello, I am having trouble connecting to Cloudera Manager with a user from LDAP. I configured an LDAP server on the local machine, so the URI in Cloudera Manager is ldap://localhost/dc=example,dc=com. My ACL should allow anonymous auth:

access to attrs="userPassword"
    by anonymous auth
    by self write
    by * none
access to *
    by dn="uid=admin,dc=example,dc=com" write
    by self write
    by users read
    by anonymous auth

When I do a search manually I can find the user:

[root@evl2400469 openldap]# ldapsearch -x -L -b "ou=people,dc=example,dc=com" -s sub -H ldap://localhost
version: 1
#
# LDAPv3
# base <ou=people,dc=example,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# people, example.com
dn: ou=people,dc=example,dc=com
objectClass: organizationalUnit
ou: people
# toto1, people, example.com
dn: uid=toto1,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
uid: toto1
givenName: Toto1
sn: tt1
cn: Toto1
o: Example
title: System Administrator
userPassword:: e1NTSEF9T0xKaFNiaG9xOUlJTFY1YU9vQ0JzZVp3MDlUaTB1Rmgg
# search result
# numResponses: 3
# numEntries: 2

I am using this pattern: uid={0},ou=people,dc=example,dc=com. I tried with and without the LDAP Bind User Distinguished Name and LDAP Bind Password as well. But it seems it can't find the user; it says "user name or password not valid". I am trying to connect with:

dn: uid=toto1,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
uid: toto1
givenName: Toto1
sn: tt1
cn: Toto1
userPassword: {SSHA}OLJhSbhoq9IILV5aOoCBseZw09Ti0uFh
o: Example

I use "toto1" as the username and "password" as the password (I used slappasswd -h {SSHA} -s "password" to generate the hash). I really don't see where the problem is. Can you help me? Thanks. Regards, Kevin.

Here are some logs from the LDAP server:

May 12 15:38:39 evl2400469 slapd[14256]: conn=14 fd=11 ACCEPT from IP=127.0.0.1:33908 (IP=0.0.0.0:389)
May 12 15:38:39 evl2400469 slapd[14256]: conn=14 op=0 BIND dn="cn=admin,dc=example,dc=com" method=128
May 12 15:38:39 evl2400469 slapd[14256]: conn=14 op=0 BIND dn="cn=admin,dc=example,dc=com" mech=SIMPLE ssf=0
May 12 15:38:39 evl2400469 slapd[14256]: conn=14 op=0 RESULT tag=97 err=0 text=
May 12 15:38:39 evl2400469 slapd[14256]: conn=14 op=1 SRCH base="" scope=2 deref=3 filter="(member=uid=toto1,ou=people,dc=example,dc=com)"
May 12 15:38:39 evl2400469 slapd[14256]: conn=14 op=1 SRCH attr=cn objectClass javaSerializedData javaClassName javaFactory javaCodeBase javaReferenceAddress javaClassNames javaRemoteLocation
May 12 15:38:39 evl2400469 slapd[14256]: conn=14 op=1 SEARCH RESULT tag=101 err=32 nentries=0 text=
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 fd=12 ACCEPT from IP=127.0.0.1:34083 (IP=0.0.0.0:389)
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 op=0 BIND dn="uid=toto1,ou=people,dc=example,dc=com" method=128
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 op=0 BIND dn="uid=toto1,ou=people,dc=example,dc=com" mech=SIMPLE ssf=0
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 op=0 RESULT tag=97 err=0 text=
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 op=1 SRCH base="uid=toto1,ou=people,dc=example,dc=com" scope=0 deref=3 filter="(objectClass=*)"
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 op=1 SEARCH RESULT tag=101 err=0 nentries=1 text=
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 op=2 UNBIND
May 12 15:41:15 evl2400469 slapd[14256]: conn=15 fd=12 closed
May 12 15:41:15 evl2400469 slapd[14256]: conn=14 op=2 SRCH base="" scope=2 deref=3 filter="(member=uid=toto1,ou=people,dc=example,dc=com)"
May 12 15:41:15 evl2400469 slapd[14256]: conn=14 op=2 SRCH attr=cn objectClass javaSerializedData javaClassName javaFactory javaCodeBase javaReferenceAddress javaClassNames javaRemoteLocation
May 12 15:41:15 evl2400469 slapd[14256]: conn=14 op=2 SEARCH RESULT tag=101 err=32 nentries=0 text=
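One way to separate Cloudera Manager from the directory itself is to attempt the same simple bind that CM performs, directly with ldapsearch. A guarded sketch (DN, password, and server are the values from the post; the snippet is a no-op where the OpenLDAP clients are not installed):

```shell
# Guarded sketch: bind as the test user and read back its own entry.
# If this bind succeeds, the credentials are fine and the problem is
# more likely in CM's search configuration than in the directory.
if command -v ldapsearch >/dev/null 2>&1; then
  ldapsearch -x -H ldap://localhost \
    -D "uid=toto1,ou=people,dc=example,dc=com" -w password \
    -b "uid=toto1,ou=people,dc=example,dc=com" -s base "(objectClass=*)"
  RESULT=attempted
else
  RESULT=skipped   # openldap-clients not installed on this machine
fi
echo "$RESULT"
```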
Labels:
- Cloudera Manager
04-29-2014
07:27 AM
1 Kudo
Hello, I found out that for this exception : Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:120)
at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:230)
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:85)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1059)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:920)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:606)
at java.lang.Thread.run(Thread.java:744)

I need to clean the /hbase znode in ZooKeeper, but I didn't find it. I didn't find how to remove a znode; is it located in HDFS? Found some clues here: http://web.archiveorange.com/archive/v/WCpTuGEshqKoq2ZC4X3L (For the record, I got this error after deleting all services and reinstalling them one by one.) Thanks for your help 🙂
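For what it's worth, znodes live in ZooKeeper itself, not in HDFS, so the stale /hbase znode is removed with the ZooKeeper CLI. A guarded sketch (the server address is an assumption, and HBase should be stopped first; the snippet is a no-op where the client is not installed):

```shell
# Guarded sketch: recursively delete the stale /hbase znode. HBase
# recreates it on the next Master start.
if command -v zookeeper-client >/dev/null 2>&1; then
  zookeeper-client -server localhost:2181 rmr /hbase
  RESULT=removed
else
  RESULT=skipped   # zookeeper-client not installed on this machine
fi
echo "$RESULT"
```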
Labels:
- Apache HBase