Support Questions


Kudu and Kerberos

Explorer

Hi,

I'm on CDH 5.13.0 with Kudu 1.5.0, and I have a problem when I enable Kerberos authentication on Kudu. Kerberos authentication works fine on the other components (HBase, HDFS, Impala, etc.).

 

When I try to create a table on Kudu storage with Hue or impala-shell, I get an error.

 

Query:

 

create table kudu_db.test3 (
  row_id string,
  test string,
  primary key (row_id)
)
partition by hash (row_id) partitions 8
stored as kudu;

 

Error:

 

ImpalaRuntimeException: Error creating Kudu table 'impala::s4do05k0_p04.test3' CAUSED BY: NonRecoverableException: Not enough live tablet servers to create a table with the requested replication factor 3. 0 tablet servers are alive.

 

In the Kudu Master Web UI in Cloudera Manager, on the "Tablet Servers" tab, I see this:

2018-08-02 12_05_38-Kudu.png

 

If I disable Kerberos, I see this:

 2018-08-02 11_49_58-Kudu.png

Configuration in Cloudera Manager

2018-08-02 12_07_15-Kudu - Cloudera Manager.png

 

CREATE TABLE doesn't work, but I can still SELECT from existing tables...

 

Can anyone help me, please?

 

Best regards

 

 

21 REPLIES

Explorer

Just as a quick check: you are looking to set up Kerberos between the Kudu master, tablet servers, and Kudu client, yes? (i.e., not Kerberos authentication from a user client via Impala, as that is not set up here.)

 

If so, have you set up the keytab requirements etc. as per: https://www.cloudera.com/documentation/enterprise/5-13-x/topics/kudu_security.html#concept_syg_k35_l... ?

Explorer

Hi,

 

Yes, I want to set up Kerberos between the Kudu master, tablet servers, and Kudu client.

 

Kerberos and TLS/SSL are enabled and working fine on all the other components of the cluster.

 

That is the setup I implemented.

 

Best regards

Super Collaborator

It sounds like Impala might be configured to talk to the wrong master, or one of the Kudu masters is stuck and needs to be repaired.

 

1) How many Kudu master servers are you running?

 

2) Do you see any error messages in the Kudu master log file(s)?

 

3) Do you see any errors when you run the following command?

 

sudo -u kudu kudu cluster ksck <master-addresses>

See https://www.cloudera.com/documentation/enterprise/5-13-x/topics/kudu_administration_cli.html#ksck for documentation on running ksck.
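Note that on a kerberized cluster the kudu CLI itself needs a valid Kerberos ticket before ksck can connect. A rough sketch (the keytab path, principal, and master hostname below are placeholders, not your actual values):

```shell
# Acquire a ticket as the Kudu service principal first (path and principal
# are assumptions -- adjust to where your Kudu keytab actually lives).
sudo -u kudu kinit -kt /path/to/kudu.keytab kudu/$(hostname -f)

# Then run the consistency check against the master(s).
sudo -u kudu kudu cluster ksck master-host.example.com
```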

 

 

4) Is Impala configured with the correct --kudu_master_hosts flag? It should be configured to talk to all of the masters. See https://www.cloudera.com/documentation/enterprise/5-13-x/topics/kudu_impala.html for documentation on that.

Explorer

Hi

1) How many Kudu master servers are you running?
=> Only one

2) Do you see any error messages in the Kudu master log file(s)?
=> No errors in kudu-master.INFO

3) Do you see any errors when you run the following command?

 

sudo -u kudu kudu cluster ksck <master-addresses>

See https://www.cloudera.com/documentation/enterprise/5-13-x/topics/kudu_administration_cli.html#ksck for documentation on running ksck.

=> Yes, a lot...
 
 ...
 Tablet 6363250dcd7a47c4b5c2d4710c6536fd of table 'poc_rgpd_kudu_db.xxxxxxne_adresse_snappy' is unavailable: 3 replica(s) not RUNNING
  36386227a1624b74895dd1fb6b3150e9: TS unavailable [LEADER]
  3920550eeade417885b846064ddd2410: TS unavailable
  ea71c709fef34e0d87adfe90f917abc8: TS unavailable

Tablet c0ce4bfaf2a345d3944180e0168bfc84 of table 'poc_rgpd_kudu_db.xxxxxxtion_snt' is unavailable: 3 replica(s) not RUNNING
  ea71c709fef34e0d87adfe90f917abc8: TS unavailable
  041b5a3e1438484fbf3a68b10d91a928: TS unavailable [LEADER]
  fa697a7fc04d4c62ae031c77db71be9b: TS unavailable

Table impala::s4do05k0_p24.ec_donneesss has 24 unavailable tablet(s)

Table Summary
                       Name                       |   Status    | Total Tablets | Healthy | Under-replicated | Unavailable
--------------------------------------------------+-------------+---------------+---------+------------------+-------------
 impala::poc_rgpd_kudu_db.xxxxxxtion              | UNAVAILABLE | 10            | 0       | 0                | 10
 impala::poc_rgpd_kudu_db.xxxxxxne_adresse        | UNAVAILABLE | 20            | 0       | 0                | 20
 impala::poc_rgpd_kudu_db.xxxxxxne_adresse_ctas   | UNAVAILABLE | 20            | 0       | 0                | 20
 impala::poc_rgpd_kudu_db.xxxxxxne_adresse_snappy | UNAVAILABLE | 20            | 0       | 0                | 20
 impala::poc_rgpd_kudu_db.xxxxxxtion_snt          | UNAVAILABLE | 20            | 0       | 0                | 20
==================
Errors:
==================
error fetching info from tablet servers: Not found: No tablet servers found
table consistency check error: Corruption: 45 out of 45 table(s) are bad

FAILED
Runtime error: ksck discovered errors

4) Is Impala configured with the correct --kudu_master_hosts flag? It should be configured to talk to all of the masters. See https://www.cloudera.com/documentation/enterprise/5-13-x/topics/kudu_impala.html for documentation on that.
No. How can I configure --kudu_master_hosts in Cloudera Manager? I can't find this setting.

Thanks for your help

Best regards

Super Collaborator

> 3) Do you see any errors when you run the following command?

> sudo -u kudu kudu cluster ksck <master-addresses>

> See https://www.cloudera.com/documentation/enterprise/5-13-x/topics/kudu_administration_cli.html#ksck for documentation on running ksck.

=> yes a lot...

 

OK, you will need to take a look at the tserver logs to figure out what is going on, but it sounds like something is wrong with your tablet servers. Can you post any error messages you see in the kudu-tserver.INFO logs?
 
> 4) Is Impala configured with the correct --kudu_master_hosts flag? It should be configured to talk to all of the

> masters. See https://www.cloudera.com/documentation/enterprise/5-13-x/topics/kudu_impala.html for

> documentation on that.

 

> No. How can I configure --kudu_master_hosts in Cloudera Manager? I can't find this setting.

I just checked my dev cluster, and you probably don't have to change anything: Cloudera Manager sets it for Impala automatically if a Kudu Service is configured for it. I think your problem is with your Kudu tablet servers, not with Impala.
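If you want to double-check which masters impalad was actually started with, the running daemon's flags are visible on the impalad debug web UI (port 25000 by default; the hostname below is a placeholder, and on a kerberized cluster curl may need SPNEGO via --negotiate):

```shell
# Inspect the running impalad's command-line flags (hypothetical hostname).
curl -s --negotiate -u : http://impalad-host.example.com:25000/varz | grep kudu_master_hosts
```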

Explorer

I restarted a tserver and got the following output.

 

stdout

 

Fri Sep 14 15:03:21 CEST 2018
JAVA_HOME=/logiciels/java/jdk
Using /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER as conf dir
Using scripts/kudu.sh as process script
CONF_DIR=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER
CMF_CONF_DIR=/etc/cloudera-scm-agent
Fri Sep 14 15:03:21 CEST 2018: KUDU_HOME: /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
Fri Sep 14 15:03:21 CEST 2018: CONF_DIR: /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER
Fri Sep 14 15:03:21 CEST 2018: CMD: tserver
Fri Sep 14 15:03:21 CEST 2018: Found master(s) on XXX1105.krj.gie

stderr

 

Fri Sep 14 15:03:21 CEST 2018
+ locate_java_home
+ locate_java_home_no_verify
+ JAVA6_HOME_CANDIDATES=('/usr/lib/j2sdk1.6-sun' '/usr/lib/jvm/java-6-sun' '/usr/lib/jvm/java-1.6.0-sun-1.6.0' '/usr/lib/jvm/j2sdk1.6-oracle' '/usr/lib/jvm/j2sdk1.6-oracle/jre' '/usr/java/jdk1.6' '/usr/java/jre1.6')
+ local JAVA6_HOME_CANDIDATES
+ OPENJAVA6_HOME_CANDIDATES=('/usr/lib/jvm/java-1.6.0-openjdk' '/usr/lib/jvm/jre-1.6.0-openjdk')
+ local OPENJAVA6_HOME_CANDIDATES
+ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')
+ local JAVA7_HOME_CANDIDATES
+ OPENJAVA7_HOME_CANDIDATES=('/usr/lib/jvm/java-1.7.0-openjdk' '/usr/lib/jvm/java-7-openjdk')
+ local OPENJAVA7_HOME_CANDIDATES
+ JAVA8_HOME_CANDIDATES=('/usr/java/jdk1.8' '/usr/java/jre1.8' '/usr/lib/jvm/j2sdk1.8-oracle' '/usr/lib/jvm/j2sdk1.8-oracle/jre' '/usr/lib/jvm/java-8-oracle')
+ local JAVA8_HOME_CANDIDATES
+ OPENJAVA8_HOME_CANDIDATES=('/usr/lib/jvm/java-1.8.0-openjdk' '/usr/lib/jvm/java-8-openjdk')
+ local OPENJAVA8_HOME_CANDIDATES
+ MISCJAVA_HOME_CANDIDATES=('/Library/Java/Home' '/usr/java/default' '/usr/lib/jvm/default-java' '/usr/lib/jvm/java-openjdk' '/usr/lib/jvm/jre-openjdk')
+ local MISCJAVA_HOME_CANDIDATES
+ case ${BIGTOP_JAVA_MAJOR} in
+ JAVA_HOME_CANDIDATES=(${JAVA7_HOME_CANDIDATES[@]} ${JAVA8_HOME_CANDIDATES[@]} ${JAVA6_HOME_CANDIDATES[@]} ${MISCJAVA_HOME_CANDIDATES[@]} ${OPENJAVA7_HOME_CANDIDATES[@]} ${OPENJAVA8_HOME_CANDIDATES[@]} ${OPENJAVA6_HOME_CANDIDATES[@]})
+ '[' -z /logiciels/java/jdk ']'
+ verify_java_home
+ '[' -z /logiciels/java/jdk ']'
+ echo JAVA_HOME=/logiciels/java/jdk
+ '[' -n '' ']'
+ source_parcel_environment
+ '[' '!' -z /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/meta/cdh_env.sh ']'
+ OLD_IFS=' 	
'
+ IFS=:
+ SCRIPT_ARRAY=($SCM_DEFINES_SCRIPTS)
+ DIRNAME_ARRAY=($PARCEL_DIRNAMES)
+ IFS=' 	
'
+ COUNT=1
++ seq 1 1
+ for i in '`seq 1 $COUNT`'
+ SCRIPT=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/meta/cdh_env.sh
+ PARCEL_DIRNAME=CDH-5.13.0-1.cdh5.13.0.p0.29
+ . /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/meta/cdh_env.sh
++ CDH_DIRNAME=CDH-5.13.0-1.cdh5.13.0.p0.29
++ export CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop
++ CDH_HADOOP_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop
++ export CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-0.20-mapreduce
++ CDH_MR1_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-0.20-mapreduce
++ export CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-hdfs
++ CDH_HDFS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-hdfs
++ export CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-httpfs
++ CDH_HTTPFS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-httpfs
++ export CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-mapreduce
++ CDH_MR2_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-mapreduce
++ export CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-yarn
++ CDH_YARN_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-yarn
++ export CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hbase
++ CDH_HBASE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hbase
++ export CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/zookeeper
++ CDH_ZOOKEEPER_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/zookeeper
++ export CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hive
++ CDH_HIVE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hive
++ export CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hue
++ CDH_HUE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hue
++ export CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/oozie
++ CDH_OOZIE_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/oozie
++ export CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop
++ CDH_HUE_PLUGINS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop
++ export CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/flume-ng
++ CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/flume-ng
++ export CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/pig
++ CDH_PIG_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/pig
++ export CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hive-hcatalog
++ CDH_HCAT_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hive-hcatalog
++ export CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/sqoop2
++ CDH_SQOOP2_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/sqoop2
++ export CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/llama
++ CDH_LLAMA_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/llama
++ export CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/sentry
++ CDH_SENTRY_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/sentry
++ export TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/bigtop-tomcat
++ TOMCAT_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/bigtop-tomcat
++ export JSVC_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/bigtop-utils
++ JSVC_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/bigtop-utils
++ export CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop/bin/hadoop
++ CDH_HADOOP_BIN=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop/bin/hadoop
++ export CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/impala
++ CDH_IMPALA_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/impala
++ export CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/solr
++ CDH_SOLR_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/solr
++ export CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hbase-solr
++ CDH_HBASE_INDEXER_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hbase-solr
++ export SEARCH_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/search
++ SEARCH_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/search
++ export CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/spark
++ CDH_SPARK_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/spark
++ export WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ WEBHCAT_DEFAULT_XML=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/etc/hive-webhcat/conf.dist/webhcat-default.xml
++ export CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-kms
++ CDH_KMS_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hadoop-kms
++ export CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/parquet
++ CDH_PARQUET_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/parquet
++ export CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/avro
++ CDH_AVRO_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/avro
++ export CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
++ CDH_KUDU_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
+ echo 'Using /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER as conf dir'
+ echo 'Using scripts/kudu.sh as process script'
+ replace_conf_dir
+ echo CONF_DIR=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER
+ echo CMF_CONF_DIR=/etc/cloudera-scm-agent
+ EXCLUDE_CMF_FILES=('cloudera-config.sh' 'httpfs.sh' 'hue.sh' 'impala.sh' 'sqoop.sh' 'supervisor.conf' 'config.zip' 'proc.json' '*.log' '*.keytab' '*jceks')
++ printf '! -name %s ' cloudera-config.sh httpfs.sh hue.sh impala.sh sqoop.sh supervisor.conf config.zip proc.json '*.log' kudu.keytab creds.localjceks
+ find /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER -type f '!' -path '/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/logs/*' '!' -name cloudera-config.sh '!' -name httpfs.sh '!' -name hue.sh '!' -name impala.sh '!' -name sqoop.sh '!' -name supervisor.conf '!' -name config.zip '!' -name proc.json '!' -name '*.log' '!' -name kudu.keytab '!' -name creds.localjceks -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER#g' '{}' ';'
+ make_scripts_executable
+ find /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER -regex '.*\.\(py\|sh\)$' -exec chmod u+x '{}' ';'
+ RUN_DIR=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER
+ '[' '' == true ']'
+ chmod u+x /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/scripts/kudu.sh
+ export COMMON_SCRIPT=/usr/lib64/cmf/service/common/cloudera-config.sh
+ COMMON_SCRIPT=/usr/lib64/cmf/service/common/cloudera-config.sh
+ exec /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/scripts/kudu.sh tserver
+ date
Fri Sep 14 15:03:21 CEST 2018
+ DEFAULT_KUDU_HOME=/usr/lib/kudu
+ export KUDU_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
+ KUDU_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
+ export KUDU_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
+ KUDU_HOME=/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
+ CMD=tserver
+ shift 2
+ log 'KUDU_HOME: /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu'
++ date
+ timestamp='Fri Sep 14 15:03:21 CEST 2018'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: KUDU_HOME: /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: KUDU_HOME: /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu'
Fri Sep 14 15:03:21 CEST 2018: KUDU_HOME: /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu
+ log 'CONF_DIR: /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER'
++ date
+ timestamp='Fri Sep 14 15:03:21 CEST 2018'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: CONF_DIR: /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: CONF_DIR: /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER'
Fri Sep 14 15:03:21 CEST 2018: CONF_DIR: /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER
+ log 'CMD: tserver'
++ date
+ timestamp='Fri Sep 14 15:03:21 CEST 2018'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: CMD: tserver'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: CMD: tserver'
Fri Sep 14 15:03:21 CEST 2018: CMD: tserver
+ GFLAG_FILE=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/gflagfile
+ '[' '!' -r /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/gflagfile ']'
+ MASTER_FILE=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/master.properties
+ '[' '!' -r /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/master.properties ']'
+ MASTER_IPS=
++ cat /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/master.properties
+ for line in '$(cat "$MASTER_FILE")'
+ readconf XXX1105.krj.gie:server.address=
+ local conf
+ IFS=:
+ read host conf
+ IFS==
+ read key value
+ case $key in
+ '[' -n '' ']'
+ actual_value=XXX1105.krj.gie
+ '[' -n '' ']'
+ MASTER_IPS=XXX1105.krj.gie
+ log 'Found master(s) on XXX1105.krj.gie'
++ date
+ timestamp='Fri Sep 14 15:03:21 CEST 2018'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: Found master(s) on XXX1105.krj.gie'
+ echo 'Fri Sep 14 15:03:21 CEST 2018: Found master(s) on XXX1105.krj.gie'
Fri Sep 14 15:03:21 CEST 2018: Found master(s) on XXX1105.krj.gie
+ '[' false == true ']'
+ KUDU_ARGS=
+ '[' true == true ']'
+ KUDU_ARGS='              --rpc_authentication=required              --rpc_encryption=required              --keytab_file=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/kudu.keytab'
+ '[' tserver = master ']'
+ '[' tserver = tserver ']'
+ KUDU_ARGS='              --rpc_authentication=required              --rpc_encryption=required              --keytab_file=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/kudu.keytab --tserver_master_addrs=XXX1105.krj.gie'
+ exec /opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/kudu/sbin/kudu-tserver --rpc_authentication=required --rpc_encryption=required --keytab_file=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/kudu.keytab --tserver_master_addrs=XXX1105.krj.gie --flagfile=/run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/gflagfile
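(For reference, the readconf logic in the trace above just splits each master.properties line on the first ':' and '=', and falls back to the host when no explicit value is set. A minimal sketch of that parsing, using the line from this trace:)

```shell
# master.properties lines have the form <host>:server.address=<optional-override>.
line="XXX1105.krj.gie:server.address="
host="${line%%:*}"    # text before the first ':'  -> the host
rest="${line#*:}"     # text after the first ':'
key="${rest%%=*}"     # text before '='            -> "server.address"
value="${rest#*=}"    # text after '='             -> override value, empty here
# When no override value is given, the host itself becomes the master address:
MASTER_IPS="${value:-$host}"
echo "$MASTER_IPS"
```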

But I don't have a log file:

 

[Errno 2] No such file or directory: '/logiciels/hadoop/log/kudu/kudu-tserver.INFO'
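Since the tserver was started with --flagfile, the directory it is actually configured to log to should be recorded in that gflagfile (path copied from the stdout above; --log_dir is the standard Kudu logging gflag):

```shell
# Check where this tserver is actually configured to write its logs.
grep -- '--log_dir' /run/cloudera-scm-agent/process/7996-kudu-KUDU_TSERVER/gflagfile
```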

Super Collaborator
Can you access the role log files through Cloudera Manager?

Super Collaborator

Some more questions:

 

  1. When was the last time the cluster worked?
  2. What has changed since then?

Explorer
> 1. When was the last time the cluster worked?
> 2. What has changed since then?

 

Here is what I see when I check:

 

2018-09-17 10_50_25-Kudu - Cloudera Manager.png