Member since: 04-21-2015
Posts: 49
Kudos Received: 2
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 5309 | 08-24-2015 04:51 AM
12-08-2017
04:57 AM
Hello csguna, I just checked again and it is healthy:

Status: HEALTHY
Total size: 5888983039200 B (Total open files size: 9932111872 B)
Total dirs: 671975
Total files: 1376803
Total symlinks: 0 (Files currently being written: 1)
Total blocks (validated): 765441 (avg. block size 7693581 B) (Total open file blocks (not validated): 74)
Minimally replicated blocks: 765441 (99.99999 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.9998524
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 4
Number of racks: 1
FSCK ended at Fri Dec 08 14:55:28 EET 2017 in 59696 milliseconds
The filesystem under path '/' is HEALTHY

Thanks for your reply.
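For reference, a summary like the one above comes from an HDFS filesystem check; a minimal sketch of the invocations that produce it, assuming a stock hdfs client (exact flags and output format vary by CDH version):

# Run an HDFS filesystem check and show the summary (similar to the output pasted above)
hdfs fsck / | tail -n 25
# Include files currently open for write in the report
hdfs fsck / -openforwrite
# Drill into a specific directory, listing its files and blocks
hdfs fsck /grid1/hive/warehouse -files -blocks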
12-04-2017
12:09 AM
Hello all
I have 2 tables: the first is a CSV-backed table and the second is a Parquet table. I am selecting data from the first and inserting it into the second.
I am inserting the data with the following SQL:
insert overwrite header partition (p_tar = 2017112702) select FL_SEQ, REC_SEQ, EVNT_TYP, EVNT_SUB_TYP, Z1, Z2, K12, K24, K44, K45, K49, K53, K54, K66, K67, K68, K91, K92, K93, K94 from sgn.scsv where substr(cast (z2 as string), 1,10)="2017112702";
The SQL statement inserted many records, but I am getting the following error 27 times:
WARNINGS: Error converting column: 1 TO INT (Data is: ▒'▒@▒▒)
file: hdfs://nahadoop01:8020/grid1/hive/warehouse/sgn.db/scsv/20171127_02.csv
record: @[u▒;▒'▒@▒▒;;;;;;;;;;;;;;;;;;;;;;;;;
The CSV table's first 6 columns are as follows:
Query: describe sgn.scsv
+--------------+-----------+---------+
| name | type | comment |
+--------------+-----------+---------+
| fl_dt | int | |
| fl_seq | int | |
| rec_seq | int | |
| evnt_typ | int | |
| evnt_sub_typ | int | |
| z1 | bigint | |
And the Parquet table is similar to the CSV table:
Query: describe header
+------------------+-----------+
| name | type |
+------------------+-----------+
| fl_seq | int |
| rec_seq | int |
| evnt_typ | int |
| evnt_sub_typ | int |
| strt_ts | bigint |
| end_ts | bigint |
I downloaded the /grid1/hive/warehouse/sgn.db/scsv/20171127_02.csv file to the Linux filesystem and then checked the file contents.
There is only one line matching the error, and the bad data is not in the first columns:
impala@hadoop01:$ grep -n "@\[u" 20171127_02.csv
59404785:20171127;2189;1575;10;220;20171126225552922;20171126225706058;;;;;;;;;;;;;;;;;104;28;31;161;;;;;;;;;;;;;;;;;;;80;47457;;80;60006;;;;;;;;;;;;;;;;158313;610798;2531;339864;802705;966917;4;73075;73136164;7@[u▒;▒'▒@▒▒;;;;;;;;;;;;;;;;;;;;;;;;;32;864;1546;0;;;;Mozilla/5.▒▒oYE=D"▒F▒▒▒
So I am wondering why I am getting the same error 27 times if there is only one bad line in the CSV file. The more important question is why I am getting this error for "column 1" even though column 1 already has valid data.
Do you have any idea?
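A quick way to sanity-check the raw file is to count fields per row and flag the rows that deviate; a minimal sketch, assuming ';' is the delimiter (as in the grep output above) and that every valid line has the same number of fields:

#!/usr/bin/env bash
# Sketch: find rows in the raw CSV whose field count differs from the majority.
FILE=20171127_02.csv
# Most common field count in the file (treated as the "expected" width)
EXPECTED=$(awk -F';' '{print NF}' "$FILE" | sort | uniq -c | sort -rn | head -1 | awk '{print $2}')
echo "Expected field count: $EXPECTED"
# Line numbers of rows that do not match it
awk -F';' -v n="$EXPECTED" 'NF != n {print NR": "NF" fields"}' "$FILE"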
Labels:
- Apache Hive
- Apache Impala
- HDFS
07-25-2017
05:27 AM
Hello,

I am using impalad version 2.2.0-cdh5.4.5 (community edition). I have a problem that occurs intermittently: some commands take far too long to execute. They are DDL commands, not SELECT or INSERT statements:

- show create table <table name> (it took more than 15 minutes today, so I terminated it with Ctrl+C)
- create table <table name>, create table <table name> like <other table name>
- drop table <table name> (even dropping an empty table took more than 20 minutes today, so I had to terminate it too)

There is no error while the commands execute. The following lines show today's WARNING and ERROR entries in the log files; I am not sure whether they are related:

impala@hdpdnode03:/var/log/impalad$ grep 0725 impalad.ERROR
impala@hdpdnode03:/var/log/impalad$ grep 0725 impalad.WARNING
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.15.142:60725 remote=/10.160.15.142:50010]
W0623 05:35:54.195564 58466 DFSInputStream.java:657] Failed to connect to /10.160.15.142:50010 for block, add to deadNodes and continue. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.15.142:60725 remote=/10.160.15.142:50010]
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.15.142:60725 remote=/10.160.15.142:50010]
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.15.142:60725 remote=/10.160.15.142:50010]
W0624 17:34:47.654734 58470 DFSInputStream.java:657] Failed to connect to /10.160.15.142:50010 for block, add to deadNodes and continue. java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.15.142:60725 remote=/10.160.15.142:50010]
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.15.142:60725 remote=/10.160.15.142:50010]
W0725 00:03:12.909853 58553 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 01:03:15.475993 58517 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 02:03:31.348456 58544 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 03:03:24.569105 58506 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 04:03:17.290172 58533 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 05:03:10.648597 58511 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 06:03:37.333330 58555 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 07:03:17.709178 58515 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 08:03:11.393440 58544 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 09:03:14.093744 58506 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 10:03:20.556089 58523 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 11:03:23.019448 58533 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 12:03:12.509655 58519 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 13:03:14.731720 58509 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 14:03:21.690156 58502 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
W0725 15:03:16.251411 58525 HdfsScanNode.java:784] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
impala@hdpdnode03:/var/log/impalad$

Do you have any idea why these commands take so long? Thanks
Labels:
- Apache Impala
06-26-2017
03:06 AM
I have many internal customers who only know Impala/Hive, and they run lots of queries at the same time. So I have to use it, and therefore I want to use gzipped Parquet.
06-25-2017
07:56 AM
Hello,

I have CSV files and I want to convert them to gzip-compressed Parquet tables. I know the column names (headers) and the data type each column must have for the CSV files, so I created a Parquet table according to that field information. I then tried to convert the data by running set compression_codec=gzip and then an insert overwrite .... command. While the inserts ran over a whole day, I got the following errors 8 times. (I have to mask some of the data, so *** is used for masking.)

Error converting column: 15 TO BIGINT (Data is: numbers=lat
Error converting column: 18 TO INT (Data is: ***
Error converting column: 20 TO INT (Data is: ***)
Error converting column: 29 TO INT (Data is: **)
Error converting column: 35 TO BIGINT (Data is: Unspecified
Error converting column: 49 TO INT (Data is: Unspecified)

Still, lots of records were inserted into the Parquet table as compressed data. But when I compare the source table and the target Parquet table, the source has 159183859 records while the target Parquet table has 39328054, so 119855805 records are missing even though I only got 8 errors during the conversion. The target holds only about 25% of the source rows, and I don't know why this huge loss happens. I also tried a CSV --> partitioned CSV conversion and got the same result, the same number of lost records.

So I am looking for a reason why I lost this many records. Do you have any idea or explanation why so much data was lost, and why I did not get any error other than the 8 column conversion errors?

Thanks for your help,
Suluhan
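A per-partition count comparison can narrow down where the rows go missing; a minimal sketch, assuming impala-shell is available and using placeholder names (src_csv, tgt_parquet for the tables, p_day for the partition column, host03 for the impalad):

#!/usr/bin/env bash
# Sketch: compare per-partition row counts between a source and target table.
IMPALAD=host03
for tbl in src_csv tgt_parquet; do
  impala-shell -i "$IMPALAD" --quiet -B \
    -q "SELECT p_day, COUNT(*) FROM $tbl GROUP BY p_day ORDER BY p_day" \
    > "counts_${tbl}.tsv"
done
# Show partitions whose counts differ between the two tables
diff counts_src_csv.tsv counts_tgt_parquet.tsv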
Labels:
- Apache Impala
05-09-2016
08:59 AM
1 Kudo
Hello, sorry for the delayed update.

invalidate metadata
invalidate metadata tablename
refresh tablename

The commands above solved my problem. The source Parquet tables and the gzipped target tables now have the same records in their partitions. I am still getting the "split into multiple hdfs-blocks" warnings, but it looks like they have no impact on my record count issue.

BTW: the link that you provided is very good. Thanks for your response.
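In case it helps anyone else, the same sequence can be run from the shell; a minimal sketch with a placeholder impalad host and table name:

# Sketch: refresh Impala's view of a table after data was loaded outside of Impala.
impala-shell -i host03 -q "INVALIDATE METADATA"           # reload all catalog metadata
impala-shell -i host03 -q "INVALIDATE METADATA mytable"   # or only this table
impala-shell -i host03 -q "REFRESH mytable"               # pick up new data files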
05-09-2016
07:49 AM
Hello, I am sorry but I could not find the change-password section. How can I change my password? Actually, I also want to change my security question. Thanks
04-11-2016
12:17 AM
Hello,

I am trying to import Parquet tables from another Cloudera Impala installation into my Cloudera Impala:

--> I receive the Parquet tables via sftp
--> I copy all Parquet files into the proper Impala table directory, e.g. /grid1/hive/warehouse/<database>/<importedTable>, without any error/warning
--> I create the required partition structure with alter table <importedTable> add partition (..) without any error/warning
--> I run the refresh <importedTable> command without any error/warning
--> I can see the new partitions with show partitions <importedTable> without any error/warning
--> I apply the above procedure for all tables
--> When I try to access records in the table, I get the following warning: "WARNINGS: Parquet files should not be split into multiple hdfs-blocks"

I am using gzip compression on my own tables, but the imported tables have default settings. So I have another database with gzipped data, and I copy data from the imported table into the gzipped table with the following commands (both run without any error/warning):

set compression_codec=gzip
insert into <gzippedTable> partition (<part1=value1, part2=value2) select field1, field3, field4 ...... from <importedTable> where <partitioned column1=value1, partitioned column2=value2)

When I compare record counts for the partition in both the gzipped table and the imported table, there is a difference, as in the following output:

[host03:21000] > select count (*) from importedTable where logdate=20160401;
Query: select count (*) from importedTable where logdate=20160401
+-----------+
| count(*)  |
+-----------+
| 101565867 |
+-----------+
WARNINGS: Parquet files should not be split into multiple hdfs-blocks. file=hdfs://host01:8020/grid1/hive/warehouse/<database>/importedTable/partitionedColumn=value1/logdate=20160401/51464233716089fd-295e6694028850a0_1358598818_data.0.parq (1 of 94 similar)
Fetched 1 row(s) in 0.96s

[host03:21000] > select count (*) from gzippedTable where logdate=20160401;
Query: select count (*) from gzippedTable where logdate=20160401
+-----------+
| count(*)  |
+-----------+
| 123736525 |
+-----------+
Fetched 1 row(s) in 0.92s

So how can I fix the "WARNINGS: Parquet files should not be split into multiple hdfs-blocks" warning, and why am I getting different record counts after applying the above procedure? Is the record count difference related to the multiple hdfs-blocks warning?

Thanks
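Regarding the warning itself: it fires when a single .parq file spans more than one HDFS block. A minimal sketch for spotting such files under the imported table directory (the path is the placeholder from above; fsck output format may differ slightly by version):

# Sketch: list imported Parquet files that consist of more than one HDFS block,
# i.e. the files that trigger the "split into multiple hdfs-blocks" warning.
DIR='/grid1/hive/warehouse/<database>/importedTable'
hdfs fsck "$DIR" -files | grep '\.parq' | grep -v ' 1 block(s)'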
Labels:
- Apache Hive
- Apache Impala
- HDFS
01-14-2016
12:20 AM
Hello,

My colleague runs big SQL statements through the HUE interface on Impala (he only knows SQL 🙂). Sometimes we get the following error in the HUE interface. Is this error related to Impala? If yes, is there any way to fix it?

AnalysisException: Exceeded the maximum number of child expressions (10000). Expression has 12061 children: CASE WHEN (longitude BETWEEN 28.4360479702 AND 28.4480394711) AND (latitude BETW...

The error occurred while he was running about 12000 lines of SQL 🙂

Best regards
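For context on how a query reaches that limit: each WHEN branch of a CASE contributes several child expressions, so a few thousand longitude/latitude bucketing branches are enough to pass 10000. A throwaway sketch that generates such a statement (all bounds, names, and the table are made up for illustration):

# Sketch: generate a CASE expression with ~4000 WHEN branches, comparable in
# size to the 12000-line query mentioned above.
{
  echo "SELECT CASE"
  for i in $(seq 0 3999); do
    lo=$(echo "28 + $i * 0.001" | bc -l)
    hi=$(echo "28 + ($i + 1) * 0.001" | bc -l)
    echo "  WHEN (longitude BETWEEN $lo AND $hi) AND (latitude BETWEEN 40 AND 41) THEN 'cell_$i'"
  done
  echo "  ELSE 'other' END AS cell, COUNT(*) FROM some_table GROUP BY 1;"
} > giant_case.sql
wc -l giant_case.sql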
Labels:
- Apache Impala
- Cloudera Hue
10-20-2015
09:29 AM
Hello, is there any reason not to use JDBC? I could connect from SAP BO to Impala with the 32-bit Impala JDBC drivers. Thanks
09-21-2015
10:49 PM
Hello, I fixed the DB connection problem. What kind of error did you get? Did you check the logs? Regards
08-24-2015
04:51 AM
Solved. I am not sure whether this is the exact solution, but I could successfully deploy the service configuration to clients after fixing the Hive Metastore Canary database connection problem. Thanks
08-24-2015
01:02 AM
Hello,

I have a 3-node Cloudera cluster. All nodes have the following versions of the CM components:

cloudera-manager-agent.x86_64
cloudera-manager-daemons.x86_64
cloudera-manager-server.x86_64
Version : 5.4.3
Release : 1.cm543.p0.258.el6

I always get the same error for every CM cluster service. When I try to deploy the client configuration, I get the following errors:

Failed to execute command Deploy Client Configuration on service HDFS
Completed only 0/3 steps. First failure: Client configuration (id=7) on host hadoop1 (id=3) exited with 1 and expected 0.

Thanks
msuluhan
Labels:
- Cloudera Manager
- HDFS
08-21-2015
02:34 AM
Hello, I changed the directory permission to 755 and then changed its ACL, and it is working now. I will restart the whole environment and then check again. Thanks for your inputs. msuluhan
08-14-2015
05:48 AM
Hello, I already tried it :S and there is no change, same error. I just changed the owner of the files in the /var/lib/zookeeper/version-2 directory to zookeeper:zookeeper while the permission of /var/lib/zookeeper is 755, but I got the same error. Any other idea? Thanks
08-13-2015
12:04 PM
Hello,

I already disabled SELinux and also iptables. The folder's ACL looks like the following:

[root@testos1 lib]# getfacl zookeeper/
# file: zookeeper/
# owner: zookeeper
# group: zookeeper
user::rwx
group::rwx
other::rwx

By the way, I also enabled HBase from CM and got a similar error for the /var/log/hbase directory. I changed the permission to 777 and then the issue was fixed (?) and I could start the HBase service through CM.

[root@testos1 lib]# getfacl hbase
getfacl: hbase: No such file or directory
[root@testos1 lib]# cd /var/log
[root@testos1 log]# ll | grep hbase
drwxrwxrwx. 4 hbase hbase 4096 Aug 13 16:05 hbase
[root@testos1 log]# ll hbase
total 280
drwx------ 2 cloudera-scm cloudera-scm   4096 Aug 13 16:04 audit
-rw-r--r-- 1 cloudera-scm cloudera-scm 137248 Aug 13 21:39 hbase-cmf-hbase-MASTER-testos1.localdomain.log.out
-rw-r--r-- 1 cloudera-scm cloudera-scm 138631 Aug 13 21:40 hbase-cmf-hbase-REGIONSERVER-testos1.localdomain.log.out
drwxr-xr-x 2 cloudera-scm cloudera-scm   4096 Aug 13 16:04 stacks
[root@testos1 log]# getfacl hbase
# file: hbase
# owner: hbase
# group: hbase
user::rwx
group::rwx
other::rwx

I think I should not be getting these issues. Why do the hbase and zookeeper daemons/services not use their own users to write to their directories, instead of cloudera-scm? How can I fix that issue?

Best Regards
Murat
08-13-2015
06:22 AM
Hello again,

The zoo.cfg contents are the following:

[root@testos1 124-zookeeper-server]# more zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
jute.maxbuffer=4194304
dataDir=/var/lib/zookeeper
dataLogDir=/var/lib/zookeeper
clientPort=2181
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.purgeInterval=24
autopurge.snapRetainCount=5
server.1=testos1.localdomain:3181:4181
leaderServes=yes

Normally I try to use CM for all changes. I also have the VM version of Cloudera; I just checked in the VM and the snapshot file's owner is zookeeper there. I started the service through CM.

Thanks
08-13-2015
05:54 AM
1 Kudo
Hello,

First of all, thanks for your input. I changed the directory permissions with chmod -R 777 /var/lib/zookeeper and then tried again; zookeeper has started. But snapshot.0's owner is cloudera-scm. Is this normal?

[root@testos1 lib]# cd zookeeper/version-2/
[root@testos1 version-2]# ll
total 4
-rw-r--r-- 1 cloudera-scm cloudera-scm 296 Aug 13 15:40 snapshot.0
[root@testos1 version-2]#

Regarding your question, there are lots of zoo.cfg files on the server. Which one is the current configuration file that I must share with you?

[root@testos1 version-2]# find / -name zoo.cfg -exec ls -l {} \;
-rw-r--r--. 1 cloudera-scm cloudera-scm  518 Jun 25 06:11 /opt/cloudera/parcels/CDH-5.4.3-1.cdh5.4.3.p0.6/share/doc/solr-doc-4.10.3+cdh5.4.3+256/example/solr/zoo.cfg
-rw-r--r--. 1 cloudera-scm cloudera-scm  518 Jun 25 06:11 /opt/cloudera/parcels/CDH-5.4.3-1.cdh5.4.3.p0.6/share/doc/solr-doc-4.10.3+cdh5.4.3+256/example/multicore/zoo.cfg
-rw-r--r--. 1 cloudera-scm cloudera-scm  518 Jun 25 06:11 /opt/cloudera/parcels/CDH-5.4.3-1.cdh5.4.3.p0.6/etc/solr/conf.dist/zoo.cfg
-rw-r--r--. 1 cloudera-scm cloudera-scm 1269 Jun 25 06:09 /opt/cloudera/parcels/CDH-5.4.3-1.cdh5.4.3.p0.6/etc/zookeeper/conf.dist/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:40 /var/run/cloudera-scm-agent/process/123-zookeeper-server/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:07 /var/run/cloudera-scm-agent/process/118-zookeeper-init/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:10 /var/run/cloudera-scm-agent/process/120-zookeeper-init/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:10 /var/run/cloudera-scm-agent/process/121-zookeeper-server/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:04 /var/run/cloudera-scm-agent/process/117-zookeeper-server/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:40 /var/run/cloudera-scm-agent/process/122-zookeeper-init/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:50 /var/run/cloudera-scm-agent/process/124-zookeeper-server/zoo.cfg
-rw-r----- 1 cloudera-scm cloudera-scm 311 Aug 13 15:10 /var/run/cloudera-scm-agent/process/119-zookeeper-init/zoo.cfg

Thanks
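As a side note, a quick way to tell which of those zoo.cfg files the running server actually uses is to look at the live process; a sketch, assuming only what the listing above already shows (CM starts each attempt from a numbered /var/run/cloudera-scm-agent/process/NNN-zookeeper-server directory):

# Sketch: identify the zoo.cfg used by the currently running ZooKeeper server.
pgrep -f zookeeper-server | while read -r pid; do
  echo "pid $pid working dir: $(readlink -f /proc/$pid/cwd)"
done
# Or list the process directories newest-first; the top zookeeper-server entry is the latest attempt
ls -dt /var/run/cloudera-scm-agent/process/*zookeeper-server* | head -1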
08-13-2015
03:36 AM
Hello,

ZooKeeper could not start because of this error: java.io.FileNotFoundException: /var/lib/zookeeper/version-2/snapshot.0 (Permission denied). Normally the zookeeper user should be able to write to this directory:

[root@testos1 ~]# ll /var/lib | grep zoo
drwxr-xr-x. 3 zookeeper zookeeper 4096 Aug 12 18:54 zookeeper
You have new mail in /var/spool/mail/root
[root@testos1 ~]# ll /var/lib/zookeeper/
total 4
drwxr-xr-x. 2 zookeeper zookeeper 4096 Aug 13 11:31 version-2
[root@testos1 ~]# ll /var/lib/zookeeper/version-2/
total 0
[root@testos1 ~]#

My environment has 1 server and I tried to install from parcels:

cloudera-manager-agent.x86_64   5.4.3-1.cm543.p0.258.el6  @/cloudera-manager-agent-5.4.3-1.cm543.p0.258.el6.x86_64
cloudera-manager-daemons.x86_64 5.4.3-1.cm543.p0.258.el6  @/cloudera-manager-daemons-5.4.3-1.cm543.p0.258.el6.x86_64
cloudera-manager-server.x86_64  5.4.3-1.cm543.p0.258.el6  @/cloudera-manager-server-5.4.3-1.cm543.p0.258.el6.x86_64

The /var/log/zookeeper/zookeeper-cmf-zookeeper-SERVER-testos1.localdomain.log file has the following error lines:

Aug 13, 1:08:54.630 PM INFO org.apache.zookeeper.server.ZooKeeperServer Server environment:user.name=cloudera-scm
Aug 13, 1:08:54.630 PM INFO org.apache.zookeeper.server.ZooKeeperServer Server environment:user.home=/home/cloudera-scm
Aug 13, 1:08:54.630 PM INFO org.apache.zookeeper.server.ZooKeeperServer Server environment:user.dir=/var/run/cloudera-scm-agent/process/115-zookeeper-server
Aug 13, 1:08:54.631 PM DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog Opening datadir:/var/lib/zookeeper snapDir:/var/lib/zookeeper
Aug 13, 1:08:54.631 PM INFO org.apache.zookeeper.server.ZooKeeperServer tickTime set to 2000
Aug 13, 1:08:54.631 PM INFO org.apache.zookeeper.server.ZooKeeperServer minSessionTimeout set to 4000
Aug 13, 1:08:54.632 PM INFO org.apache.zookeeper.server.ZooKeeperServer maxSessionTimeout set to 60000
Aug 13, 1:08:54.654 PM INFO org.apache.zookeeper.server.NIOServerCnxnFactory binding to port 0.0.0.0/0.0.0.0:2181
Aug 13, 1:08:54.674 PM INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog Snapshotting: 0x0 to /var/lib/zookeeper/version-2/snapshot.0
Aug 13, 1:08:54.675 PM ERROR org.apache.zookeeper.server.ZooKeeperServer Severe unrecoverable error, exiting
java.io.FileNotFoundException: /var/lib/zookeeper/version-2/snapshot.0 (Permission denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    at org.apache.zookeeper.server.persistence.FileSnap.serialize(FileSnap.java:225)
    at org.apache.zookeeper.server.persistence.FileTxnSnapLog.save(FileTxnSnapLog.java:275)
    at org.apache.zookeeper.server.ZooKeeperServer.takeSnapshot(ZooKeeperServer.java:270)
    at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:265)
    at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:377)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122)
    at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:118)
    at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:91)
    at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:53)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:121)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
Aug 13, 1:08:56.903 PM INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig Reading configuration from: /var/run/cloudera-scm-agent/process/115-zookeeper-server/zoo.cfg
Aug 13, 1:08:56.918 PM ERROR org.apache.zookeeper.server.quorum.QuorumPeerConfig Invalid configuration, only one server specified (ignoring)
Aug 13, 1:08:56.919 PM INFO org.apache.zookeeper.server.DatadirCleanupManager autopurge.snapRetainCount set to 5
Aug 13, 1:08:56.920 PM INFO org.apache.zookeeper.server.DatadirCleanupManager autopurge.purgeInterval set to 24
Aug 13, 1:08:56.920 PM WARN org.apache.zookeeper.server.quorum.QuorumPeerMain Either no config or no quorum defined in config, running in standalone mode
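The log above shows the server JVM running as user.name=cloudera-scm while /var/lib/zookeeper is owned by zookeeper with mode 755, which is consistent with the Permission denied. A quick way to reproduce the write check the server is failing, just a sketch assuming sudo is available on the host:

# Sketch: confirm whether the user the ZooKeeper JVM runs as (cloudera-scm,
# per the "Server environment:user.name" log line) can write to the data dir.
sudo -u cloudera-scm touch /var/lib/zookeeper/version-2/write_test \
  && echo "cloudera-scm can write" \
  || echo "cloudera-scm cannot write (matches the Permission denied above)"
# Clean up the probe file if it was created
sudo rm -f /var/lib/zookeeper/version-2/write_test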
Labels:
- Apache Zookeeper