Member since: 06-16-2014
Posts: 40
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 9179 | 07-24-2014 08:36 PM |
08-13-2014
10:02 PM
OK, I see. By the way, my cluster is not connected to the Internet, so do you have any advice about upgrading offline?
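Since the cluster has no Internet access, one common approach is to mirror the CDH repository on a machine that does have access and serve it locally. This is only a rough sketch under that assumption; the directory /var/www/html/cdh5 and the repo file name are hypothetical, and the downloaded RPMs (or parcels, if you use parcels) still have to be copied over by hand:

```bash
# On the offline host, after copying the downloaded CDH5 RPMs into
# /var/www/html/cdh5 (hypothetical path), build repo metadata and serve it:
sudo yum install -y createrepo httpd   # assumes a local OS repo or install media is available
sudo createrepo /var/www/html/cdh5
sudo service httpd start

# Point yum at the local mirror instead of archive.cloudera.com:
cat <<'EOF' | sudo tee /etc/yum.repos.d/cloudera-cdh5-local.repo
[cloudera-cdh5-local]
name=Cloudera CDH5 (local mirror)
baseurl=http://localhost/cdh5/
enabled=1
gpgcheck=0
EOF
sudo yum clean all
```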
08-13-2014
08:25 PM
My CDH version is 4.5. Hive is healthy. Is /var/log/statestore/statestored.INFO the StateStore log? Here is the content of this file:

Log file created at: 2014/08/13 10:10:50
Running on machine: bi-hadoop-m
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0813 10:10:50.259363 3245 init.cc:131] statestored version 1.3.1-cdh4 RELEASE (build 907481bf45b248a7bb3bb077d54831a71f484e5f) Built on Wed, 30 Apr 2014 14:38:46 PST
I0813 10:10:50.273645 3245 init.cc:132] Using hostname: bi-hadoop-m
I0813 10:10:50.274845 3245 logging.cc:100] Flags (see also /varz are on debug webserver): --catalog_service_port=26000 --load_catalog_in_background=true --num_metadata_loading_threads=16 --dump_ir=false --opt_module_output= --unopt_module_output= --abort_on_config_error=true --be_port=22000 --be_principal= --enable_process_lifetime_heap_profiling=false --heap_profile_dir= --hostname=bi-hadoop-m --keytab_file= --mem_limit=80% --principal= --log_filename=statestored --exchg_node_buffer_size_bytes=10485760 --max_row_batches=0 --enable_ldap_auth=false --kerberos_reinit_interval=60 --ldap_manual_config=false --ldap_tls=false --ldap_uri= --sasl_path=/usr/lib/sasl2:/usr/lib64/sasl2:/usr/local/lib/sasl2:/usr/lib/x86_64-linux-gnu/sasl2 --rpc_cnxn_attempts=10 --rpc_cnxn_retry_interval_ms=2000 --insert_inherit_permissions=false --min_buffer_size=1024 --num_disks=0 --num_threads_per_disk=0 --read_size=8388608 --reuse_io_buffers=true --catalog_service_host=localhost --cgroup_hierarchy_path= --enable_rm=false --enable_webserver=true --llama_callback_port=28000 --llama_host=127.0.0.1 --llama_port=15000 --num_hdfs_worker_threads=16 --resource_broker_cnxn_attempts=10 --resource_broker_cnxn_retry_interval_ms=3000 --resource_broker_recv_timeout=0 --resource_broker_send_timeout=0 --staging_cgroup=impala_staging --state_store_host=localhost --state_store_subscriber_port=23000 --use_statestore=true --local_library_dir=/tmp --serialize_batch=false --status_report_interval=5 --num_threads_per_core=3 --queue_wait_timeout_ms=60000 --default_pool_max_queued=50 --default_pool_max_requests=20 --default_pool_mem_limit= --disable_pool_max_requests=false --disable_pool_mem_limits=false --fair_scheduler_allocation_path= --llama_site_path= --authorization_policy_file= --authorization_policy_provider_class=org.apache.sentry.provider.file.HadoopGroupResourceAuthorizationProvider --authorized_proxy_user_config= --load_catalog_at_startup=false --server_name= --abort_on_failed_audit_event=true --audit_event_log_dir= --be_service_threads=64 --beeswax_port=21000 --cancellation_thread_pool_size=5 --default_query_options= --fe_service_threads=64 --hs2_port=21050 --idle_query_timeout=0 --idle_session_timeout=0 --local_nodemanager_url= --log_mem_usage_interval=0 --log_query_to_file=true --max_audit_event_log_file_size=5000 --max_profile_log_file_size=5000 --max_result_cache_size=100000 --profile_log_dir= --query_log_size=25 --ssl_client_ca_certificate= --ssl_private_key= --ssl_server_certificate= --max_vcore_oversubscription_ratio=2.5 --rm_always_use_defaults=false --rm_default_cpu_vcores=2 --rm_default_memory=4G --disable_admission_control=true --require_username=false --statestore_subscriber_cnxn_attempts=10 --statestore_subscriber_cnxn_retry_interval_ms=3000 --statestore_subscriber_timeout_seconds=30 --state_store_port=24000 --statestore_heartbeat_frequency_ms=500 --statestore_max_missed_heartbeats=10 --statestore_num_heartbeat_threads=10 --statestore_suspect_heartbeats=5 --num_cores=0 --web_log_bytes=1048576 --non_impala_java_vlog=0 --periodic_counter_update_period_ms=500 --enable_webserver_doc_root=true --webserver_authentication_domain= --webserver_certificate_file= --webserver_doc_root=/opt/cloudera/parcels/IMPALA-1.3.1-1.impala1.3.1.p0.1172/lib/impala --webserver_interface= --webserver_password_file= --webserver_port=25010 --flagfile=/var/run/cloudera-scm-agent/process/1886-impala-STATESTORE/impala-conf/state_store_flags --fromenv= --tryfromenv= --undefok= --tab_completion_columns=80 --tab_completion_word= --help=false --helpfull=false --helpmatch= --helpon= --helppackage=false --helpshort=false --helpxml=false --version=false --alsologtoemail= --alsologtostderr=false --drop_log_memory=true --log_backtrace_at= --log_dir=/var/log/statestore --log_link= --log_prefix=true --logbuflevel=-1 --logbufsecs=30 --logbufvlevel=1 --logemaillevel=999 --logmailer=/bin/mail --logtostderr=false --max_log_size=200 --minloglevel=0 --stderrthreshold=2 --stop_logging_if_full_disk=false --symbolize_stacktrace=true --v=1 --vmodule=
I0813 10:10:50.275037 3245 init.cc:137] Cpu Info: Model: Intel(R) Xeon(R) CPU E5-2407 0 @ 2.20GHz Cores: 8 L1 Cache: 32.00 KB L2 Cache: 256.00 KB L3 Cache: 10.00 MB Hardware Supports: ssse3 sse4_1 sse4_2 popcnt
I0813 10:10:50.275087 3245 init.cc:138] Disk Info: Num disks 2: sda (rotational=true) dm- (rotational=true)
I0813 10:10:50.275172 3245 init.cc:139] Physical Memory: 31.39 GB
I0813 10:10:50.275213 3245 init.cc:140] OS version: Linux version 2.6.18-348.el5 (mockbuild@builder10.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)) #1 SMP Tue Jan 8 17:53:53 EST 2013
I0813 10:10:50.275233 3245 init.cc:141] Process ID: 3245
I0813 10:10:50.275317 3245 webserver.cc:155] Starting webserver on 0.0.0.0:25010
I0813 10:10:50.275346 3245 webserver.cc:169] Document root: /opt/cloudera/parcels/IMPALA-1.3.1-1.impala1.3.1.p0.1172/lib/impala
I0813 10:10:50.275583 3245 webserver.cc:235] Webserver started
I0813 10:10:50.281631 3245 thrift-server.cc:387] ThriftServer 'StatestoreService' started on port: 24000
I0813 10:10:52.497861 3382 statestore.cc:293] Creating new topic: ''catalog-update' on behalf of subscriber: 'bi-hadoop-sm:22000
I0813 10:10:52.497964 3382 statestore.cc:293] Creating new topic: ''impala-membership' on behalf of subscriber: 'bi-hadoop-sm:22000
I0813 10:10:52.497993 3382 statestore.cc:300] Registering: bi-hadoop-sm:22000
I0813 10:10:52.498136 3325 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-sm:23000
I0813 10:10:52.498955 3382 statestore.cc:331] Subscriber 'bi-hadoop-sm:22000' registered (registration id: d44b566c7d4f17e5:12d261558a25a1bf)
I0813 10:10:53.000346 3325 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-sm:22000. Size = 106.00 B
I0813 10:10:55.190691 3432 statestore.cc:300] Registering: bi-hadoop-s38:22000
I0813 10:10:55.190860 3432 statestore.cc:331] Subscriber 'bi-hadoop-s38:22000' registered (registration id: 404e299449525642:3919423961822d8f)
I0813 10:10:55.191412 3332 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s38:22000. Size = 106.00 B
I0813 10:10:55.191493 3332 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s38:23000
I0813 10:10:55.669672 3444 statestore.cc:300] Registering: bi-hadoop-s39:22000
I0813 10:10:55.669821 3444 statestore.cc:331] Subscriber 'bi-hadoop-s39:22000' registered (registration id: d049c9d286b0c0a5:90098d1a4e53bc81)
I0813 10:10:55.691418 3326 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s39:22000. Size = 214.00 B
I0813 10:10:55.691493 3326 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s39:23000
I0813 10:10:56.498275 3467 statestore.cc:300] Registering: bi-hadoop-m:22000
I0813 10:10:56.498389 3467 statestore.cc:331] Subscriber 'bi-hadoop-m:22000' registered (registration id: 2c47ab5e0b47ac69:bb1e5229cc792da1)
I0813 10:10:56.498438 3333 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-m:22000. Size = 322.00 B
I0813 10:10:56.498477 3333 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-m:23000
I0813 10:10:56.810374 3619 statestore.cc:300] Registering: bi-hadoop-s35:22000
I0813 10:10:56.810462 3619 statestore.cc:331] Subscriber 'bi-hadoop-s35:22000' registered (registration id: 3b4548042d00bf08:40a6116030a766a3)
I0813 10:10:56.810511 3330 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s35:22000. Size = 426.00 B
I0813 10:10:56.810546 3330 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s35:23000
I0813 10:10:57.740465 3683 statestore.cc:300] Registering: bi-hadoop-s33:22000
I0813 10:10:57.740603 3683 statestore.cc:331] Subscriber 'bi-hadoop-s33:22000' registered (registration id: 414c2f5002aaed82:8cab6c8740212088)
I0813 10:10:57.740674 3331 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s33:22000. Size = 534.00 B
I0813 10:10:57.740735 3331 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s33:23000
I0813 10:10:57.959748 3699 statestore.cc:300] Registering: bi-hadoop-s36:22000
I0813 10:10:57.959874 3699 statestore.cc:331] Subscriber 'bi-hadoop-s36:22000' registered (registration id: 4e49d7914f3c0ca9:eaeb5fccce2f6ebc)
I0813 10:10:57.959945 3328 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s36:22000. Size = 642.00 B
I0813 10:10:57.960011 3328 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s36:23000
I0813 10:10:58.872663 3707 statestore.cc:300] Registering: bi-hadoop-s34:22000
I0813 10:10:58.872807 3707 statestore.cc:331] Subscriber 'bi-hadoop-s34:22000' registered (registration id: 73486c443003bff4:c92cc94de366c85)
I0813 10:10:58.872871 3327 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s34:22000. Size = 750.00 B
I0813 10:10:58.872941 3327 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s34:23000
I0813 10:11:00.013226 3710 statestore.cc:300] Registering: bi-hadoop-s40:22000
I0813 10:11:00.013355 3710 statestore.cc:331] Subscriber 'bi-hadoop-s40:22000' registered (registration id: 84487823e24a4c01:e11f38a681d38a2)
I0813 10:11:00.013402 3334 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s40:22000. Size = 858.00 B
I0813 10:11:00.013456 3334 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s40:23000
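One thing worth checking for a "Waiting for catalog update from the StateStore" error: the flag dump above shows --state_store_port=24000, --catalog_service_host=localhost and --catalog_service_port=26000, so a quick sanity check is whether the statestore and catalog daemons are actually up and listening on those ports. A minimal sketch, run on the host carrying the Impala StateStore and Catalog Server roles (port numbers taken from the flags above; adjust for your layout):

```bash
# Are the Impala statestore and catalog processes running?
ps -ef | grep -e statestored -e catalogd | grep -v grep

# Are the statestore (24000) and catalog service (26000) ports listening?
# Port numbers come from the flag dump above.
netstat -lnpt | grep -E ':(24000|26000) '
```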
08-13-2014
04:09 AM
I encounter the following error when trying to use Impala: "ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore." However, Cloudera Manager shows that all Impala Daemons and the Impala StateStore Daemon are healthy. How should I configure Impala to fix this error? My CM version is 4.5.2 and my Impala version is impalad 1.3.1-cdh4.
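A hedged way to dig further is to look at the daemons' debug web pages. The /varz page is the one the Impala logs themselves refer to; 25000 is the usual impalad debug webserver port (an assumption, check your configuration), and 25010 is the statestore webserver_port on this cluster:

```bash
# Which statestore/catalog settings is the impalad actually running with?
curl -s http://localhost:25000/varz | grep -i -e state_store -e catalog

# Browse the statestore's debug pages (webserver_port=25010 here) to see
# whether this impalad shows up as a subscriber:
curl -s http://localhost:25010/
```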
Labels:
- Apache Impala
- Cloudera Manager
07-24-2014
08:36 PM
Thanks for your patient help. I finally solved this problem. It was a repository problem: both CDH4 and CDH5 repo files existed in /etc/yum.repos.d. After I removed all the CDH4 repo files, yum could find the CDH5 versions.
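For anyone who lands here with the same symptom, the cleanup amounts to removing (or disabling) the stale CDH4 repo definition and refreshing yum's metadata. A rough sketch; the file name cloudera-cdh4.repo is just a typical example, so check what actually exists in your /etc/yum.repos.d:

```bash
# See which Cloudera repo files are present
ls /etc/yum.repos.d/ | grep -i -e cloudera -e cdh

# Move the CDH4 repo definition out of the way (file name is an example),
# then refresh the metadata cache
sudo mv /etc/yum.repos.d/cloudera-cdh4.repo /root/cloudera-cdh4.repo.bak
sudo yum clean all

# zookeeper should now resolve from the cloudera-cdh5 repo
yum info zookeeper
```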
07-23-2014
10:04 PM
"/usr/lib/zookeeper" doesn't exist, so I think zookeeper has been completely uninstalled. All text from "yum info zookeeper" is shown, which means that the only available version of zookeeper is CDH4 for my yum. I think that maybe the problem. However, the CDH5 version of other components are available except zookeeper and solr.
07-23-2014
08:02 PM
It seems the problem is with my yum sources. The output of "yum info hue" is:
#######
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.btte.net
 * epel: mirrors.hust.edu.cn
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Available Packages
Name : hue
Arch : x86_64
Version : 3.6.0+cdh5.1.0+86
Release : 1.cdh5.1.0.p0.36.el5
Size : 2.3 k
Repo : cloudera-cdh5
Summary : The hue metapackage
URL : http://github.com/cloudera/hue
License : ASL 2.0
Description: Hue is a browser-based desktop interface for interacting with Hadoop.
 : It supports a file browser, job tracker interface, cluster health monitor, and more.

However, "yum info zookeeper" gives:
##############
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.btte.net
 * epel: mirrors.hust.edu.cn
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Available Packages
Name : zookeeper
Arch : x86_64
Version : 3.4.5+26
Release : 1.cdh4.7.0.p0.17.el5
Size : 3.9 M
Repo : cloudera-cdh4
Summary : A high-performance coordination service for distributed applications.
URL : http://zookeeper.apache.org/
License : APL2
Description: ZooKeeper is a centralized service for maintaining configuration information,
 : naming, providing distributed synchronization, and providing group services.
 : All of these kinds of services are used in some form or another by distributed
 : applications. Each time they are implemented there is a lot of work that goes
 : into fixing the bugs and race conditions that are inevitable. Because of the
 : difficulty of implementing these kinds of services, applications initially
 : usually skimp on them, which make them brittle in the presence of change and
 : difficult to manage. Even when done correctly, different implementations of these services lead to
 : management complexity when the applications are deployed.
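Since hue resolves from the cloudera-cdh5 repo while zookeeper resolves from cloudera-cdh4, it is worth checking which .repo files define those repos and what baseurl each points at. A small sketch (the grep pattern is only a guess at the section names; adjust it to match your files):

```bash
# Which Cloudera repos are enabled right now?
yum repolist enabled | grep -i cloudera

# Which files define them, and where do their baseurls point?
grep -H -A3 '^\[cloudera' /etc/yum.repos.d/*.repo
```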
07-23-2014
03:25 AM
Is there any way to check whether I uninstalled the components correctly after uninstalling Cloudera Manager? For example, by looking for remaining files? By the way, is it possible that this is the installer's problem? My network is under some constraints. Is it possible that the installer uses the CDH4 packages or sources when CDH5 cannot be downloaded?
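A rough way to check for leftovers from the previous install is to look for packages and directories that a clean host would not have. The paths below are just the usual CDH/Cloudera Manager locations, so treat them as assumptions for your layout:

```bash
# Any Cloudera/CDH packages still installed?
rpm -qa | grep -i -e cloudera -e cdh -e zookeeper -e solr

# Any leftover install or state directories?
ls -d /usr/lib/hadoop* /usr/lib/zookeeper /usr/lib/solr /var/lib/cloudera* /etc/cloudera* 2>/dev/null
```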
07-20-2014
07:01 PM
Here is the related WARN info from cloudera-scm-server.log:
###################
2014-07-18 17:48:35,166 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:48:36,054 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:56:13,450 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:23,098 WARN [768616541@scm-web-21:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:23,790 WARN [768616541@scm-web-21:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:28,236 WARN [768616541@scm-web-21:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:46,904 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:47,518 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
07-18-2014
02:56 AM
I uninstalled CM and CDH again following the guide I posted before, and then reinstalled CM. However, the error happened again. Here is the result of the host inspector. The version of Oozie is CDH5 now.
############################

| Component | Version | Release | CDH Version |
| --- | --- | --- | --- |
| Parquet | 1.2.5+cdh5.1.0+130 | 1.cdh5.1.0.p0.26 | CDH 5 |
| Impala | 1.4.0+cdh5.1.0+0 | 1.cdh5.1.0.p0.92 | CDH 5 |
| YARN | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5 |
| HDFS | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5 |
| hue-common | 3.6.0+cdh5.1.0+86 | 1.cdh5.1.0.p0.36 | CDH 5 |
| Sqoop2 | 1.99.3+cdh5.1.0+26 | 1.cdh5.1.0.p0.22 | CDH 5 |
| MapReduce 2 | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5 |
| HBase | 0.98.1+cdh5.1.0+64 | 1.cdh5.1.0.p0.34 | CDH 5 |
| Sqoop | 1.4.4+cdh5.1.0+55 | 1.cdh5.1.0.p0.24 | CDH 5 |
| Oozie | 4.0.0+cdh5.1.0+249 | 1.cdh5.1.0.p0.28 | CDH 5 |
| Zookeeper | 3.4.5+26 | 1.cdh4.7.0.p0.17 | CDH 4 |
| Hue | 3.6.0+cdh5.1.0+86 | 1.cdh5.1.0.p0.36 | CDH 5 |
| spark | 1.0.0+cdh5.1.0+41 | 1.cdh5.1.0.p0.27 | CDH 5 |
| MapReduce 1 | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5 |
| Pig | 0.12.0+cdh5.1.0+33 | 1.cdh5.1.0.p0.23 | CDH 5 |
| Crunch (CDH 5 only) | 0.10.0+cdh5.1.0+14 | 1.cdh5.1.0.p0.25 | CDH 5 |
| Llama (CDH 5 only) | 1.0.0+cdh5.1.0+0 | 1.cdh5.1.0.p0.25 | CDH 5 |
| HttpFS | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5 |
| Hadoop | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5 |
| Hive | 0.12.0+cdh5.1.0+369 | 1.cdh5.1.0.p0.39 | CDH 5 |
| HCatalog | 0.12.0+cdh5.1.0+369 | 1.cdh5.1.0.p0.39 | CDH 5 |
| sentry | 1.3.0+cdh5.1.0+155 | 1.cdh5.1.0.p0.59 | CDH 5 |
| Lily HBase Indexer | 1.5+cdh5.1.0+12 | 1.cdh5.1.0.p0.41 | CDH 5 |
| Solr | 4.4.0+181 | 1.cdh4.5.0.p0.14 | CDH 4 |
| Flume NG | 1.5.0+cdh5.1.0+10 | 1.cdh5.1.0.p0.26 | CDH 5 |
| Cloudera Manager Management Daemons | 5.1.0 | 1.cm510.p0.75 | Not applicable |
| Java 6 | java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) | Unavailable | Not applicable |
| Java 7 | java version "1.7.0_55" Java(TM) SE Runtime Environment (build 1.7.0_55-b13) Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) | Unavailable | Not applicable |
07-17-2014
03:45 AM
I uninstalled CM and CDH following the guide at http://cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Cloudera-Manager-Quick-Start/Cloudera-Manager-Quick-Start-Guide.html and then reinstalled CM. However, it still doesn't work.