Member since: 06-16-2014
Posts: 40
Kudos Received: 2
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4845 | 07-24-2014 08:36 PM
10-09-2014
07:16 PM
There are also hue.ini files like /var/run/cloudera-scm-agent/process/XXXX-hue-BEESWAX_SERVER/hue.ini. Maybe these are the actual configuration files for hue? But each time I restart the hue service, a new hue.ini like this is generated, and changes in files like /opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/etc/hue/hue.ini have no effect on them.
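If that is right, then my guess (an assumption on my part, not something I have confirmed in the docs) is that Cloudera Manager regenerates these per-process hue.ini files from its own database on every restart, which would explain why editing the parcel copies changes nothing; overrides would instead go into Hue's "Safety Valve" snippet in CM, which gets merged into the generated file. The snippet is ordinary hue.ini syntax, e.g.:

```ini
# Hypothetical safety-valve snippet; the section and key below are ordinary
# hue.ini syntax, shown only as an example override.
[beeswax]
# point Beeswax at an explicit Hive configuration directory
hive_conf_dir=/etc/hive/conf
```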
10-09-2014
06:43 PM
Thanks for the reply. There are a number of hue.ini files on my system, and I'm not sure which one is the configuration file in use:
/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/share/hue/desktop/conf/hue.ini
/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/etc/hue/hue.ini
/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/share/hue/desktop/conf/hue.ini
/opt/cloudera/parcels/CDH-4.7.0-1.cdh4.7.0.p0.40/etc/hue/hue.ini
/opt/cloudera/parcels/CDH-4.6.0-1.cdh4.6.0.p0.26/share/hue/desktop/conf/hue.ini
/opt/cloudera/parcels/CDH-4.6.0-1.cdh4.6.0.p0.26/etc/hue/hue.ini
Is there any way I can figure out which one Beeswax is using now?
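Something like this is what I have in mind for narrowing it down (a hedged sketch: the search roots are simply where the files showed up on my machine, and my assumption is that on a CM-managed cluster the generated copy under /var/run/cloudera-scm-agent/process is the one the running Hue actually reads):

```shell
# List every hue.ini on this box, newest first; the most recently generated
# copy is likely the one the running Hue process loaded.
candidates=$(find /opt/cloudera/parcels /var/run/cloudera-scm-agent/process \
  -name hue.ini 2>/dev/null | xargs -r ls -t 2>/dev/null)
echo "${candidates:-no hue.ini found on this machine}"
```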
Tags:
- there
10-09-2014
04:52 AM
Maybe I should give some details about my case. I'm using CDH4.7. When I download query results in Beeswax, the hue service sometimes crashes. Therefore, I'm thinking about using HiveServer2 to access hive data through JDBC. However, I'm not sure whether starting HiveServer2 would have any unwanted effect on Beeswax. Since I know little about HiveServer2, some of my ideas about it may be totally incorrect.
10-09-2014
04:40 AM
Hi, I have some questions about HiveServer and Beeswax. As far as I know, the Hive CLI accesses hive via HiveServer1, while Beeline goes through HiveServer2. I wonder how Beeswax accesses hive: using HiveServer1 or HiveServer2? Is HiveServer1 always running, or does it only start when the Hive CLI launches? If I start HiveServer2 as a service, are there any conflicts between these two servers that would lead to the failure of Beeswax?
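To make the conflict question concrete, a hedged sketch (the port numbers are my assumption about a typical setup, not from the docs): as far as I can tell, both HiveServer1 (hive --service hiveserver) and HiveServer2 default to port 10000, so one quick check before starting HiveServer2 as a service is whether anything is already bound there (and, if so, picking a different hive.server2.thrift.port).

```shell
# Check whether anything already listens on the default HiveServer port.
port=10000
if command -v netstat >/dev/null 2>&1; then
  netstat -tln 2>/dev/null | grep ":$port " || echo "nothing listening on port $port"
else
  echo "netstat not available on this machine"
fi
```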
Labels:
- Apache Hive
09-15-2014
02:34 AM
Thanks so much for resolving my long-time confusion! I understand that HAR leads to smaller metadata; however, I still do not understand why HAR can save disk space. Eight 1m files occupy eight HDFS blocks, and the disk space used is 24m. HAR combines these files into an 8m har file occupying one 8m block, but the disk space used is still 24m. Or is some kind of compression used in HAR?
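To spell out the accounting I have in mind (my assumptions, not a confirmed answer): HAR does no compression by itself, so replicated bytes on disk stay the same; what shrinks is NameNode metadata, because eight block entries collapse into one (ignoring the small HAR index files):

```shell
# Sketch of the block-entry accounting for eight 1m files, 128m blocks,
# replication factor 3.
files=8 size_mb=1 block_mb=128 repl=3
blocks_before=$files                                  # one (mostly empty) block entry per small file
blocks_after=$(( (files * size_mb + block_mb - 1) / block_mb ))  # 8m fits in one block
disk_mb=$(( files * size_mb * repl ))                 # unchanged by archiving
echo "block entries: $blocks_before -> $blocks_after, raw disk: ${disk_mb}m"
```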
09-15-2014
01:32 AM
If I use HAR to archive these 8 files, would they be placed into one HDFS block (assuming they are all less than 1m)? If that is true, I could save 7/8 of the disk space in this case.
09-15-2014
01:25 AM
Thanks for your reply. The -ls command tells me the size of the file, but what I want to know is the occupied disk space. The jar file is 3922 bytes long, but according to your first answer it actually occupies one HDFS block (128M). Is that right? Is there any way I can check the actual occupied space?
09-14-2014
10:57 PM
The HDFS block size in my system is set to 128m. Does that mean that if I put 8 files of less than 128m each into HDFS, they would occupy 3G of disk space (replication factor = 3)? When I use "hadoop fs -count ", it only shows the size of the files. How can I know the actual occupied space of an HDFS file? And what if I use HAR to archive these 8 files? Can that save some space?
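Here is the arithmetic I am asking about, as a small sketch, under the assumption (which I have not confirmed) that HDFS stores each block as an ordinary local file of its actual length, so a small file does not pad out to a full 128m block per replica:

```shell
# Raw disk usage = sum of actual file sizes * replication factor,
# independent of the 128m block size. File sizes below are made up.
repl=3
total_mb=0
for size_mb in 100 50 10 1 1 1 1 1; do   # eight files, each under 128m
  total_mb=$(( total_mb + size_mb * repl ))
done
echo "raw disk used: ${total_mb}m"
```

On the cluster itself, `hadoop fs -du` reports per-file sizes that can be multiplied by the replication factor by hand (newer releases also print the replicated size in a second column).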
Labels:
- HDFS
08-17-2014
11:59 PM
Since upgrading CM needs more testing, I downgraded impala to 1.0 instead. Using impala through impala-shell is OK now. However, the impala query component in hue is still not working. It can recognize all the databases created in hive, but always returns "No server logs for this query" for every query.
08-13-2014
10:02 PM
OK, I see. By the way, my cluster is not connected to the Internet, so do you have any advice about upgrading offline?
08-13-2014
08:25 PM
My CDH version is 4.5. Hive is healthy. Is /var/log/statestore/statestored.INFO the StateStore log? Here is the content of this file:
Log file created at: 2014/08/13 10:10:50
Running on machine: bi-hadoop-m
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0813 10:10:50.259363 3245 init.cc:131] statestored version 1.3.1-cdh4 RELEASE (build 907481bf45b248a7bb3bb077d54831a71f484e5f) Built on Wed, 30 Apr 2014 14:38:46 PST
I0813 10:10:50.273645 3245 init.cc:132] Using hostname: bi-hadoop-m
I0813 10:10:50.274845 3245 logging.cc:100] Flags (see also /varz are on debug webserver): --catalog_service_port=26000 --load_catalog_in_background=true --num_metadata_loading_threads=16 --dump_ir=false --opt_module_output= --unopt_module_output= --abort_on_config_error=true --be_port=22000 --be_principal= --enable_process_lifetime_heap_profiling=false --heap_profile_dir= --hostname=bi-hadoop-m --keytab_file= --mem_limit=80% --principal= --log_filename=statestored --exchg_node_buffer_size_bytes=10485760 --max_row_batches=0 --enable_ldap_auth=false --kerberos_reinit_interval=60 --ldap_manual_config=false --ldap_tls=false --ldap_uri= --sasl_path=/usr/lib/sasl2:/usr/lib64/sasl2:/usr/local/lib/sasl2:/usr/lib/x86_64-linux-gnu/sasl2 --rpc_cnxn_attempts=10 --rpc_cnxn_retry_interval_ms=2000 --insert_inherit_permissions=false --min_buffer_size=1024 --num_disks=0 --num_threads_per_disk=0 --read_size=8388608 --reuse_io_buffers=true --catalog_service_host=localhost --cgroup_hierarchy_path= --enable_rm=false --enable_webserver=true --llama_callback_port=28000 --llama_host=127.0.0.1 --llama_port=15000 --num_hdfs_worker_threads=16 --resource_broker_cnxn_attempts=10 --resource_broker_cnxn_retry_interval_ms=3000 --resource_broker_recv_timeout=0 --resource_broker_send_timeout=0 --staging_cgroup=impala_staging --state_store_host=localhost --state_store_subscriber_port=23000 --use_statestore=true --local_library_dir=/tmp --serialize_batch=false --status_report_interval=5
--num_threads_per_core=3 --queue_wait_timeout_ms=60000 --default_pool_max_queued=50 --default_pool_max_requests=20 --default_pool_mem_limit= --disable_pool_max_requests=false --disable_pool_mem_limits=false --fair_scheduler_allocation_path= --llama_site_path= --authorization_policy_file= --authorization_policy_provider_class=org.apache.sentry.provider.file.HadoopGroupResourceAuthorizationProvider --authorized_proxy_user_config= --load_catalog_at_startup=false --server_name= --abort_on_failed_audit_event=true --audit_event_log_dir= --be_service_threads=64 --beeswax_port=21000 --cancellation_thread_pool_size=5 --default_query_options= --fe_service_threads=64 --hs2_port=21050 --idle_query_timeout=0 --idle_session_timeout=0 --local_nodemanager_url= --log_mem_usage_interval=0 --log_query_to_file=true --max_audit_event_log_file_size=5000 --max_profile_log_file_size=5000 --max_result_cache_size=100000 --profile_log_dir= --query_log_size=25 --ssl_client_ca_certificate= --ssl_private_key= --ssl_server_certificate= --max_vcore_oversubscription_ratio=2.5 --rm_always_use_defaults=false --rm_default_cpu_vcores=2 --rm_default_memory=4G --disable_admission_control=true --require_username=false --statestore_subscriber_cnxn_attempts=10 --statestore_subscriber_cnxn_retry_interval_ms=3000 --statestore_subscriber_timeout_seconds=30 --state_store_port=24000 --statestore_heartbeat_frequency_ms=500 --statestore_max_missed_heartbeats=10 --statestore_num_heartbeat_threads=10 --statestore_suspect_heartbeats=5 --num_cores=0 --web_log_bytes=1048576 --non_impala_java_vlog=0 --periodic_counter_update_period_ms=500 --enable_webserver_doc_root=true --webserver_authentication_domain= --webserver_certificate_file= --webserver_doc_root=/opt/cloudera/parcels/IMPALA-1.3.1-1.impala1.3.1.p0.1172/lib/impala --webserver_interface= --webserver_password_file= --webserver_port=25010 --flagfile=/var/run/cloudera-scm-agent/process/1886-impala-STATESTORE/impala-conf/state_store_flags --fromenv= --tryfromenv= 
--undefok= --tab_completion_columns=80 --tab_completion_word= --help=false --helpfull=false --helpmatch= --helpon= --helppackage=false --helpshort=false --helpxml=false --version=false --alsologtoemail= --alsologtostderr=false --drop_log_memory=true --log_backtrace_at= --log_dir=/var/log/statestore --log_link= --log_prefix=true --logbuflevel=-1 --logbufsecs=30 --logbufvlevel=1 --logemaillevel=999 --logmailer=/bin/mail --logtostderr=false --max_log_size=200 --minloglevel=0 --stderrthreshold=2 --stop_logging_if_full_disk=false --symbolize_stacktrace=true --v=1 --vmodule= I0813 10:10:50.275037 3245 init.cc:137] Cpu Info: Model: Intel(R) Xeon(R) CPU E5-2407 0 @ 2.20GHz Cores: 8 L1 Cache: 32.00 KB L2 Cache: 256.00 KB L3 Cache: 10.00 MB Hardware Supports: ssse3 sse4_1 sse4_2 popcnt I0813 10:10:50.275087 3245 init.cc:138] Disk Info: Num disks 2: sda (rotational=true) dm- (rotational=true) I0813 10:10:50.275172 3245 init.cc:139] Physical Memory: 31.39 GB I0813 10:10:50.275213 3245 init.cc:140] OS version: Linux version 2.6.18-348.el5 (mockbuild@builder10.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-54)) #1 SMP Tue Jan 8 17:53:53 EST 2013 I0813 10:10:50.275233 3245 init.cc:141] Process ID: 3245 I0813 10:10:50.275317 3245 webserver.cc:155] Starting webserver on 0.0.0.0:25010 I0813 10:10:50.275346 3245 webserver.cc:169] Document root: /opt/cloudera/parcels/IMPALA-1.3.1-1.impala1.3.1.p0.1172/lib/impala I0813 10:10:50.275583 3245 webserver.cc:235] Webserver started I0813 10:10:50.281631 3245 thrift-server.cc:387] ThriftServer 'StatestoreService' started on port: 24000 I0813 10:10:52.497861 3382 statestore.cc:293] Creating new topic: ''catalog-update' on behalf of subscriber: 'bi-hadoop-sm:22000 I0813 10:10:52.497964 3382 statestore.cc:293] Creating new topic: ''impala-membership' on behalf of subscriber: 'bi-hadoop-sm:22000 I0813 10:10:52.497993 3382 statestore.cc:300] Registering: bi-hadoop-sm:22000 I0813 10:10:52.498136 3325 client-cache.cc:94] CreateClient(): 
creating new client for bi-hadoop-sm:23000 I0813 10:10:52.498955 3382 statestore.cc:331] Subscriber 'bi-hadoop-sm:22000' registered (registration id: d44b566c7d4f17e5:12d261558a25a1bf) I0813 10:10:53.000346 3325 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-sm:22000. Size = 106.00 B I0813 10:10:55.190691 3432 statestore.cc:300] Registering: bi-hadoop-s38:22000 I0813 10:10:55.190860 3432 statestore.cc:331] Subscriber 'bi-hadoop-s38:22000' registered (registration id: 404e299449525642:3919423961822d8f) I0813 10:10:55.191412 3332 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s38:22000. Size = 106.00 B I0813 10:10:55.191493 3332 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s38:23000 I0813 10:10:55.669672 3444 statestore.cc:300] Registering: bi-hadoop-s39:22000 I0813 10:10:55.669821 3444 statestore.cc:331] Subscriber 'bi-hadoop-s39:22000' registered (registration id: d049c9d286b0c0a5:90098d1a4e53bc81) I0813 10:10:55.691418 3326 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s39:22000. Size = 214.00 B I0813 10:10:55.691493 3326 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s39:23000 I0813 10:10:56.498275 3467 statestore.cc:300] Registering: bi-hadoop-m:22000 I0813 10:10:56.498389 3467 statestore.cc:331] Subscriber 'bi-hadoop-m:22000' registered (registration id: 2c47ab5e0b47ac69:bb1e5229cc792da1) I0813 10:10:56.498438 3333 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-m:22000. 
Size = 322.00 B I0813 10:10:56.498477 3333 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-m:23000 I0813 10:10:56.810374 3619 statestore.cc:300] Registering: bi-hadoop-s35:22000 I0813 10:10:56.810462 3619 statestore.cc:331] Subscriber 'bi-hadoop-s35:22000' registered (registration id: 3b4548042d00bf08:40a6116030a766a3) I0813 10:10:56.810511 3330 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s35:22000. Size = 426.00 B I0813 10:10:56.810546 3330 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s35:23000 I0813 10:10:57.740465 3683 statestore.cc:300] Registering: bi-hadoop-s33:22000 I0813 10:10:57.740603 3683 statestore.cc:331] Subscriber 'bi-hadoop-s33:22000' registered (registration id: 414c2f5002aaed82:8cab6c8740212088) I0813 10:10:57.740674 3331 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s33:22000. Size = 534.00 B I0813 10:10:57.740735 3331 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s33:23000 I0813 10:10:57.959748 3699 statestore.cc:300] Registering: bi-hadoop-s36:22000 I0813 10:10:57.959874 3699 statestore.cc:331] Subscriber 'bi-hadoop-s36:22000' registered (registration id: 4e49d7914f3c0ca9:eaeb5fccce2f6ebc) I0813 10:10:57.959945 3328 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s36:22000. Size = 642.00 B I0813 10:10:57.960011 3328 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s36:23000 I0813 10:10:58.872663 3707 statestore.cc:300] Registering: bi-hadoop-s34:22000 I0813 10:10:58.872807 3707 statestore.cc:331] Subscriber 'bi-hadoop-s34:22000' registered (registration id: 73486c443003bff4:c92cc94de366c85) I0813 10:10:58.872871 3327 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s34:22000. 
Size = 750.00 B I0813 10:10:58.872941 3327 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s34:23000 I0813 10:11:00.013226 3710 statestore.cc:300] Registering: bi-hadoop-s40:22000 I0813 10:11:00.013355 3710 statestore.cc:331] Subscriber 'bi-hadoop-s40:22000' registered (registration id: 84487823e24a4c01:e11f38a681d38a2) I0813 10:11:00.013402 3334 statestore.cc:458] Preparing initial impala-membership topic update for bi-hadoop-s40:22000. Size = 858.00 B I0813 10:11:00.013456 3334 client-cache.cc:94] CreateClient(): creating new client for bi-hadoop-s40:23000
08-13-2014
04:09 AM
I encounter the following error when trying to use Impala: ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore. However, Cloudera Manager shows that all Impala Daemons and the Impala StateStore Daemon are healthy. How should I configure Impala to fix this error? My CM version is 4.5.2 and my Impala version is impalad version 1.3.1-cdh4.
Tags:
- impala
Labels:
- Apache Impala
- Cloudera Manager
07-24-2014
08:36 PM
Thanks for your patient help. I finally solved this problem. It was a repository problem: both CDH4 and CDH5 repo files existed in /etc/yum.repos.d. After I removed all the CDH4 repo files, yum could find the CDH5 version.
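For anyone hitting the same thing, a sketch of the cleanup that worked for me (the repo filenames are from my machine and may differ, so the glob below is an assumption):

```shell
# Move the CDH4 repo files out of the way so yum only resolves against the
# CDH5 repo, keeping a backup instead of deleting them outright.
backup=/tmp/cdh4-repo-backup
mkdir -p "$backup"
for f in /etc/yum.repos.d/*cdh4*.repo; do
  [ -e "$f" ] && mv "$f" "$backup/"
done
# yum clean all && yum info zookeeper   # should now come from the cdh5 repo
echo "moved CDH4 repo files (if any) into $backup"
```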
07-23-2014
10:04 PM
"/usr/lib/zookeeper" doesn't exist, so I think zookeeper has been completely uninstalled. The full output of "yum info zookeeper" is shown, which means the only version of zookeeper available to my yum is the CDH4 one. I think that may be the problem. However, the CDH5 versions of the other components are available, except for zookeeper and solr.
07-23-2014
08:02 PM
It seems the problem is with yum's sources. The output of "yum info hue" is:
#######
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.btte.net
 * epel: mirrors.hust.edu.cn
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Available Packages
Name : hue
Arch : x86_64
Version : 3.6.0+cdh5.1.0+86
Release : 1.cdh5.1.0.p0.36.el5
Size : 2.3 k
Repo : cloudera-cdh5
Summary : The hue metapackage
URL : http://github.com/cloudera/hue
License : ASL 2.0
Description: Hue is a browser-based desktop interface for interacting with Hadoop.
 : It supports a file browser, job tracker interface, cluster health monitor, and more.
However, "yum info zookeeper" is:
##############
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.btte.net
 * epel: mirrors.hust.edu.cn
 * extras: mirrors.btte.net
 * updates: mirrors.btte.net
Available Packages
Name : zookeeper
Arch : x86_64
Version : 3.4.5+26
Release : 1.cdh4.7.0.p0.17.el5
Size : 3.9 M
Repo : cloudera-cdh4
Summary : A high-performance coordination service for distributed applications.
URL : http://zookeeper.apache.org/
License : APL2
Description: ZooKeeper is a centralized service for maintaining configuration information,
 : naming, providing distributed synchronization, and providing group services.
 : All of these kinds of services are used in some form or another by distributed
 : applications. Each time they are implemented there is a lot of work that goes
 : into fixing the bugs and race conditions that are inevitable. Because of the
 : difficulty of implementing these kinds of services, applications initially
 : usually skimp on them ,which make them brittle in the presence of change and
 : difficult to manage. Even when done correctly, different implementations of these services lead to
 : management complexity when the applications are deployed.
07-23-2014
03:25 AM
Is there any way I can check whether I uninstalled the components correctly after uninstalling Cloudera Manager? For example, by checking for remaining files? By the way, is it possible that this is the installer's problem? My network is under some constraints; could the installer fall back to the CDH4 package or source when CDH5 cannot be downloaded?
07-20-2014
07:01 PM
Here is the related WARN info in cloudera-scm-server.log:
###################
2014-07-18 17:48:35,166 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:48:36,054 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:56:13,450 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:23,098 WARN [768616541@scm-web-21:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:23,790 WARN [768616541@scm-web-21:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:28,236 WARN [768616541@scm-web-21:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:46,904 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
2014-07-18 17:57:47,518 WARN [1836711262@scm-web-14:service.HostUtils@158] Host DbHost{id=3, hostId=1792acd5-c10f-4ae0-b46a-eb1fc1662102, hostName=master} (derived version MIXED) not eligible for cluster DbCluster{id=2, name=cluster}
07-18-2014
02:56 AM
I uninstalled CM and CDH again following the guide I posted before, and then reinstalled CM. However, the error happened again. Here is the result of the host inspector. The version of oozie is CDH5 now.
############################
Component | Version | Release | CDH Version
---|---|---|---
Parquet | 1.2.5+cdh5.1.0+130 | 1.cdh5.1.0.p0.26 | CDH 5
Impala | 1.4.0+cdh5.1.0+0 | 1.cdh5.1.0.p0.92 | CDH 5
YARN | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5
HDFS | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5
hue-common | 3.6.0+cdh5.1.0+86 | 1.cdh5.1.0.p0.36 | CDH 5
Sqoop2 | 1.99.3+cdh5.1.0+26 | 1.cdh5.1.0.p0.22 | CDH 5
MapReduce 2 | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5
HBase | 0.98.1+cdh5.1.0+64 | 1.cdh5.1.0.p0.34 | CDH 5
Sqoop | 1.4.4+cdh5.1.0+55 | 1.cdh5.1.0.p0.24 | CDH 5
Oozie | 4.0.0+cdh5.1.0+249 | 1.cdh5.1.0.p0.28 | CDH 5
Zookeeper | 3.4.5+26 | 1.cdh4.7.0.p0.17 | CDH 4
Hue | 3.6.0+cdh5.1.0+86 | 1.cdh5.1.0.p0.36 | CDH 5
spark | 1.0.0+cdh5.1.0+41 | 1.cdh5.1.0.p0.27 | CDH 5
MapReduce 1 | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5
Pig | 0.12.0+cdh5.1.0+33 | 1.cdh5.1.0.p0.23 | CDH 5
Crunch (CDH 5 only) | 0.10.0+cdh5.1.0+14 | 1.cdh5.1.0.p0.25 | CDH 5
Llama (CDH 5 only) | 1.0.0+cdh5.1.0+0 | 1.cdh5.1.0.p0.25 | CDH 5
HttpFS | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5
Hadoop | 2.3.0+cdh5.1.0+795 | 1.cdh5.1.0.p0.58 | CDH 5
Hive | 0.12.0+cdh5.1.0+369 | 1.cdh5.1.0.p0.39 | CDH 5
HCatalog | 0.12.0+cdh5.1.0+369 | 1.cdh5.1.0.p0.39 | CDH 5
sentry | 1.3.0+cdh5.1.0+155 | 1.cdh5.1.0.p0.59 | CDH 5
Lily HBase Indexer | 1.5+cdh5.1.0+12 | 1.cdh5.1.0.p0.41 | CDH 5
Solr | 4.4.0+181 | 1.cdh4.5.0.p0.14 | CDH 4
Flume NG | 1.5.0+cdh5.1.0+10 | 1.cdh5.1.0.p0.26 | CDH 5
Cloudera Manager Management Daemons | 5.1.0 | 1.cm510.p0.75 | Not applicable
Java 6 | java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) | Unavailable | Not applicable
Java 7 | java version "1.7.0_55" Java(TM) SE Runtime Environment (build 1.7.0_55-b13) Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) | Unavailable | Not applicable
07-17-2014
03:45 AM
I uninstalled CM and CDH following the guide http://cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Cloudera-Manager-Quick-Start/Cloudera-Manager-Quick-Start-Guide.html and then reinstalled CM. However, it still doesn't work.
07-16-2014
10:29 PM
Here is the result of the inspector. Should I reinstall CDH?
######################################
Component | Version | Release | CDH Version
---|---|---|---
Parquet | 1.2.5+cdh5.0.3+97 | 1.cdh5.0.3.p0.22 | CDH 5
Impala | 1.3.1+cdh5.0.3+0 | 1.cdh5.0.3.p0.24 | CDH 5
YARN | 2.3.0+cdh5.0.3+597 | 1.cdh5.0.3.p0.30 | CDH 5
HDFS | 2.3.0+cdh5.0.3+597 | 1.cdh5.0.3.p0.30 | CDH 5
hue-common | 3.5.0+cdh5.0.3+381 | 1.cdh5.0.3.p0.18 | CDH 5
Sqoop2 | 1.99.3+cdh5.0.3+32 | 1.cdh5.0.3.p0.14 | CDH 5
MapReduce 2 | 2.3.0+cdh5.0.3+597 | 1.cdh5.0.3.p0.30 | CDH 5
HBase | 0.96.1.1+cdh5.0.3+78 | 1.cdh5.0.3.p0.22 | CDH 5
Sqoop | 1.4.4+cdh5.0.3+50 | 1.cdh5.0.3.p0.14 | CDH 5
Oozie | 4.0.0+cdh5.0.3+185 | 1.cdh5.0.3.p0.17 | CDH 5
Zookeeper | 3.4.5+26 | 1.cdh4.7.0.p0.17 | CDH 4
Hue | 3.5.0+cdh5.0.3+381 | 1.cdh5.0.3.p0.18 | CDH 5
Lily HBase Indexer | 1.3+26 | 1.cdh4.5.0.p0.14 | CDH 4
MapReduce 1 | 2.3.0+cdh5.0.3+597 | 1.cdh5.0.3.p0.30 | CDH 5
Pig | 0.12.0+cdh5.0.3+36 | 1.cdh5.0.3.p0.18 | CDH 5
Crunch (CDH 5 only) | 0.9.0+cdh5.0.3+27 | 1.cdh5.0.3.p0.20 | CDH 5
Llama (CDH 5 only) | 1.0.0+cdh5.0.3+0 | 1.cdh5.0.3.p0.23 | CDH 5
HttpFS | 2.3.0+cdh5.0.3+597 | 1.cdh5.0.3.p0.30 | CDH 5
Hadoop | 2.3.0+cdh5.0.3+597 | 1.cdh5.0.3.p0.30 | CDH 5
Hive | 0.12.0+cdh5.0.3+322 | 1.cdh5.0.3.p0.22 | CDH 5
HCatalog | 0.12.0+cdh5.0.3+322 | 1.cdh5.0.3.p0.22 | CDH 5
spark | 0.9.0+cdh5.0.3+38 | 1.cdh5.0.3.p0.21 | CDH 5
Solr | 4.4.0+181 | 1.cdh4.5.0.p0.14 | CDH 4
Flume NG | 1.4.0+98 | 1.cdh4.7.0.p0.17 | CDH 4
Cloudera Manager Management Daemons | 5.0.2 | 1.cm502.p0.297 | Not applicable
Java 6 | java version "1.6.0_31" Java(TM) SE Runtime Environment (build 1.6.0_31-b04) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) | Unavailable | Not applicable
Java 7 | java version "1.7.0_45" Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) | Unavailable | Not applicable
Cloudera Manager Agent | 5.0.2 | 1.cm502.p0.297.el5 | Not applicable
07-16-2014
07:21 PM
I'm installing Cloudera Manager 5.0.2 and encountered errors when installing services.
#########Error Info#############
A server error has occurred. Send the following information to Cloudera.
Path: http://192.168.28.40:7180/cmf/clusters/1/express-add-services/review
Version: Cloudera Express 5.0.2 (#297 built by jenkins on 20140606-2221 git: 80907df78ba6b50c21a598f0caff8b00685d5961)
java.lang.IllegalArgumentException:Host 093bced7-b29f-4716-8a60-8f209ac4d8d6 may not join service 'oozie': Host software version incompatible with service version at OperationsManagerImpl.java line 760 in com.cloudera.server.cmf.components.OperationsManagerImpl createRole()
Stack Trace:
OperationsManagerImpl.java line 760 in com.cloudera.server.cmf.components.OperationsManagerImpl createRole() RulesCluster.java line 586 in com.cloudera.server.cmf.cluster.RulesCluster createAndConfigureServices() ExpressAddServicesWizardController.java line 585 in com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController buildCustomCluster() ExpressAddServicesWizardController.java line 521 in com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController buildCluster() ExpressAddServicesWizardController.java line 492 in com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController buildAccs() ExpressAddServicesWizardController.java line 433 in com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController handleReview() ExpressAddServicesWizardController.java line 405 in com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController renderReviewStep() <generated> line -1 in com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController$$FastClassByCGLIB$$71adc282 invoke() MethodProxy.java line 191 in net.sf.cglib.proxy.MethodProxy invoke() Cglib2AopProxy.java line 688 in org.springframework.aop.framework.Cglib2AopProxy$CglibMethodInvocation invokeJoinpoint() ReflectiveMethodInvocation.java line 150 in
org.springframework.aop.framework.ReflectiveMethodInvocation proceed() MethodSecurityInterceptor.java line 61 in org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor invoke() ReflectiveMethodInvocation.java line 172 in org.springframework.aop.framework.ReflectiveMethodInvocation proceed() Cglib2AopProxy.java line 621 in org.springframework.aop.framework.Cglib2AopProxy$DynamicAdvisedInterceptor intercept() <generated> line -1 in com.cloudera.server.web.cmf.wizard.express.ExpressAddServicesWizardController$$EnhancerByCGLIB$$26be3314 renderReviewStep() NativeMethodAccessorImpl.java line -2 in sun.reflect.NativeMethodAccessorImpl invoke0() NativeMethodAccessorImpl.java line 57 in sun.reflect.NativeMethodAccessorImpl invoke() DelegatingMethodAccessorImpl.java line 43 in sun.reflect.DelegatingMethodAccessorImpl invoke() Method.java line 606 in java.lang.reflect.Method invoke() HandlerMethodInvoker.java line 176 in org.springframework.web.bind.annotation.support.HandlerMethodInvoker invokeHandlerMethod() AnnotationMethodHandlerAdapter.java line 436 in org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter invokeHandlerMethod() AnnotationMethodHandlerAdapter.java line 424 in org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter handle() DispatcherServlet.java line 790 in org.springframework.web.servlet.DispatcherServlet doDispatch() DispatcherServlet.java line 719 in org.springframework.web.servlet.DispatcherServlet doService() FrameworkServlet.java line 669 in org.springframework.web.servlet.FrameworkServlet processRequest() FrameworkServlet.java line 574 in org.springframework.web.servlet.FrameworkServlet doGet() HttpServlet.java line 575 in javax.servlet.http.HttpServlet service() HttpServlet.java line 668 in javax.servlet.http.HttpServlet service() ServletHolder.java line 511 in org.mortbay.jetty.servlet.ServletHolder handle() ServletHandler.java line 1221 in 
org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() UserAgentFilter.java line 78 in org.mortbay.servlet.UserAgentFilter doFilter() GzipFilter.java line 131 in org.mortbay.servlet.GzipFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() JAMonServletFilter.java line 48 in com.jamonapi.http.JAMonServletFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() JavaMelodyFacade.java line 109 in com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() FilterChainProxy.java line 311 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() FilterSecurityInterceptor.java line 116 in org.springframework.security.web.access.intercept.FilterSecurityInterceptor invoke() FilterSecurityInterceptor.java line 83 in org.springframework.security.web.access.intercept.FilterSecurityInterceptor doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() ExceptionTranslationFilter.java line 113 in org.springframework.security.web.access.ExceptionTranslationFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() SessionManagementFilter.java line 101 in org.springframework.security.web.session.SessionManagementFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() AnonymousAuthenticationFilter.java line 113 in org.springframework.security.web.authentication.AnonymousAuthenticationFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() RememberMeAuthenticationFilter.java line 146 in 
org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() SecurityContextHolderAwareRequestFilter.java line 54 in org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() RequestCacheAwareFilter.java line 45 in org.springframework.security.web.savedrequest.RequestCacheAwareFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() AbstractAuthenticationProcessingFilter.java line 182 in org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() LogoutFilter.java line 105 in org.springframework.security.web.authentication.logout.LogoutFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() SecurityContextPersistenceFilter.java line 87 in org.springframework.security.web.context.SecurityContextPersistenceFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() ConcurrentSessionFilter.java line 125 in org.springframework.security.web.session.ConcurrentSessionFilter doFilter() FilterChainProxy.java line 323 in org.springframework.security.web.FilterChainProxy$VirtualFilterChain doFilter() FilterChainProxy.java line 173 in org.springframework.security.web.FilterChainProxy doFilter() DelegatingFilterProxy.java line 237 in org.springframework.web.filter.DelegatingFilterProxy invokeDelegate() DelegatingFilterProxy.java line 167 in org.springframework.web.filter.DelegatingFilterProxy doFilter() 
ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() CharacterEncodingFilter.java line 88 in org.springframework.web.filter.CharacterEncodingFilter doFilterInternal() OncePerRequestFilter.java line 76 in org.springframework.web.filter.OncePerRequestFilter doFilter() ServletHandler.java line 1212 in org.mortbay.jetty.servlet.ServletHandler$CachedChain doFilter() ServletHandler.java line 399 in org.mortbay.jetty.servlet.ServletHandler handle() SecurityHandler.java line 216 in org.mortbay.jetty.security.SecurityHandler handle() SessionHandler.java line 182 in org.mortbay.jetty.servlet.SessionHandler handle() SecurityHandler.java line 216 in org.mortbay.jetty.security.SecurityHandler handle() ContextHandler.java line 766 in org.mortbay.jetty.handler.ContextHandler handle() WebAppContext.java line 450 in org.mortbay.jetty.webapp.WebAppContext handle() HandlerWrapper.java line 152 in org.mortbay.jetty.handler.HandlerWrapper handle() StatisticsHandler.java line 53 in org.mortbay.jetty.handler.StatisticsHandler handle() HandlerWrapper.java line 152 in org.mortbay.jetty.handler.HandlerWrapper handle() Server.java line 326 in org.mortbay.jetty.Server handle() HttpConnection.java line 542 in org.mortbay.jetty.HttpConnection handleRequest() HttpConnection.java line 928 in org.mortbay.jetty.HttpConnection$RequestHandler headerComplete() HttpParser.java line 549 in org.mortbay.jetty.HttpParser parseNext() HttpParser.java line 212 in org.mortbay.jetty.HttpParser parseAvailable() HttpConnection.java line 404 in org.mortbay.jetty.HttpConnection handle() SelectChannelEndPoint.java line 410 in org.mortbay.io.nio.SelectChannelEndPoint run() QueuedThreadPool.java line 582 in org.mortbay.thread.QueuedThreadPool$PoolThread run()
Labels:
- Apache Oozie
- Cloudera Manager
- Security
07-14-2014
11:18 PM
Yes, I'm using Cloudera Manager. When I say "hue", I actually mean Beeswax in hue. Sorry for the imprecise wording.
07-14-2014
10:35 PM
The parameters in "/etc/hive/conf/hive-site.xml" work in hive when I use the CLI, but they have nothing to do with hue. Does hue have its own configuration files?
07-14-2014
10:32 PM
It's hue 2.5.0. I tried this on a one-machine test system, and I even restarted the hue service. It just doesn't work.
07-14-2014
07:53 PM
I want to set "hive.mapred.mode" to "strict" for hue. In the file "/etc/hive/conf/hive-site.xml" I can set hive parameters, but it seems that this file does not affect hue. Where can I change hue's default settings?
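For concreteness, the fragment I am trying to inject is plain hive-site.xml syntax; my understanding (unconfirmed) is that on a CM-managed cluster it has to go in through Cloudera Manager's configuration (e.g. a safety-valve snippet) rather than the local /etc/hive/conf file, because CM generates its own hive-site.xml for each process:

```xml
<!-- Example property block, ordinary hive-site.xml syntax -->
<property>
  <name>hive.mapred.mode</name>
  <value>strict</value>
</property>
```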
Labels:
- Apache Hive
- Cloudera Hue
07-13-2014
08:21 PM
For native hive, there is a .hiverc file which we can use to set hive parameters easily. Is there any way to do the same thing in the cloudera version?
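For reference, by ".hiverc" I mean a file like the following in the user's home directory, which the plain Hive CLI executes at startup (the two SET lines are only example parameters):

```sql
-- ~/.hiverc: run by the Hive CLI before the interactive session starts
SET hive.mapred.mode=strict;
SET hive.cli.print.current.db=true;
```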
Labels:
- Apache Hive
06-20-2014
06:59 AM
1 Kudo
@smark, thanks so much for your suggestion! After adjusting those parameters and restarting the Cloudera Manager service, the size of that directory came down to 1G.
06-20-2014
03:23 AM
Finally I found the solution in the cloudera documentation! I ran "psql -U smon -p7432" and entered the console. In the large directory /var/lib/cloudera-scm-server-db/data/base/16387 there are 872 files. The largest files are 24M each, and there are about 150 of them. Here are the relnames for these oids:
relname | oid
--------------------------+-------
cmon_ll_dp_2014_06_20_11 | 37437
cmon_ll_dp_2014_06_20_10 | 37415
cmon_ll_dp_2014_06_20_09 | 37393
Any idea what these are? Thanks so much for your help!
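A hedged sketch of a shortcut (it assumes the embedded CM PostgreSQL is reachable exactly as above with psql -U smon -p7432): instead of matching oids to file names by hand, ask Postgres to map the big relations back to names and sizes:

```shell
# Query pg_class for the largest relations; pg_total_relation_size accepts the
# oid column directly. Guarded so it is a no-op on a box without psql.
query="SELECT relname, oid, pg_total_relation_size(oid) AS bytes
         FROM pg_class ORDER BY bytes DESC LIMIT 10;"
if command -v psql >/dev/null 2>&1; then
  psql -U smon -p 7432 -c "$query" || echo "could not connect from this shell"
else
  echo "psql not installed on this machine"
fi
```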