Member since: 11-06-2016
Posts: 42
Kudos Received: 25
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4804 | 05-17-2017 01:38 PM
| 4452 | 02-07-2017 01:06 PM
| 1188 | 03-08-2016 07:25 PM
07-06-2017
12:56 PM
Hue prevents execution of the above query from both Impala and Hive. The alert on the right provides no message, and neither do the logs (runcpserver.log, access.log, or server.log). A simpler query works on both Impala and Beeswax. I enabled "Enable Django Debug Mode" to capture logs in debug mode, but that was not much help. I restarted Hue, but the same error persists. I am using CDH 5.8.4 and the Hue 3.9 that comes bundled with it. I tried a different browser, but the same issue shows up there as well (Chrome and Mozilla). Below is the output of runcpserver.log: [06/Jul/2017 13:26:26 ] settings INFO Welcome to Hue 3.9.0 [06/Jul/2017 13:26:27 ] settings DEBUG DESKTOP_DB_TEST_NAME SET: /opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/hue/desktop/desktop-test.db [06/Jul/2017 13:26:27 ] settings DEBUG DESKTOP_DB_TEST_USER SET: hue_test [06/Jul/2017 12:26:28 -0700] __init__ WARNING Couldn't import snappy. Support for snappy compression disabled. [06/Jul/2017 12:26:29 -0700] settings INFO Welcome to Hue 3.9.0 [06/Jul/2017 12:26:29 -0700] appmanager DEBUG Loaded Desktop Libraries: aws, hadoop, liboauth, liboozie, libopenid, librdbms, libsaml, libsentry, libsolr, libzookeeper [06/Jul/2017 12:26:29 -0700] appmanager DEBUG Loaded Desktop Applications: about, beeswax, filebrowser, hbase, help, impala, jobbrowser, jobsub, metastore, oozie, pig, proxy, rdbms, search, security, spark, sqoop, useradmin, zookeeper, indexer, metadata, notebook [06/Jul/2017 12:26:29 -0700] settings DEBUG Installed Django modules: DesktopModule(aws: aws),DesktopModule(hadoop: hadoop),DesktopModule(liboauth: liboauth),DesktopModule(liboozie: liboozie),DesktopModule(libopenid: libopenid),DesktopModule(librdbms: librdbms),DesktopModule(libsaml: libsaml),DesktopModule(libsentry: libsentry),DesktopModule(libsolr: libsolr),DesktopModule(libzookeeper: libzookeeper),DesktopModule(Hue: desktop),DesktopModule(About: about),DesktopModule(Hive: beeswax),DesktopModule(File Browser: filebrowser),DesktopModule(HBase Browser: 
hbase),DesktopModule(Help: help),DesktopModule(Impala: impala),DesktopModule(Job Browser: jobbrowser),DesktopModule(Job Designer: jobsub),DesktopModule(Metastore Manager: metastore),DesktopModule(Oozie Editor/Dashboard: oozie),DesktopModule(Pig Editor: pig),DesktopModule(Proxy: proxy),DesktopModule(RDBMS UI: rdbms),DesktopModule(Solr Search: search),DesktopModule(Hadoop Security: security),DesktopModule(Spark: spark),DesktopModule(Sqoop: sqoop),DesktopModule(User Admin: useradmin),DesktopModule(ZooKeeper Browser: zookeeper),DesktopModule(Solr Indexer: indexer),DesktopModule(Metadata: metadata),DesktopModule(Notebook: notebook) [06/Jul/2017 12:26:29 -0700] settings DEBUG DESKTOP_DB_TEST_NAME SET: /opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/hue/desktop/desktop-test.db [06/Jul/2017 12:26:29 -0700] settings DEBUG DESKTOP_DB_TEST_USER SET: hue_test [06/Jul/2017 12:26:31 -0700] __init__ WARNING Couldn't import snappy. Support for snappy compression disabled. [06/Jul/2017 12:26:39 -0700] middleware INFO Unloading SpnegoMiddleware [06/Jul/2017 12:26:39 -0700] middleware INFO Unloading HueRemoteUserMiddleware [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^accounts/login/$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^accounts/logout/$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^profile$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^login/oauth/?$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^login/oauth_authenticated/?$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^home$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^home2$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^logs$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^desktop/dump_config$> [06/Jul/2017 
12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^desktop/download_logs$> [06/Jul/2017 12:26:39 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^desktop/get_debug_level> ...skipping... [06/Jul/2017 12:50:30 -0700] access DEBUG 192.168.248.46 -anon- - "GET /static/desktop/ext/css/fileuploader.css HTTP/1.1" [06/Jul/2017 12:50:30 -0700] access DEBUG 192.168.248.46 -anon- - "GET /static/desktop/css/hue3.css HTTP/1.1" [06/Jul/2017 12:50:30 -0700] access DEBUG 192.168.248.46 -anon- - "GET /static/desktop/ext/css/bootplus.css HTTP/1.1" [06/Jul/2017 12:50:30 -0700] access DEBUG 192.168.248.46 -anon- - "GET /static/desktop/ext/css/font-awesome.min.css HTTP/1.1" [06/Jul/2017 12:50:30 -0700] access DEBUG 192.168.248.46 -anon- - "GET /static/desktop/ext/chosen/chosen.min.css HTTP/1.1" [06/Jul/2017 12:50:30 -0700] access DEBUG 192.168.248.46 -anon- - "GET /static/desktop/css/login.css HTTP/1.1" [06/Jul/2017 12:50:35 -0700] access INFO 192.168.248.46 zaloni_admin - "POST /jobbrowser/jobs/ HTTP/1.1" [06/Jul/2017 12:50:35 -0700] connectionpool DEBUG "GET /ws/v1/cluster/apps?finalStatus=UNDEFINED&limit=1000&user=zaloni_admin&doAs=zaloni_admin HTTP/1.1" 200 None [06/Jul/2017 12:50:35 -0700] kerberos_ DEBUG handle_mutual_auth(): Handling: 200 [06/Jul/2017 12:50:35 -0700] kerberos_ DEBUG handle_response(): returning <Response [200]> [06/Jul/2017 12:50:35 -0700] resource DEBUG GET Got response: {"apps":null} [06/Jul/2017 12:51:05 -0700] access INFO 192.168.248.46 zaloni_admin - "POST /jobbrowser/jobs/ HTTP/1.1" [06/Jul/2017 12:51:05 -0700] connectionpool DEBUG "GET /ws/v1/cluster/apps?finalStatus=UNDEFINED&limit=1000&user=zaloni_admin&doAs=zaloni_admin HTTP/1.1" 200 None [06/Jul/2017 12:51:05 -0700] kerberos_ DEBUG handle_mutual_auth(): Handling: 200 [06/Jul/2017 12:51:05 -0700] kerberos_ DEBUG handle_response(): returning <Response [200]> [06/Jul/2017 12:51:05 -0700] resource DEBUG GET Got response: {"apps":null} [06/Jul/2017 12:51:14 -0700] access DEBUG 
192.168.211.200 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" [06/Jul/2017 12:51:36 -0700] access INFO 192.168.248.46 zaloni_admin - "POST /jobbrowser/jobs/ HTTP/1.1" [06/Jul/2017 12:51:36 -0700] connectionpool DEBUG "GET /ws/v1/cluster/apps?finalStatus=UNDEFINED&limit=1000&user=zaloni_admin&doAs=zaloni_admin HTTP/1.1" 200 None [06/Jul/2017 12:51:36 -0700] kerberos_ DEBUG handle_mutual_auth(): Handling: 200 [06/Jul/2017 12:51:36 -0700] kerberos_ DEBUG handle_response(): returning <Response [200]> [06/Jul/2017 12:51:36 -0700] resource DEBUG GET Got response: {"apps":null} [06/Jul/2017 12:52:06 -0700] access INFO 192.168.248.46 zaloni_admin - "POST /jobbrowser/jobs/ HTTP/1.1" [06/Jul/2017 12:52:06 -0700] connectionpool DEBUG "GET /ws/v1/cluster/apps?finalStatus=UNDEFINED&limit=1000&user=zaloni_admin&doAs=zaloni_admin HTTP/1.1" 200 None [06/Jul/2017 12:52:06 -0700] kerberos_ DEBUG handle_mutual_auth(): Handling: 200 [06/Jul/2017 12:52:06 -0700] kerberos_ DEBUG handle_response(): returning <Response [200]> [06/Jul/2017 12:52:06 -0700] resource DEBUG GET Got response: {"apps":null} [06/Jul/2017 12:52:14 -0700] access DEBUG 192.168.211.200 -anon- - "HEAD /desktop/debug/is_alive HTTP/1.1" [06/Jul/2017 12:52:36 -0700] access INFO 192.168.248.46 zaloni_admin - "POST /jobbrowser/jobs/ HTTP/1.1" [06/Jul/2017 12:52:36 -0700] connectionpool DEBUG "GET /ws/v1/cluster/apps?finalStatus=UNDEFINED&limit=1000&user=zaloni_admin&doAs=zaloni_admin HTTP/1.1" 200 None [06/Jul/2017 12:52:36 -0700] kerberos_ DEBUG handle_mutual_auth(): Handling: 200 [06/Jul/2017 12:52:36 -0700] kerberos_ DEBUG handle_response(): returning <Response [200]> [06/Jul/2017 12:52:36 -0700] resource DEBUG GET Got response: {"apps":null}
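Since the runcpserver.log output above is dominated by DEBUG noise, filtering for higher-severity entries is a quick way to narrow the search. A minimal sketch, with sample lines standing in for the real log file:

```shell
# Filter a Hue log for WARNING/ERROR/CRITICAL entries only.
# The printf lines below are sample data; against a live system you would run:
#   grep -E ' (WARNING|ERROR|CRITICAL) ' runcpserver.log
printf '%s\n' \
  '[06/Jul/2017 12:26:31 -0700] __init__ WARNING Could not import snappy.' \
  '[06/Jul/2017 12:50:35 -0700] access INFO 192.168.248.46 zaloni_admin - "POST /jobbrowser/jobs/ HTTP/1.1"' \
  | grep -E ' (WARNING|ERROR|CRITICAL) '
# prints only the WARNING line
```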
Labels:
- Apache Hive
- Apache Impala
- Cloudera Hue
05-17-2017
01:38 PM
Got this issue fixed. It looks like Impala does not accept uppercase hostnames once the cluster is Kerberized. Changing the hostname from uppercase to lowercase resolved this issue. Thanks...
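The fix can be sketched as a quick pre-Kerberization check. `CM-PR-BGD-M02.example.com` is a made-up stand-in; on a real node you would feed in `$(hostname -f)`:

```shell
# Kerberos principals are case-sensitive, so a host whose name contains
# uppercase letters gets a service principal (impala/HOST@REALM) that can
# fail to match lookups done against the lowercased FQDN.
# "CM-PR-BGD-M02.example.com" is a hypothetical stand-in for $(hostname -f).
host="CM-PR-BGD-M02.example.com"
lower=$(printf '%s' "$host" | tr '[:upper:]' '[:lower:]')
if [ "$host" != "$lower" ]; then
    echo "hostname contains uppercase: $host"
    echo "rename to the lowercase form before enabling Kerberos: $lower"
fi
```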
04-20-2017
11:55 AM
Below is the log from Impala Catalog Server KRB5.CONF ========== > cat /etc/krb5.conf [libdefaults] default_realm = <DOMAIN-NAME> dns_lookup_kdc = true dns_lookup_realm = false ticket_lifetime = 86400 renew_lifetime = 604800 forwardable = true default_tgs_enctypes = rc4-hmac aes128-cts aes256-cts des-cbc-crc des-cbc-md5 default_tkt_enctypes = rc4-hmac aes128-cts aes256-cts des-cbc-crc des-cbc-md5 permitted_enctypes = rc4-hmac aes128-cts aes256-cts des-cbc-crc des-cbc-md5 udp_preference_limit = 1 kdc_timeout = 3000 [realms] <DOMAIN-NAME> = { kdc = <AD SERVER FQDN> admin_server = <AD SERVER FQDN> } KEYTAB ======= > klist -ket /var/run/cloudera-scm-agent/process/805-impala-CATALOGSERVER/impala.keytab Keytab name: FILE:/var/run/cloudera-scm-agent/process/805-impala-CATALOGSERVER/impala.keytab KVNO Timestamp Principal ---- ----------------- -------------------------------------------------------- 1 04/14/17 03:17:07 impala/<HOSTNAME>@<DOMAIN> (arcfour-hmac) 1 04/14/17 03:17:08 impala/<HOSTNAME>@<DOMAIN> (aes128-cts-hmac-sha1-96) 1 04/14/17 03:17:08 impala/<HOSTNAME>@<DOMAIN> (aes256-cts-hmac-sha1-96) 1 04/14/17 03:17:08 impala/<HOSTNAME>@<DOMAIN> (des-cbc-crc) 1 04/14/17 03:17:08 impala/<HOSTNAME>@<DOMAIN> (des-cbc-md5) ROLE LOG ========= Time Log Level Source Log Message Apr 19, 10:06:47.589 AM INFO logging.cc:117 stdout will be logged to this file. Apr 19, 10:06:47.589 AM ERROR logging.cc:118 stderr will be logged to this file. Apr 19, 10:06:47.589 AM INFO minidump.cc:199 Setting minidump size limit to 20971520. 
Apr 19, 10:06:47.590 AM INFO atomicops-internals-x86.cc:93 vendor GenuineIntel family 6 model 15 sse2 1 cmpxchg16b 1 Apr 19, 10:06:47.599 AM INFO authentication.cc:678 Using internal kerberos principal "impala/<FQDN>@<DOMAIN-NAME>" Apr 19, 10:06:47.599 AM INFO authentication.cc:1013 Internal communication is authenticated with Kerberos Apr 19, 10:06:47.599 AM INFO authentication.cc:798 Waiting for Kerberos ticket for principal: impala/<FQDN>@<DOMAIN-NAME> Apr 19, 10:06:47.599 AM INFO authentication.cc:494 Registering impala/<FQDN>@<DOMAIN-NAME>, keytab file /var/run/cloudera-scm-agent/process/811-impala-CATALOGSERVER/impala.keytab Apr 19, 10:06:47.709 AM INFO authentication.cc:800 Kerberos ticket granted to impala/<FQDN>@<DOMAIN-NAME> Apr 19, 10:06:47.709 AM INFO authentication.cc:678 Using external kerberos principal "impala/<FQDN>@<DOMAIN-NAME>" Apr 19, 10:06:47.709 AM INFO authentication.cc:1029 External communication is authenticated with Kerberos Apr 19, 10:06:47.709 AM INFO init.cc:201 catalogd version 2.6.0-cdh5.8.4 RELEASE (build 207450616f75adbe082a4c2e1145a2384da83fa6) Built on Mon, 06 Feb 2017 14:33:04 PST Apr 19, 10:06:47.709 AM INFO init.cc:202 Using hostname: <hostname> Apr 19, 10:06:47.710 AM INFO logging.cc:153 Flags (see also /varz are on debug webserver): --catalog_service_port=26000 --load_catalog_in_background=true --num_metadata_loading_threads=16 --sentry_config=/var/run/cloudera-scm-agent/process/811-impala-CATALOGSERVER/impala-conf/sentry-site.xml --asm_module_dir= --disable_optimization_passes=false --dump_ir=false --opt_module_dir= --perf_map=false --print_llvm_ir_instruction_count=false --unopt_module_dir= --abort_on_config_error=true --be_port=22000 --be_principal= --compact_catalog_topic=false --disable_mem_pools=false --enable_accept_queue_server=true --enable_process_lifetime_heap_profiling=false --heap_profile_dir= --hostname=<hostname> --inc_stats_size_limit_bytes=209715200 
--keytab_file=/var/run/cloudera-scm-agent/process/811-impala-CATALOGSERVER/impala.keytab --krb5_conf= --krb5_debug_file= --load_auth_to_local_rules=false --max_minidumps=9 --mem_limit=80% --minidump_path=/var/log/impala-minidumps/catalogd --minidump_size_limit_hint_kb=20480 --principal=impala/<FQDN>@<DOMAIN-NAME> --redaction_rules_file= --max_log_files=10 --pause_monitor_sleep_time_ms=500 --pause_monitor_warn_threshold_ms=10000 --log_filename=catalogd --redirect_stdout_stderr=true --data_source_batch_size=1024 --exchg_node_buffer_size_bytes=10485760 --enable_partitioned_aggregation=true --enable_partitioned_hash_join=true --enable_probe_side_filtering=true --enable_quadratic_probing=true --skip_lzo_version_check=false --convert_legacy_hive_parquet_utc_timestamps=false --max_page_header_size=8388608 --parquet_min_filter_reject_ratio=0.10000000000000001 --max_row_batches=0 --runtime_filter_wait_time_ms=1000 --suppress_unknown_disk_id_warnings=false --kudu_max_row_batches=0 --kudu_scanner_keep_alive_period_us=15000000 --kudu_scanner_keep_alive_period_sec=15 --kudu_scanner_timeout_sec=60 --pick_only_leaders_for_tests=false --kudu_session_timeout_seconds=60 --enable_phj_probe_side_filtering=true --accepted_cnxn_queue_depth=10000 --enable_ldap_auth=false --internal_principals_whitelist=hdfs --kerberos_reinit_interval=60 --ldap_allow_anonymous_binds=false --ldap_baseDN= --ldap_bind_pattern= --ldap_ca_certificate= --ldap_domain= --ldap_manual_config=false --ldap_passwords_in_clear_ok=false --ldap_tls=false --ldap_uri= --sasl_path= --rpc_cnxn_attempts=10 --rpc_cnxn_retry_interval_ms=2000 --disk_spill_encryption=false --insert_inherit_permissions=false --datastream_sender_timeout_ms=120000 --max_cached_file_handles=0 --max_free_io_buffers=128 --min_buffer_size=1024 --num_disks=0 --num_remote_hdfs_io_threads=8 --num_s3_io_threads=16 --num_threads_per_disk=0 --read_size=8388608 --backend_client_connection_num_retries=3 --backend_client_rpc_timeout_ms=300000 
--catalog_client_connection_num_retries=3 --catalog_client_rpc_timeout_ms=0 --catalog_service_host=localhost --cgroup_hierarchy_path= --coordinator_rpc_threads=12 --enable_rm=false --enable_webserver=true --llama_addresses= --llama_callback_port=28000 --llama_host= --llama_max_request_attempts=5 --llama_port=15000 --llama_registration_timeout_secs=30 --llama_registration_wait_secs=3 --num_hdfs_worker_threads=16 --resource_broker_cnxn_attempts=1 --resource_broker_cnxn_retry_interval_ms=3000 --resource_broker_recv_timeout=0 --resource_broker_send_timeout=0 --staging_cgroup=impala_staging --state_store_host=<FQDN> --state_store_subscriber_port=23020 --use_statestore=true --s3a_access_key_cmd= --s3a_secret_key_cmd= --local_library_dir=/tmp --serialize_batch=false --status_report_interval=5 --max_filter_error_rate=0.75 --num_threads_per_core=3 --use_local_tz_for_unix_timestamp_conversions=false --scratch_dirs=/tmp --queue_wait_timeout_ms=60000 --max_vcore_oversubscription_ratio=2.5 --rm_mem_expansion_timeout_ms=5000 --rm_always_use_defaults=false --rm_default_cpu_vcores=2 --rm_default_memory=4G --default_pool_max_queued=200 --default_pool_max_requests=-1 --default_pool_mem_limit= --disable_pool_max_requests=false --disable_pool_mem_limits=false --fair_scheduler_allocation_path= --llama_site_path= --require_username=false --disable_admission_control=false --log_mem_usage_interval=0 --authorization_policy_file= --authorization_policy_provider_class=org.apache.sentry.provider.common.HadoopGroupResourceAuthorizationProvider --authorized_proxy_user_config= --authorized_proxy_user_config_delimiter=, --server_name= --abort_on_failed_audit_event=true --abort_on_failed_lineage_event=true --audit_event_log_dir= --be_service_threads=64 --beeswax_port=21000 --cancellation_thread_pool_size=5 --default_query_options= --fe_service_threads=64 --hs2_port=21050 --idle_query_timeout=0 --idle_session_timeout=0 --lineage_event_log_dir= --local_nodemanager_url= --log_query_to_file=true 
--max_audit_event_log_file_size=5000 --max_lineage_log_file_size=5000 --max_profile_log_file_size=5000 --max_profile_log_files=10 --max_result_cache_size=100000 --profile_log_dir= --query_log_size=25 --ssl_client_ca_certificate= --ssl_private_key= --ssl_private_key_password_cmd= --ssl_server_certificate= --statestore_subscriber_cnxn_attempts=10 --statestore_subscriber_cnxn_retry_interval_ms=3000 --statestore_subscriber_timeout_seconds=30 --state_store_port=24000 --statestore_heartbeat_frequency_ms=1000 --statestore_heartbeat_tcp_timeout_seconds=3 --statestore_max_missed_heartbeats=10 --statestore_num_heartbeat_threads=10 --statestore_num_update_threads=10 --statestore_update_frequency_ms=2000 --statestore_update_tcp_timeout_seconds=300 --force_lowercase_usernames=false --num_cores=0 --web_log_bytes=1048576 --non_impala_java_vlog=0 --periodic_counter_update_period_ms=500 --enable_webserver_doc_root=true --webserver_authentication_domain= --webserver_certificate_file= --webserver_doc_root=/opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/impala --webserver_interface= --webserver_password_file= --webserver_port=25020 --webserver_private_key_file= --webserver_private_key_password_cmd= --webserver_x_frame_options=DENY --flagfile=/var/run/cloudera-scm-agent/process/811-impala-CATALOGSERVER/impala-conf/catalogserver_flags --fromenv= --tryfromenv= --undefok= --tab_completion_columns=80 --tab_completion_word= --help=false --helpfull=false --helpmatch= --helpon= --helppackage=false --helpshort=false --helpxml=false --version=false --alsologtoemail= --alsologtostderr=false --drop_log_memory=true --log_backtrace_at= --log_dir=/var/log/catalogd --log_link= --log_prefix=true --logbuflevel=0 --logbufsecs=30 --logemaillevel=999 --logmailer=/bin/mail --logtostderr=false --max_log_size=200 --minloglevel=0 --stderrthreshold=4 --stop_logging_if_full_disk=false --symbolize_stacktrace=true --v=1 --vmodule= Apr 19, 10:06:47.710 AM INFO init.cc:207 Cpu Info: Model: Intel(R) Xeon(R) CPU 
E5-2640 v3 @ 2.60GHz Cores: 32 L1 Cache: 32.00 KB (Line: 64.00 B) L2 Cache: 256.00 KB (Line: 64.00 B) L3 Cache: 20.00 MB (Line: 64.00 B) Hardware Supports: ssse3 sse4_1 sse4_2 popcnt Apr 19, 10:06:47.710 AM INFO init.cc:208 Disk Info: Num disks 5: sda (rotational=true) sdc (rotational=true) sdb (rotational=true) sdd (rotational=true) dm- (rotational=true) Apr 19, 10:06:47.710 AM INFO init.cc:209 Physical Memory: 251.89 GB Apr 19, 10:06:47.710 AM INFO init.cc:210 OS version: Linux version 2.6.32-573.el6.x86_64 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Thu Jul 23 15:44:03 UTC 2015 Clock: clocksource: 'tsc', clockid_t: CLOCK_MONOTONIC Apr 19, 10:06:47.710 AM INFO init.cc:211 Process ID: 8626 Apr 19, 10:06:48.686 AM INFO webserver.cc:219 Starting webserver on 0.0.0.0:25020 Apr 19, 10:06:48.686 AM INFO webserver.cc:233 Document root: /opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/impala Apr 19, 10:06:48.686 AM INFO webserver.cc:317 Webserver started Apr 19, 10:06:48.728 AM INFO GlogAppender.java:123 Logging initialized. Impala: VLOG, All other: INFO Apr 19, 10:06:48.731 AM INFO JniCatalog.java:99 Java Version Info: Java(TM) SE Runtime Environment (1.7.0_67-b01) Apr 19, 10:06:49.196 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. Pool Size = 0 Apr 19, 10:06:49.206 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.324 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 1 Apr 19, 10:06:49.324 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. Apr 19, 10:06:49.324 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. 
Pool Size = 1 Apr 19, 10:06:49.325 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.333 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 2 Apr 19, 10:06:49.334 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. Apr 19, 10:06:49.334 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. Pool Size = 2 Apr 19, 10:06:49.334 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.341 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 3 Apr 19, 10:06:49.342 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. Apr 19, 10:06:49.342 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. Pool Size = 3 Apr 19, 10:06:49.343 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.351 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 4 Apr 19, 10:06:49.351 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. Apr 19, 10:06:49.351 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. Pool Size = 4 Apr 19, 10:06:49.352 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.359 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 5 Apr 19, 10:06:49.360 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. Apr 19, 10:06:49.656 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. 
Pool Size = 4 Apr 19, 10:06:49.656 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.663 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 6 Apr 19, 10:06:49.663 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. Apr 19, 10:06:49.690 AM INFO CatalogServiceCatalog.java:549 Loading native functions for database: bedrock Apr 19, 10:06:49.690 AM INFO CatalogServiceCatalog.java:575 Loading Java functions for database: bedrock Apr 19, 10:06:49.709 AM INFO CatalogServiceCatalog.java:549 Loading native functions for database: default Apr 19, 10:06:49.709 AM INFO CatalogServiceCatalog.java:575 Loading Java functions for database: default Apr 19, 10:06:49.714 AM INFO HiveMetaStoreClient.java:512 Closed a connection to metastore, current connections: 5 Apr 19, 10:06:49.715 AM INFO TableLoadingMgr.java:278 Loading next table. Remaining items in queue: 0 Apr 19, 10:06:49.716 AM INFO TableLoadingMgr.java:278 Loading next table. Remaining items in queue: 0 Apr 19, 10:06:49.717 AM INFO TableLoader.java:56 Loading metadata for: default.test Apr 19, 10:06:49.717 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. Pool Size = 3 Apr 19, 10:06:49.717 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.717 AM INFO TableLoader.java:56 Loading metadata for: bedrock.student_testing_1 Apr 19, 10:06:49.723 AM INFO statestore-subscriber.cc:178 Starting statestore subscriber Apr 19, 10:06:49.725 AM INFO thrift-server.cc:445 ThriftServer 'StatestoreSubscriber' started on port: 23020 Apr 19, 10:06:49.725 AM INFO statestore-subscriber.cc:189 Registering with statestore Apr 19, 10:06:49.725 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 6 Apr 19, 10:06:49.725 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. 
Apr 19, 10:06:49.725 AM INFO MetaStoreClientPool.java:56 Creating MetaStoreClient. Pool Size = 2 Apr 19, 10:06:49.726 AM INFO HiveMetaStoreClient.java:386 Trying to connect to metastore with URI thrift://<hostname-1>:9083 Apr 19, 10:06:49.731 AM INFO status.cc:111 Couldn't open transport for <FQDN>:24000 (No more data to read.) @ 0x8133b9 (unknown) @ 0xd5052f (unknown) @ 0xd50752 (unknown) @ 0xa0462b (unknown) @ 0xa04c50 (unknown) @ 0xb09647 (unknown) @ 0xb0b5b0 (unknown) @ 0x7e62c3 (unknown) @ 0x7df2ff (unknown) @ 0x7de816 (unknown) @ 0x7f532f738d1d __libc_start_main @ 0x7de4dd (unknown) Apr 19, 10:06:49.731 AM INFO thrift-client.cc:58 Unable to connect to <FQDN>:24000 Apr 19, 10:06:49.731 AM INFO thrift-client.cc:64 (Attempt 1 of 10) Apr 19, 10:06:49.733 AM INFO HiveMetaStoreClient.java:431 Opened a connection to metastore, current connections: 7 Apr 19, 10:06:49.733 AM INFO HiveMetaStoreClient.java:483 Connected to metastore. Apr 19, 10:06:49.751 AM INFO catalog-server.cc:313 Publishing update: DATABASE:bedrock@1 Apr 19, 10:06:49.751 AM INFO catalog-server.cc:313 Publishing update: TABLE:bedrock.student_testing_1@2 Apr 19, 10:06:49.751 AM INFO catalog-server.cc:313 Publishing update: DATABASE:default@3 Apr 19, 10:06:49.751 AM INFO catalog-server.cc:313 Publishing update: TABLE:default.test@4 Apr 19, 10:06:49.751 AM INFO catalog-server.cc:313 Publishing update: CATALOG:ad685743df72412a:9aa412bb03b25186@4 Apr 19, 10:06:49.879 AM INFO Table.java:158 Loading column stats for table: test Apr 19, 10:06:49.879 AM INFO Table.java:158 Loading column stats for table: student_testing_1 Apr 19, 10:06:49.899 AM INFO HdfsTable.java:1021 load table from Hive Metastore: default.test Apr 19, 10:06:49.899 AM INFO HdfsTable.java:1021 load table from Hive Metastore: bedrock.student_testing_1 Apr 19, 10:06:50.155 AM INFO HiveMetaStoreClient.java:512 Closed a connection to metastore, current connections: 6 Apr 19, 10:06:50.156 AM INFO HiveMetaStoreClient.java:512 Closed a connection 
to metastore, current connections: 5 Apr 19, 10:06:52.731 AM INFO thrift-util.cc:109 TSocket::write_partial() send() <Host: <FQDN> Port: 24000>Broken pipe Apr 19, 10:06:52.732 AM INFO client-cache.h:259 client 0x9f0b040 unexpected exception: write() send(): Broken pipe, type=N6apache6thrift9transport19TTransportExceptionE Apr 19, 10:06:52.732 AM INFO client-cache.cc:78 ReopenClient(): re-creating client for <FQDN>:24000 Apr 19, 10:06:52.735 AM INFO status.cc:111 Couldn't open transport for <FQDN>:24000 (No more data to read.) @ 0x8133b9 (unknown) @ 0xd5052f (unknown) @ 0xd50752 (unknown) @ 0xa0462b (unknown) @ 0xa05079 (unknown) @ 0xb0ee78 (unknown) @ 0xb0f0ee (unknown) @ 0xb099bf (unknown) @ 0xb0b5b0 (unknown) @ 0x7e62c3 (unknown) @ 0x7df2ff (unknown) @ 0x7de816 (unknown) @ 0x7f532f738d1d __libc_start_main @ 0x7de4dd (unknown) Apr 19, 10:06:52.735 AM INFO thrift-client.cc:58 Unable to connect to <FQDN>:24000 Apr 19, 10:06:52.735 AM INFO thrift-client.cc:64 (Attempt 1 of 10) Apr 19, 10:06:55.735 AM INFO thrift-util.cc:109 TSocket::write_partial() send() <Host: <FQDN> Port: 24000>Broken pipe Apr 19, 10:06:55.736 AM INFO status.cc:44 RPC Error: write() send(): Broken pipe @ 0x81228a (unknown) @ 0xb0f40b (unknown) @ 0xb099bf (unknown) @ 0xb0b5b0 (unknown) @ 0x7e62c3 (unknown) @ 0x7df2ff (unknown) @ 0x7de816 (unknown) @ 0x7f532f738d1d __libc_start_main @ 0x7de4dd (unknown) Apr 19, 10:06:55.736 AM INFO client-cache.cc:163 Broken Connection, destroy client for <FQDN>:24000 Apr 19, 10:06:55.736 AM INFO statestore-subscriber.cc:195 statestore registration unsuccessful: RPC Error: write() send(): Broken pipe Apr 19, 10:06:55.736 AM FATAL catalogd-main.cc:76 RPC Error: write() send(): Broken pipe . Impalad exiting.
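Since the keytab shown above embeds the hostname in each principal, one way to catch a hostname-case mismatch is to compare the host component of the keytab principal against the machine's FQDN. A minimal sketch, with sample strings standing in for the real `klist -ket` output and `hostname -f` result:

```shell
# Extract the host component from an impala service principal as printed
# by `klist -ket`, and compare it case-sensitively against the machine FQDN.
# Both strings here are sample data; on a real node use the actual keytab
# listing and $(hostname -f).
klist_line='   1 04/14/17 03:17:07 impala/CM-PR-BGD-M02.example.com@EXAMPLE.COM (arcfour-hmac)'
fqdn='cm-pr-bgd-m02.example.com'

principal=$(printf '%s' "$klist_line" | awk '{print $4}')   # impala/HOST@REALM
keytab_host=${principal#impala/}                            # strip service name
keytab_host=${keytab_host%@*}                               # strip realm

if [ "$keytab_host" != "$fqdn" ]; then
    echo "case mismatch: keytab has '$keytab_host' but host reports '$fqdn'"
fi
```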
04-19-2017
09:00 AM
Hello, after Kerberizing the cluster, the Impala daemons fail to start. The Impala StateStore is up, but its role logs show a GSS exception. I tried going through some of the community help (referenced below) and made sure the encryption types are fine, but that did not resolve the issue. One more observation: the error does not give details as to what it could be related to, and just reports an unspecified GSS failure: "TAcceptQueueServer: Caught TException: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information ()" reference: https://community.cloudera.com/t5/Interactive-Short-cycle-SQL/kerberos-authentication-failure-GSSAPI-Failure-gss-accept-sec/m-p/24707#M742 Snapshot from CM: =============== Error ==== Apr 19, 9:45:16.664 AM INFO logging.cc:117 stdout will be logged to this file. Apr 19, 9:45:16.664 AM ERROR logging.cc:118 stderr will be logged to this file. Apr 19, 9:45:16.664 AM INFO minidump.cc:199 Setting minidump size limit to 20971520. 
Apr 19, 9:45:16.664 AM INFO atomicops-internals-x86.cc:93 vendor GenuineIntel family 6 model 15 sse2 1 cmpxchg16b 1 Apr 19, 9:45:16.673 AM INFO authentication.cc:678 Using internal kerberos principal "impala/CM-PR-BGD-M02.telemovil.com.sv@TELEMOVIL.COM.SV" Apr 19, 9:45:16.673 AM INFO authentication.cc:1013 Internal communication is authenticated with Kerberos Apr 19, 9:45:16.673 AM INFO authentication.cc:798 Waiting for Kerberos ticket for principal: impala/CM-PR-BGD-M02.telemovil.com.sv@TELEMOVIL.COM.SV Apr 19, 9:45:16.673 AM INFO authentication.cc:494 Registering impala/CM-PR-BGD-M02.telemovil.com.sv@TELEMOVIL.COM.SV, keytab file /var/run/cloudera-scm-agent/process/804-impala-STATESTORE/impala.keytab Apr 19, 9:45:16.780 AM INFO authentication.cc:800 Kerberos ticket granted to impala/CM-PR-BGD-M02.telemovil.com.sv@TELEMOVIL.COM.SV Apr 19, 9:45:16.780 AM INFO authentication.cc:678 Using external kerberos principal "impala/CM-PR-BGD-M02.telemovil.com.sv@TELEMOVIL.COM.SV" Apr 19, 9:45:16.780 AM INFO authentication.cc:1029 External communication is authenticated with Kerberos Apr 19, 9:45:16.780 AM INFO init.cc:201 statestored version 2.6.0-cdh5.8.4 RELEASE (build 207450616f75adbe082a4c2e1145a2384da83fa6) Built on Mon, 06 Feb 2017 14:33:04 PST Apr 19, 9:45:16.780 AM INFO init.cc:202 Using hostname: CM-PR-BGD-M02 Apr 19, 9:45:16.780 AM INFO logging.cc:153 Flags (see also /varz are on debug webserver): --catalog_service_port=26000 --load_catalog_in_background=false --num_metadata_loading_threads=16 --sentry_config= --asm_module_dir= --disable_optimization_passes=false --dump_ir=false --opt_module_dir= --perf_map=false --print_llvm_ir_instruction_count=false --unopt_module_dir= --abort_on_config_error=true --be_port=22000 --be_principal= --compact_catalog_topic=false --disable_mem_pools=false --enable_accept_queue_server=true --enable_process_lifetime_heap_profiling=false --heap_profile_dir= --hostname=CM-PR-BGD-M02 --inc_stats_size_limit_bytes=209715200 
--keytab_file=/var/run/cloudera-scm-agent/process/804-impala-STATESTORE/impala.keytab --krb5_conf= --krb5_debug_file= --load_auth_to_local_rules=false --max_minidumps=9 --mem_limit=80% --minidump_path=/var/log/impala-minidumps/statestored --minidump_size_limit_hint_kb=20480 --principal=impala/CM-PR-BGD-M02.telemovil.com.sv@TELEMOVIL.COM.SV --redaction_rules_file= --max_log_files=10 --pause_monitor_sleep_time_ms=500 --pause_monitor_warn_threshold_ms=10000 --log_filename=statestored --redirect_stdout_stderr=true --data_source_batch_size=1024 --exchg_node_buffer_size_bytes=10485760 --enable_partitioned_aggregation=true --enable_partitioned_hash_join=true --enable_probe_side_filtering=true --enable_quadratic_probing=true --skip_lzo_version_check=false --convert_legacy_hive_parquet_utc_timestamps=false --max_page_header_size=8388608 --parquet_min_filter_reject_ratio=0.10000000000000001 --max_row_batches=0 --runtime_filter_wait_time_ms=1000 --suppress_unknown_disk_id_warnings=false --kudu_max_row_batches=0 --kudu_scanner_keep_alive_period_us=15000000 --kudu_scanner_keep_alive_period_sec=15 --kudu_scanner_timeout_sec=60 --pick_only_leaders_for_tests=false --kudu_session_timeout_seconds=60 --enable_phj_probe_side_filtering=true --accepted_cnxn_queue_depth=10000 --enable_ldap_auth=false --internal_principals_whitelist=hdfs --kerberos_reinit_interval=60 --ldap_allow_anonymous_binds=false --ldap_baseDN= --ldap_bind_pattern= --ldap_ca_certificate= --ldap_domain= --ldap_manual_config=false --ldap_passwords_in_clear_ok=false --ldap_tls=false --ldap_uri= --sasl_path= --rpc_cnxn_attempts=10 --rpc_cnxn_retry_interval_ms=2000 --disk_spill_encryption=false --insert_inherit_permissions=false --datastream_sender_timeout_ms=120000 --max_cached_file_handles=0 --max_free_io_buffers=128 --min_buffer_size=1024 --num_disks=0 --num_remote_hdfs_io_threads=8 --num_s3_io_threads=16 --num_threads_per_disk=0 --read_size=8388608 --backend_client_connection_num_retries=3 
--backend_client_rpc_timeout_ms=300000 --catalog_client_connection_num_retries=3 --catalog_client_rpc_timeout_ms=0 --catalog_service_host=localhost --cgroup_hierarchy_path= --coordinator_rpc_threads=12 --enable_rm=false --enable_webserver=true --llama_addresses= --llama_callback_port=28000 --llama_host= --llama_max_request_attempts=5 --llama_port=15000 --llama_registration_timeout_secs=30 --llama_registration_wait_secs=3 --num_hdfs_worker_threads=16 --resource_broker_cnxn_attempts=1 --resource_broker_cnxn_retry_interval_ms=3000 --resource_broker_recv_timeout=0 --resource_broker_send_timeout=0 --staging_cgroup=impala_staging --state_store_host=localhost --state_store_subscriber_port=23000 --use_statestore=true --s3a_access_key_cmd= --s3a_secret_key_cmd= --local_library_dir=/tmp --serialize_batch=false --status_report_interval=5 --max_filter_error_rate=0.75 --num_threads_per_core=3 --use_local_tz_for_unix_timestamp_conversions=false --scratch_dirs=/tmp --queue_wait_timeout_ms=60000 --max_vcore_oversubscription_ratio=2.5 --rm_mem_expansion_timeout_ms=5000 --rm_always_use_defaults=false --rm_default_cpu_vcores=2 --rm_default_memory=4G --default_pool_max_queued=200 --default_pool_max_requests=-1 --default_pool_mem_limit= --disable_pool_max_requests=false --disable_pool_mem_limits=false --fair_scheduler_allocation_path= --llama_site_path= --require_username=false --disable_admission_control=false --log_mem_usage_interval=0 --authorization_policy_file= --authorization_policy_provider_class=org.apache.sentry.provider.common.HadoopGroupResourceAuthorizationProvider --authorized_proxy_user_config= --authorized_proxy_user_config_delimiter=, --server_name= --abort_on_failed_audit_event=true --abort_on_failed_lineage_event=true --audit_event_log_dir= --be_service_threads=64 --beeswax_port=21000 --cancellation_thread_pool_size=5 --default_query_options= --fe_service_threads=64 --hs2_port=21050 --idle_query_timeout=0 --idle_session_timeout=0 --lineage_event_log_dir= 
--local_nodemanager_url= --log_query_to_file=true --max_audit_event_log_file_size=5000 --max_lineage_log_file_size=5000 --max_profile_log_file_size=5000 --max_profile_log_files=10 --max_result_cache_size=100000 --profile_log_dir= --query_log_size=25 --ssl_client_ca_certificate= --ssl_private_key= --ssl_private_key_password_cmd= --ssl_server_certificate= --statestore_subscriber_cnxn_attempts=10 --statestore_subscriber_cnxn_retry_interval_ms=3000 --statestore_subscriber_timeout_seconds=30 --state_store_port=24000 --statestore_heartbeat_frequency_ms=1000 --statestore_heartbeat_tcp_timeout_seconds=3 --statestore_max_missed_heartbeats=10 --statestore_num_heartbeat_threads=10 --statestore_num_update_threads=10 --statestore_update_frequency_ms=2000 --statestore_update_tcp_timeout_seconds=300 --force_lowercase_usernames=false --num_cores=0 --web_log_bytes=1048576 --non_impala_java_vlog=0 --periodic_counter_update_period_ms=500 --enable_webserver_doc_root=true --webserver_authentication_domain= --webserver_certificate_file= --webserver_doc_root=/opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/impala --webserver_interface= --webserver_password_file= --webserver_port=25010 --webserver_private_key_file= --webserver_private_key_password_cmd= --webserver_x_frame_options=DENY --flagfile=/var/run/cloudera-scm-agent/process/804-impala-STATESTORE/impala-conf/state_store_flags --fromenv= --tryfromenv= --undefok= --tab_completion_columns=80 --tab_completion_word= --help=false --helpfull=false --helpmatch= --helpon= --helppackage=false --helpshort=false --helpxml=false --version=false --alsologtoemail= --alsologtostderr=false --drop_log_memory=true --log_backtrace_at= --log_dir=/var/log/statestore --log_link= --log_prefix=true --logbuflevel=0 --logbufsecs=30 --logemaillevel=999 --logmailer=/bin/mail --logtostderr=false --max_log_size=200 --minloglevel=0 --stderrthreshold=4 --stop_logging_if_full_disk=false --symbolize_stacktrace=true --v=1 --vmodule= Apr 19, 9:45:16.780 AM INFO 
init.cc:207 Cpu Info: Model: Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz Cores: 32 L1 Cache: 32.00 KB (Line: 64.00 B) L2 Cache: 256.00 KB (Line: 64.00 B) L3 Cache: 20.00 MB (Line: 64.00 B) Hardware Supports: ssse3 sse4_1 sse4_2 popcnt Apr 19, 9:45:16.780 AM INFO init.cc:208 Disk Info: Num disks 5: sda (rotational=true) sdc (rotational=true) sdb (rotational=true) sdd (rotational=true) dm- (rotational=true) Apr 19, 9:45:16.780 AM INFO init.cc:209 Physical Memory: 251.89 GB Apr 19, 9:45:16.780 AM INFO init.cc:210 OS version: Linux version 2.6.32-573.el6.x86_64 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) #1 SMP Thu Jul 23 15:44:03 UTC 2015 Clock: clocksource: 'tsc', clockid_t: CLOCK_MONOTONIC Apr 19, 9:45:16.780 AM INFO init.cc:211 Process ID: 3489 Apr 19, 9:45:16.780 AM INFO webserver.cc:219 Starting webserver on 0.0.0.0:25010 Apr 19, 9:45:16.780 AM INFO webserver.cc:233 Document root: /opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/lib/impala Apr 19, 9:45:16.781 AM INFO webserver.cc:317 Webserver started Apr 19, 9:45:16.784 AM INFO thrift-server.cc:445 ThriftServer 'StatestoreService' started on port: 24000 Apr 19, 9:45:19.259 AM ERROR authentication.cc:155 SASL message (Kerberos (internal)): GSSAPI Error: Unspecified GSS failure. Minor code may provide more information () Apr 19, 9:45:19.259 AM INFO thrift-util.cc:109 TAcceptQueueServer: Caught TException: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information () Apr 19, 9:45:19.344 AM ERROR authentication.cc:155 SASL message (Kerberos (internal)): GSSAPI Error: Unspecified GSS failure. Minor code may provide more information () Apr 19, 9:45:19.344 AM INFO thrift-util.cc:109 TAcceptQueueServer: Caught TException: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may provide more information () [the same ERROR authentication.cc:155 / INFO thrift-util.cc:109 pair repeats every few seconds from Apr 19, 9:45:19 onward; repeats trimmed] Apr 19, 9:45:46.780 AM INFO logging-support.cc:136 Old log file deleted during log rotation: /var/log/statestore/statestored.CM-PR-BGD-M02.impala.log.INFO.20170412-102422.26964 Apr 19, 9:45:46.780 AM INFO logging-support.cc:136 Old log file deleted during log rotation: /var/log/statestore/statestored.CM-PR-BGD-M02.impala.log.WARNING.20170412-102422.26964 Apr 19, 9:45:46.780 AM INFO logging-support.cc:136 Old log file deleted during log rotation: /var/log/statestore/statestored.CM-PR-BGD-M02.impala.log.ERROR.20170412-102422.26964 [the same ERROR/INFO pair continues repeating through Apr 19, 9:45:54 and beyond]
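The GSS failures above usually trace back to the keytab/principal pair the statestore was started with (both appear in the flag dump at the top of the log). A minimal sketch for checking that pair outside Impala; the hostname in the sample flag string below is a placeholder, and the commented commands assume the flag-file and keytab paths from the log:

```shell
# Pull the --principal value out of an Impala flag dump (format as in the
# log above); the hostname here is a placeholder, not the real cluster.
extract_principal() {
  grep -o -- '--principal=[^ ]*' | cut -d= -f2
}

echo '--mem_limit=80% --principal=impala/host.example.com@EXAMPLE.COM --v=1' \
  | extract_principal

# With the real flag file, verify the keytab can actually serve that
# principal -- a failure here reproduces the GSSAPI error outside Impala:
#   PRINC="$(extract_principal < /var/run/cloudera-scm-agent/process/804-impala-STATESTORE/impala-conf/state_store_flags)"
#   klist -kt /var/run/cloudera-scm-agent/process/804-impala-STATESTORE/impala.keytab
#   KRB5_TRACE=/dev/stderr kinit -kt /var/run/cloudera-scm-agent/process/804-impala-STATESTORE/impala.keytab "$PRINC"
```

The `KRB5_TRACE` run is the useful one: it surfaces the "minor code" detail that the empty parentheses in the SASL log lines omit.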
Labels: Apache Impala, Kerberos
03-30-2017
01:43 PM
I followed the same document, "Configure Solrcloud Deployment for Kerberos". I already had a SolrCloud deployed under /opt/lucidworks/... and want the existing SolrCloud infrastructure to store Ranger audits. I kerberized the SolrCloud following the same steps laid out in the document. Please note I am able to create collections using the Solr REST APIs and also using ambari-infra-solr-client from the command line by passing the JAAS file, but not able to do so when Ranger does all the pre-checks while restarting.
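One way to narrow this down is to hit the Collections API directly over SPNEGO, which is the same authentication path the pre-check client uses. A sketch, assuming a host:port of the external Solr (the host and port below are example values) and a valid ticket from `kinit`:

```shell
# Compose the Solr Collections API URL (host:port is an example value).
solr_admin_url() {
  printf 'http://%s/solr/admin/collections?action=%s' "$1" "$2"
}

solr_admin_url solr-host.example.com:8886 LIST

# After kinit as the same principal the restart pre-check runs under,
# this exercises the same SPNEGO handshake; a 401 here confirms the
# client-side Kerberos config is the problem, not Ranger itself:
#   curl --negotiate -u : "$(solr_admin_url solr-host.example.com:8886 LIST)"
```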
03-29-2017
01:53 PM
@Jonas Straub, any input on the above?
03-28-2017
10:37 PM
Hello, I am trying to push Ranger audits to an external kerberized SolrCloud. When I configure Ranger to audit to the external kerberized SolrCloud, I get a "401 authentication error". I am able to create collections using the Solr REST APIs, so I feel the configs are good on the Solr end. When Ranger is restarted, I see it uses ambari-infra-solr-client to talk to the external Solr cluster and fails with a 401 authentication error because the client is unable to authenticate itself. Running the same script and adding "-jf <path-to-jaas-conf>" enables me to create the collection from the command line. I am trying to see how I can achieve this via Ambari. Am I missing any configs on the Ranger end that would tell ambari-infra to pass the JAAS conf file? Below is the output when I run the command from the command line, passing the JAAS conf via the -jf parameter; it runs fine and the collection gets created: [root@hdl-n1 ~]# /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string hdl-n3.zalonilabs.com:2181,hdl-n2.zalonilabs.com:2181,hdl-n1.zalonilabs.com:2181/solr --create-collection -jf /etc/ambari-infra-solr/conf/infra_solr_jaas.conf --collection ranger_audits3 --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding
Using default ZkCredentialsProvider
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=hdl-n1.zalonilabs.com
Client environment:java.version=1.7.0_67
Client environment:java.vendor=Oracle Corporation
Client environment:java.home=/usr/java/jdk1.7.0_67/jre
Client environment:java.class.path=/usr/lib/ambari-infra-solr-client:/usr/lib/ambari-infra-solr-client/libs/commons-io-2.1.jar:/usr/lib/ambari-infra-solr-client/libs/junit-4.10.jar:/usr/lib/ambari-infra-solr-client/libs/woodstox-core-asl-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-log4j12-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/commons-lang-2.5.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-mapper-asl-1.9.13.jar:/usr/lib/ambari-infra-solr-client/libs/stax2-api-3.1.4.jar:/usr/lib/ambari-infra-solr-client/libs/log4j-1.2.17.jar:/usr/lib/ambari-infra-solr-client/libs/noggit-0.6.jar:/usr/lib/ambari-infra-solr-client/libs/objenesis-2.2.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-api-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/httpcore-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/easymock-3.4.jar:/usr/lib/ambari-infra-solr-client/libs/httpclient-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-cli-1.3.1.jar:/usr/lib/ambari-infra-solr-client/libs/solr-solrj-5.5.2.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-core-asl-1.9.9.jar:/usr/lib/ambari-infra-solr-client/libs/ambari-logsearch-solr-client-2.4.2.0.136.jar:/usr/lib/ambari-infra-solr-client/libs/zookeeper-3.4.6.jar:/usr/lib/ambari-infra-solr-client/libs/hamcrest-core-1.1.jar:/usr/lib/ambari-infra-solr-client/libs/httpmime-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/jcl-over-slf4j-1.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/commons-codec-1.8.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=2.6.32-642.el6.x86_64
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/root
Initiating client connection, connectString=hdl-n3.zalonilabs.com:2181,hdl-n2.zalonilabs.com:2181,hdl-n1.zalonilabs.com:2181/solr sessionTimeout=15000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@7a04c4aa
Waiting for client to connect to ZooKeeper
successfully logged in.
TGT refresh thread started.
Client will use GSSAPI as SASL mechanism.
TGT valid starting at: Sat Mar 25 13:29:52 EDT 2017
TGT expires: Sun Mar 26 13:29:52 EDT 2017
TGT refresh sleeping until: Sun Mar 26 09:01:47 EDT 2017
Opening socket connection to server hdl-n3.zalonilabs.com/10.11.13.168:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
Socket connection established to hdl-n3.zalonilabs.com/10.11.13.168:2181, initiating session
Session establishment complete on server hdl-n3.zalonilabs.com/10.11.13.168:2181, sessionid = 0x35af687cfb80050, negotiated timeout = 15000
Watcher org.apache.solr.common.cloud.ConnectionManager@7a53c84a name:ZooKeeperConnection Watcher:hdl-n3.zalonilabs.com:2181,hdl-n2.zalonilabs.com:2181,hdl-n1.zalonilabs.com:2181/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Watcher org.apache.solr.common.cloud.ConnectionManager@7a53c84a name:ZooKeeperConnection Watcher:hdl-n3.zalonilabs.com:2181,hdl-n2.zalonilabs.com:2181,hdl-n1.zalonilabs.com:2181/solr got event WatchedEvent state:SaslAuthenticated type:None path:null path:null type:None
Setting up SPNego auth with config: /etc/ambari-infra-solr/conf/infra_solr_jaas.conf
Using default ZkCredentialsProvider
Initiating client connection, connectString=hdl-n3.zalonilabs.com:2181,hdl-n2.zalonilabs.com:2181,hdl-n1.zalonilabs.com:2181/solr sessionTimeout=10000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@3b0f2591
Waiting for client to connect to ZooKeeper
Client will use GSSAPI as SASL mechanism.
Opening socket connection to server hdl-n1.zalonilabs.com/10.11.13.166:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
Socket connection established to hdl-n1.zalonilabs.com/10.11.13.166:2181, initiating session
Session establishment complete on server hdl-n1.zalonilabs.com/10.11.13.166:2181, sessionid = 0x15af687cf9e004e, negotiated timeout = 10000
Watcher org.apache.solr.common.cloud.ConnectionManager@54274d27 name:ZooKeeperConnection Watcher:hdl-n3.zalonilabs.com:2181,hdl-n2.zalonilabs.com:2181,hdl-n1.zalonilabs.com:2181/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Updating cluster state from ZooKeeper...
Watcher org.apache.solr.common.cloud.ConnectionManager@54274d27 name:ZooKeeperConnection Watcher:hdl-n3.zalonilabs.com:2181,hdl-n2.zalonilabs.com:2181,hdl-n1.zalonilabs.com:2181/solr got event WatchedEvent state:SaslAuthenticated type:None path:null path:null type:None
A collections change: [WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/collections], has occurred - updating...
A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/clusterstate.json], has occurred - updating... (live nodes size: [2])
Collection 'ranger_audits3' created.
Return code: 0 Thanks, Jagdish
Labels: Apache Ranger, Apache Solr
02-07-2017
01:06 PM
@Ajay @prsingh Ranger HDFS policies were configured, but since Hadoop ACLs are in place it should have let the falcon user in. As a workaround we tried a different path and that worked. Unable to replicate the issue again. I guess we can close this ticket for now. Thanks ...
02-03-2017
08:39 PM
ERROR from the Falcon application log ================= After installing Falcon I see a SERVICE UNAVAILABLE error on the Falcon UI. On restart of the Falcon service I see the error below, which shows hdfs:hdfs as the user and group ownership, so falcon cannot execute. 2017-02-03 20:10:46,874 INFO ipc.Server (Server.java:logException(2394)) - IPC Server handler 105 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 192.168.1.95:43132 Call#0 Retry#0: org.apache.hadoop.security.AccessControlException: Permission denied: user=falcon, access=EXECUTE, inode="/apps/falcon/extensions":hdfs:hdfs:drwx----- Below is the output of hadoop fs -ls ... [root@hdpkdc ~]# hadoop fs -ls /apps/falcon
Found 1 items
drwxr-xr-x - falcon users 0 2017-02-02 20:53 /apps/falcon/extensions
[root@hdpkdc ~]# hadoop fs -ls /apps/falcon/extensions
Found 4 items
drwxr-xr-x - falcon users 0 2017-02-02 20:53 /apps/falcon/extensions/hdfs-mirroring
drwxr-xr-x - falcon users 0 2017-02-02 20:53 /apps/falcon/extensions/hdfs-snapshot-mirroring
drwxr-xr-x - falcon users 0 2017-02-02 20:53 /apps/falcon/extensions/hive-mirroring
drwxrwx--- - falcon users 0 2017-02-02 20:53 /apps/falcon/extensions/mirroring Thanks, Jagdish
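The AccessControlException names an inode owned hdfs:hdfs with no permissions for others, so the falcon user (neither owner nor group member there) is decided by the "others" execute bit. A small sketch of that rule, plus the kind of fix that would apply on HDFS (the chown/chmod targets are an assumption based on the sibling directories above, which are falcon:users):

```shell
# Check whether a ls-style mode string grants execute to "others" --
# the bit that decides directory traversal for a non-owner like falcon.
others_can_execute() {
  case "$1" in
    *x) return 0 ;;   # last character is the others-execute bit
    *)  return 1 ;;
  esac
}

others_can_execute drwx------ && echo reachable || echo denied   # denied
others_can_execute drwxr-xr-x && echo reachable || echo denied   # reachable

# Hedged fix on HDFS (run as the hdfs superuser; ownership target assumed
# from the sibling falcon:users directories):
#   hdfs dfs -chown -R falcon:users /apps/falcon/extensions
#   hdfs dfs -chmod 755 /apps/falcon/extensions
# or, if ACLs are meant to grant access instead:
#   hdfs dfs -setfacl -R -m user:falcon:rwx /apps/falcon/extensions
```

Note the listing above shows falcon:users drwxr-xr-x while the error reports hdfs:hdfs:drwx-----, so it is worth confirming both commands are hitting the same NameNode/namespace.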
Labels: Apache Falcon
01-04-2017
10:09 PM
@rgangappa After restarting Ambari I saw the same issue, but now it works. I am still not sure why this happened.
12-22-2016
08:23 PM
Installing Ranger using the Ambari wizard gets stuck at the stage where masters are assigned. After hitting Next, the wizard does not move to the next screen. Ambari version: 2.4.2. 1. Which logs would provide details about the progress? I couldn't find much detail under the ambari-server logs.
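For the log question: the usual places to look on Ambari 2.4 are the server and agent logs below. These are the assumed default paths (adjust if your install relocated them); the loop just reports which of them exist on the current host:

```shell
# Assumed default Ambari 2.4 log locations -- adjust if relocated.
check_ambari_logs() {
  for f in /var/log/ambari-server/ambari-server.log \
           /var/log/ambari-agent/ambari-agent.log; do
    if [ -f "$f" ]; then echo "present: $f"; else echo "missing: $f"; fi
  done
}

check_ambari_logs
# While clicking Next in the wizard, follow the server log for stack traces:
#   tail -f /var/log/ambari-server/ambari-server.log
```

Since the wizard runs in the browser, the browser's JavaScript console is also worth checking when a screen fails to advance with nothing in the server log.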
Labels: Apache Ambari
08-30-2016
04:37 PM
@ssoldatov For some reason my syntax is not coming through properly. I did put in the ZooKeeper node, and I want to use a specific keytab.
08-30-2016
03:23 AM
@ssoldatov I guess I did not paste the syntax properly; below is the syntax I used. As the implementation is a standalone KDC, I am passing the keytab info in the connection string. /usr/hdp/current/phoenix-client/bin/sqlline.py <ZOOKEEPER-NODE>:2181:/hbase-secure:<USERNAME>@<REALM>:<KEYTAB PATH>
08-29-2016
07:12 PM
User tickets are valid and have been verified, but when the commands below are executed, the following error comes up. ================ [<username>@<hostname> ~]$ export HBASE_CONF_PATH=/etc/hbase/conf:/etc/hadoop/conf [<username>@<hostname> ~]$ /usr/hdp/current/phoenix-client/bin/sqlline.py <zookeeper-node>:2181:/hbase-secure:<user-principal>:<user keytab> =============== Mon Aug 29 13:44:54 CDT 2016, RpcRetryingCaller{globalStartTime=1472495354480, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Couldn't setup connection for <USERNAME>@<REALM> to hbase/<FQDN>@<REALM>
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3917)
at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:441)
at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:463)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:815)
... 31 more
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Couldn't setup connection for <USERNAME>@<REALM> to hbase/<FQDN>@<REALM>
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1533)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1553)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1704)
at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
... 35 more
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Couldn't setup connection for <USERNAME>@<REALM> to hbase/<FQDN>@<REALM>
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:50918)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1564)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1502)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1524)
... 39 more
Caused by: java.io.IOException: Couldn't setup connection for <USERNAME>@<REALM> to hbase/<FQDN>@<REALM>
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:665)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:637)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:745)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
... 44 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:734)
... 48 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:710)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
... 57 more
Caused by: KrbException: Fail to create credential. (63) - No service creds
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:282)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:456)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
... 60 more
sqlline version 1.1.8
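For the "No service creds" GSSException above, a minimal troubleshooting sketch from the client side. All principals, keytab paths, and hostnames below are placeholders, not values from this post:

```shell
# 1) Confirm the client actually holds a valid TGT; "No service creds"
#    usually means the TGT cannot be used to obtain a service ticket
#    for hbase/<FQDN>.
klist

# 2) Re-initialize credentials from the keytab if the TGT is missing
#    or expired (path and principal are examples).
kinit -kt /etc/security/keytabs/user.keytab user@EXAMPLE.COM

# 3) Explicitly request a service ticket for the HBase principal to
#    verify the KDC knows it.
kvno hbase/hbase-master.example.com@EXAMPLE.COM

# 4) Check that forward and reverse DNS agree for the master host;
#    Kerberos builds the service principal from the resolved FQDN.
host hbase-master.example.com
```

If step 3 fails while step 1 shows a valid TGT, the problem is typically the service principal (wrong FQDN, missing keytab entry on the server, or a realm/trust issue) rather than the client credentials.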
Labels: Apache HBase, Apache Phoenix
07-26-2016
04:47 AM
1 Kudo
@Jonas Straub: Thanks Jonas. The above issue turned out to be caused by a hard-coded value for the index lock in solr.in.sh, but I am also hitting the issue you described on one of the SolrCloud nodes. It makes sense to increase the sleep interval for the Solr graceful-shutdown process.
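The two knobs mentioned above can both live in solr.in.sh. This is a sketch only; the values are examples, not taken from this thread, and SOLR_STOP_WAIT is only honored by newer bin/solr scripts:

```shell
# Set the lock type once via SOLR_OPTS instead of hard-coding it per
# core, so HDFS-backed indexes use the hdfs lock factory.
SOLR_OPTS="$SOLR_OPTS -Dsolr.lock.type=hdfs"

# Give a stopping Solr instance more seconds to release its index
# locks before bin/solr force-kills the JVM (example value).
SOLR_STOP_WAIT=30
```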
03-25-2016
02:47 PM
@Jonas Straub Hello Jonas, thanks for the article; it documents the steps for installing the Ranger plugin for Solr well. I have tried to follow the steps but hit some hiccups, and I wanted to see if you can help me out here. As mentioned in the document, I created a policy for Solr, but I am unable to test the connection and get the error below. Also, after I changed the install.properties file, enabled the plugin, and restarted Solr, I do not see the plugin synced to the Ranger UI. Thanks, Jagdish Saripella
03-11-2016
01:22 PM
1 Kudo
Hello, we are testing Solr 5.2.1. The error below pops up when creating a collection with more than one shard and more than one replica. The process fails after it creates one shard because it is unable to remove the write lock created under /solr/index on HDFS.

745698 [OverseerThreadFactory-4-thread-1-processing-{node_name=<hostname>:8983_solr}] ERROR org.apache.solr.cloud.OverseerCollectionProcessor [Test ] – Error from shard: http://<hostname>:8983/solr
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://<hostname>:8983/solr: Error CREATEing SolrCore 'Test_shard1_replica1': Unable to create core [Test_shard1_replica1] Caused by: Index locked for write for core Test_shard1_replica1
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:218)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:183)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Thanks, Jagdish
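When a core dies without releasing its lock, the leftover write.lock has to be removed by hand before the CREATE can succeed. A sketch, assuming the index lives under /solr/index as described above (the exact core path is an example; verify no live Solr core still owns the index first):

```shell
# Inspect the index directory for a stale lock left by an unclean
# shutdown (core/path names are illustrative).
hdfs dfs -ls /solr/index/Test_shard1_replica1/data/index/

# Remove the stale lock, then retry the collection CREATE.
hdfs dfs -rm /solr/index/Test_shard1_replica1/data/index/write.lock
```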
Labels: Apache Solr
03-08-2016
07:25 PM
1 Kudo
@Neeraj Sabharwal @vpoornalingam @Artem Ervits Update: looking at Admin > Stack and Versions > Versions, I found that HDP (on one of the environments) was not finalized to 2.3. Finalizing the stack now picks up the Atlas software. Thanks all for your help.
03-04-2016
02:44 PM
1 Kudo
Hello, I have Atlas 0.5.0 deployed. I want to enable user login to the Atlas web portal. Is there a way I can enable this?
Labels: Apache Atlas
03-01-2016
02:38 PM
Neeraj, I was able to install Atlas using HDP 2.3.2 and Ambari 2.1.2 in one of the environments. @Neeraj Sabharwal
03-01-2016
12:40 AM
1 Kudo
@Artem Ervits Thanks for all your help. I tried as suggested, but that doesn't work either. Can you also point me to any documentation that shows how Ambari looks for these packages and pulls them into the UI? As mentioned in the original question, I am also trying to get Solr installed, but the latest version of Ambari does not show SOLR either, so it is an interesting situation and will give me a good opportunity to understand the workings of Ambari better.
02-29-2016
09:11 PM
@vpoornalingam & @Artem Ervits Restarted Ambari Server, but Atlas still does not show up.
02-29-2016
05:11 PM
1 Kudo
@vpoornalingam & @Artem Ervits

ENVIRONMENT 1: Atlas was installed through Ambari on this environment.
ls /var/lib/ambari-server/resources/stacks/HDP/2.3/services/
ACCUMULO ATLAS FALCON FLUME HBASE HDFS HIVE KAFKA KERBEROS KNOX MAHOUT OOZIE PIG RANGER RANGER_KMS SLIDER SPARK SQOOP stack_advisor.py stack_advisor.pyc STORM TEZ YARN ZOOKEEPER

ENVIRONMENT 2: This is where I am trying to install Atlas.
ls /var/lib/ambari-server/resources/stacks/HDP/2.3/services/
ACCUMULO ATLAS FALCON FLUME HBASE HDFS HIVE KAFKA KERBEROS KNOX MAHOUT OOZIE PIG RANGER RANGER_KMS SLIDER SPARK SQOOP stack_advisor.py STORM TEZ YARN ZOOKEEPER
02-29-2016
03:56 PM
3 Kudos
I have two questions, both of which seem similar to me.
1. I have Ambari 2.1.2 installed on two environments. This version of Ambari is missing the "Atlas" software under the "Add Service" option. I am not sure why this would happen.
2. On a different environment we have Ambari 2.1.0 installed, which shows the "SOLR" package, but on the higher version 2.1.2 "SOLR" is missing.
Thanks, Jagdish
Labels: Apache Ambari
02-22-2016
07:19 PM
1 Kudo
@Neeraj Sabharwal Sure, I was able to convince my team lead 🙂 to hold off on the POC for now. Thanks for your response.
02-09-2016
06:58 PM
1 Kudo
It's Atlas 0.5.0, the version shipped with HDP 2.3.0. Can you paste the link to the demo here? Thanks, Jagdish Saripella
02-09-2016
06:44 PM
3 Kudos
I am looking to delete tags from an existing Atlas instance. Neither Apache nor Hortonworks has comprehensive documentation on how to delete tags using the REST API (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_data_governance/content/ch_app_metadata_store_ref.html). Can you please help?
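A sketch of how this looks where the REST API supports it: later Atlas releases expose a DELETE method on an entity's traits resource (this may not exist in 0.5.0). The host, credentials, GUID, and trait name below are all placeholders:

```shell
# Assumption: the Atlas build exposes the
# entities/{guid}/traits/{traitName} resource for DELETE.
ATLAS_URL=http://atlas-host.example.com:21000     # placeholder host
GUID=00000000-0000-0000-0000-000000000000         # entity GUID from a search call
TRAIT=PII                                         # tag/trait to remove

curl -u admin:admin -X DELETE \
  "${ATLAS_URL}/api/atlas/entities/${GUID}/traits/${TRAIT}"
```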
Labels: Apache Atlas
01-27-2016
04:56 AM
1 Kudo
@Artem Ervits Using one of the online examples. Attached is the text data being uploaded (hbase-test.txt).

raw_data = LOAD '/user/u1448739/hbase_text.txt' USING PigStorage(',') AS (
    custno:chararray,
    firstname:chararray,
    lastname:chararray,
    age:int,
    profession:chararray);

STORE raw_data INTO 'hbase://test' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
    'test_data:firstname test_data:lastname test_data:age test_data:profession');

HBase table:

hbase(main):002:0> describe 'test'
DESCRIPTION ENABLED
'test', {NAME => 'test_data', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'} true
1 row(s) in 0.1910 seconds
01-27-2016
04:06 AM
1 Kudo
Using a Pig script to upload data. Below is the YARN application log:

2016-01-26 10:57:59,797 INFO [main-SendThread(am2rlccmrhdn04.r1-core.r1.aig.net:2181)] org.apache.zookeeper.ClientCnxn: Session establishment complete on server am2rlccmrhdn04.r1-core.r1.aig.net/10.175.68.14:2181, sessionid = 0x251c236ef7b0093, negotiated timeout = 30000
2016-01-26 10:57:59,924 INFO [main] org.apache.hadoop.hbase.mapreduce.TableOutputFormat: Created table instance for test
2016-01-26 10:57:59,951 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2016-01-26 10:58:00,413 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Number of splits :1
Total Length = 739
Input split[0]:
Length = 739
ClassName: org.apache.hadoop.mapreduce.lib.input.FileSplit
Locations:
-----------------------
2016-01-26 10:58:00,443 INFO [main] org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader: Current split being processed hdfs://dr-gfat/user/u1448739/hbase_text.txt:0+739
2016-01-26 10:58:00,570 INFO [main] org.apache.pig.data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
2016-01-26 10:58:00,657 INFO [main] org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map: Aliases being processed per job phase (AliasName[line,offset]): M: raw_data[1,11],raw_data[-1,-1] C: R:
2016-01-26 10:58:00,713 WARN [main] org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigHadoopLogger: org.apache.pig.builtin.Utf8StorageConverter(FIELD_DISCARDED_TYPE_CONVERSION_FAILED): Unable to interpret value [32, 97, 103, 101] in field being converted to int, caught NumberFormatException <For input string: "age"> field discarded
2016-01-26 10:58:00,730 INFO [main] org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x251c236ef7b0093
2016-01-26 10:58:00,733 INFO [main] org.apache.zookeeper.ZooKeeper: Session: 0x251c236ef7b0093 closed
2016-01-26 10:58:00,733 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn: EventThread shut down
2016-01-26 10:58:00,735 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:947)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:655)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:285)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:278)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2016-01-26 10:58:00,747 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
Labels: Apache HBase, Apache Pig
01-27-2016
04:01 AM
Thanks, I will try setting those MR properties through Hive. Below is a screenshot of the MR framework counters.
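For reference, MR properties can be set per-session from Hive before the query runs. The property names below are standard MRv2 settings, but the values are illustrative examples, not recommendations from this thread:

```shell
# Set MRv2 container and heap sizes for this Hive session only
# (values are examples; tune them to your cluster).
hive -e "
SET mapreduce.map.memory.mb=4096;
SET mapreduce.map.java.opts=-Xmx3276m;
SET mapreduce.reduce.memory.mb=8192;
SET mapreduce.reduce.java.opts=-Xmx6553m;
-- your query follows the SET statements
"
```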