Member since: 07-11-2016
Posts: 25
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2554 | 08-21-2017 04:57 AM
 | 2628 | 08-17-2017 01:13 AM
10-23-2017
10:30 AM
Hi,

When we run spark-submit on our Kerberos cluster, we get the error below.

```
spark-submit -v \
  --master yarn-client \
  --jars $myDependencyJarFiles \
  --conf spark.default.parallelism=12 \
  --conf spark.driver.memory=4g \
  --conf spark.executor.memory=4g \
  --conf spark.yarn.security.tokens.hive.enabled=false \
  --conf spark.yarn.security.tokens.hbase.enabled=false \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  --conf spark.kryoserializer.buffer.max=512m \
  --conf spark.yarn.appMasterEnv.MASTER=yarn-client \
  --conf spark.yarn.appMasterEnv.HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop \
  --conf spark.yarn.appMasterEnv.HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce \
  --conf spark.hadoop.fs.hdfs.impl.disable.cache=true \
  --keytab user.keytab \
  --principal user@abc.domain.com \
  --class com.test.spark.spark.testclass
```

Error:

```
17/10/23 15:54:48 WARN yarn.ExecutorDelegationTokenUpdater: Error while trying to update credentials, will try again in 1 hour
23-10-2017 15:54:48 GMT test-hourly ERROR - 17/10/23 15:54:48 ERROR util.Utils: Uncaught exception in thread main
23-10-2017 15:54:48 GMT test-hourly ERROR - java.lang.StackOverflowError
```

I have also tried running spark-submit both with and without --principal and --keytab, but the error was the same. Can anybody please help me: is the error related to using client mode on a Kerberos cluster, or is there a possible fix?

Thanks, Amit
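As a first sanity check (a minimal sketch, not a confirmed fix for the StackOverflowError): confirm that the keytab and principal from the command above actually yield a valid ticket on the submitting host.

```
# Obtain and inspect a TGT using the same keytab/principal passed to
# spark-submit (values taken from the command above).
kinit -kt user.keytab user@abc.domain.com
klist   # should show a non-expired krbtgt ticket for the realm
```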
09-19-2017
12:53 PM
Hi,

I ran kinit as a superuser who has access to the HDFS folders, and then created roles at various levels in Hive from beeline on my AD-based Kerberos cluster. I can see the roles in both Hive and Impala when I run "SHOW CURRENT ROLES". I created the roles with commands like:

```
GRANT ROLE cli_selectonly_su TO GROUP superuser;
GRANT SELECT ON DATABASE MyDB TO ROLE cli_selectonly_su;
```

1] With this in place, commands like SHOW TABLES work fine in Hive via beeline, and they also work fine if I set a specific role for that Hive session. In impala-shell I can see all the roles I created in Hive, but I cannot set a specific role for the Impala session; it seems to consider all the Hive-created roles, such as ReadonlyRole and ReadWriteRole. How do I restrict a user in Impala to specific permissions?

2] When I connect as a different Kerberos user (kinit anotheruser), open impala-shell -k, and run any command such as SHOW TABLES, I get the error below.

```
ERROR: AuthorizationException: User 'anotheruser@REALM' does not have privileges to access: dbanem.*
```

Requesting help with this, or please correct me if my understanding/implementation is wrong.

Thank you, Amit
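A hedged aside that may explain point 1]: Impala resolves Sentry roles from the user's group membership and, unlike Hive, does not support switching roles per session, so restricting a user comes down to controlling which groups (and therefore which roles) that user belongs to. A minimal sketch of how to inspect that mapping (group name taken from the GRANT above):

```
# Show the roles the current Kerberos user gets in Impala, and the
# roles granted to a particular group.
impala-shell -k -q "SHOW CURRENT ROLES"
impala-shell -k -q "SHOW ROLE GRANT GROUP superuser"
```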
- Tags:
- Impala sentry
09-11-2017
12:40 PM
Hi,

I have a cluster where I have already implemented TLS Level 3 encryption, and Kerberos authentication is enabled using an AD server. When I tried to enable Impala encryption as per the link "https://www.cloudera.com/documentation/enterprise/5-9-x/topics/impala_ssl.html", I started getting errors in Impala. I am really stuck on this and would be really thankful for any help.

Errors:

1] Sep 11, 7:01:24.332 PM INFO thrift-util.cc:111 TAcceptQueueServer: Caught TException: invalid sasl status
Sep 11, 7:01:24.332 PM INFO thrift-util.cc:111 SSL_shutdown: error code: 0

2] In impalad.INFO: Couldn't open transport for server:24000 (SSL_get_verify_result(), unable to get local issuer certificate)

3] RPC Error: No more data to read.

LOGS: statestored.INFO

```
Sep 11, 7:01:17.342 PM INFO authentication.cc:497 Registering impala/host@REALM, keytab file /var/run/cloudera-scm-agent/process/2375-impala-STATESTORE/impala.keytab
Sep 11, 7:01:17.843 PM INFO authentication.cc:803 Kerberos ticket granted to impala/host@REALM
Sep 11, 7:01:17.843 PM INFO authentication.cc:681 Using external kerberos principal "impala/host@REALM"
Sep 11, 7:01:17.843 PM INFO authentication.cc:1033 External communication is authenticated with Kerberos
Sep 11, 7:01:17.844 PM INFO init.cc:204 statestored version 2.7.0-cdh5.9.1 RELEASE (build 24ad6df788d66e4af9496edb26ac4d1f1d2a1f2c) Built on Wed Jan 11 13:39:25 PST 2017
Sep 11, 7:01:17.844 PM INFO init.cc:205 Using hostname: host
Sep 11, 7:01:17.845 PM INFO logging.cc:156 Flags (see also /varz are on debug webserver):
--catalog_service_port=26000 --load_catalog_in_background=false --num_metadata_loading_threads=16 --sentry_config=
--asm_module_dir= --disable_optimization_passes=false --dump_ir=false --opt_module_dir= --perf_map=false --print_llvm_ir_instruction_count=false --unopt_module_dir=
--abort_on_config_error=true --be_port=22000 --be_principal= --compact_catalog_topic=false --disable_kudu=false --disable_mem_pools=false --enable_accept_queue_server=true
--enable_process_lifetime_heap_profiling=false --heap_profile_dir= --hostname=host --keytab_file=/var/run/cloudera-scm-agent/process/2375-impala-STATESTORE/impala.keytab
--krb5_conf= --krb5_debug_file= --load_auth_to_local_rules=false --max_minidumps=9 --mem_limit=80% --minidump_path=/var/log/impala-minidumps/statestored --minidump_size_limit_hint_kb=20480
--principal=impala/host@REALM --redaction_rules_file= --max_log_files=10 --pause_monitor_sleep_time_ms=500 --pause_monitor_warn_threshold_ms=10000 --log_filename=statestored --redirect_stdout_stderr=true
--data_source_batch_size=1024 --exchg_node_buffer_size_bytes=10485760 --enable_partitioned_aggregation=true --enable_partitioned_hash_join=true --enable_probe_side_filtering=true --enable_quadratic_probing=true
--skip_lzo_version_check=false --parquet_min_filter_reject_ratio=0.10000000000000001 --max_row_batches=0 --runtime_filter_wait_time_ms=1000 --suppress_unknown_disk_id_warnings=false
--kudu_max_row_batches=0 --kudu_scanner_keep_alive_period_us=15000000 --kudu_scanner_keep_alive_period_sec=15 --kudu_scanner_timeout_sec=60 --pick_only_leaders_for_tests=false --kudu_session_timeout_seconds=60
--convert_legacy_hive_parquet_utc_timestamps=false --max_page_header_size=8388608 --enable_phj_probe_side_filtering=true --accepted_cnxn_queue_depth=10000 --enable_ldap_auth=false
--internal_principals_whitelist=hdfs --kerberos_reinit_interval=60 --ldap_allow_anonymous_binds=false --ldap_baseDN= --ldap_bind_pattern= --ldap_ca_certificate=
--ldap_domain= --ldap_manual_config=false --ldap_passwords_in_clear_ok=false --ldap_tls=false --ldap_uri= --sasl_path= --rpc_cnxn_attempts=10 --rpc_cnxn_retry_interval_ms=2000 --disk_spill_encryption=false --insert_inherit_permissions=false --datastream_sender_timeout_ms=120000 --max_cached_file_handles=0 --max_free_io_buffers=128 --min_buffer_size=1024 --num_disks=0 --num_remote_hdfs_io_threads=8 --num_s3_io_threads=16 --num_threads_per_disk=0 --read_size=8388608 --backend_client_connection_num_retries=3 --backend_client_rpc_timeout_ms=300000 --catalog_client_connection_num_retries=3 --catalog_client_rpc_timeout_ms=0 --catalog_service_host=localhost --cgroup_hierarchy_path= --coordinator_rpc_threads=12 --enable_rm=false --enable_webserver=true --llama_addresses= --llama_callback_port=28000 --llama_host= --llama_max_request_attempts=5 --llama_port=15000 --llama_registration_timeout_secs=30 --llama_registration_wait_secs=3 --num_hdfs_worker_threads=16 --resource_broker_cnxn_attempts=1 --resource_broker_cnxn_retry_interval_ms=3000 --resource_broker_recv_timeout=0 --resource_broker_send_timeout=0 --staging_cgroup=impala_staging --state_store_host=localhost --state_store_subscriber_port=23000 --use_statestore=true --s3a_access_key_cmd= --s3a_secret_key_cmd= --local_library_dir=/tmp --serialize_batch=false --status_report_interval=5 --max_filter_error_rate=0.75 --num_threads_per_core=3 --use_local_tz_for_unix_timestamp_conversions=false --scratch_dirs=/tmp --queue_wait_timeout_ms=60000 --max_vcore_oversubscription_ratio=2.5 --rm_mem_expansion_timeout_ms=5000 --rm_always_use_defaults=false --rm_default_cpu_vcores=2 --rm_default_memory=4G --default_pool_max_queued=200 --default_pool_max_requests=-1 --default_pool_mem_limit= --disable_pool_max_requests=false --disable_pool_mem_limits=false --fair_scheduler_allocation_path= --llama_site_path= --require_username=false --disable_admission_control=false --log_mem_usage_interval=0 --authorization_policy_file= --authorization_policy_provider_class=org.apache.sentry.provider.common.HadoopGroupResourceAuthorizationProvider --authorized_proxy_user_config= --authorized_proxy_user_config_delimiter=, --load_catalog_at_startup=false --server_name= --abort_on_failed_audit_event=true --abort_on_failed_lineage_event=true --audit_event_log_dir= --be_service_threads=64 --beeswax_port=21000 --cancellation_thread_pool_size=5 --default_query_options= --fe_service_threads=64 --hs2_port=21050 --idle_query_timeout=0 --idle_session_timeout=0 --lineage_event_log_dir= --local_nodemanager_url= --log_query_to_file=true --max_audit_event_log_file_size=5000 --max_lineage_log_file_size=5000 --max_profile_log_file_size=5000 --max_profile_log_files=10 --max_result_cache_size=100000 --profile_log_dir= --query_log_size=25 --ssl_client_ca_certificate=/opt/cloudera/security/x509/ssl_wildcard_full.pem --ssl_private_key=/opt/cloudera/security/x509/ssl_wildcard.key --ssl_private_key_password_cmd=/var/run/cloudera-scm-agent/process/2375-impala-STATESTORE/altscript.sh sec-0-ssl_private_key_password_cmd --ssl_server_certificate=/opt/cloudera/security/x509/ssl_wildcard_full.pem --statestore_subscriber_cnxn_attempts=10 --statestore_subscriber_cnxn_retry_interval_ms=3000 --statestore_subscriber_timeout_seconds=30 --state_store_port=24000 --statestore_heartbeat_frequency_ms=1000 --statestore_heartbeat_tcp_timeout_seconds=3 --statestore_max_missed_heartbeats=10 --statestore_num_heartbeat_threads=10 --statestore_num_update_threads=10 --statestore_update_frequency_ms=2000 
--statestore_update_tcp_timeout_seconds=300 --force_lowercase_usernames=false --num_cores=0 --web_log_bytes=1048576 --non_impala_java_vlog=0 --periodic_counter_update_period_ms=500
--enable_webserver_doc_root=true --webserver_authentication_domain= --webserver_certificate_file=/opt/cloudera/security/x509/ssl_wildcard_full.pem
--webserver_doc_root=/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/lib/impala --webserver_interface= --webserver_password_file= --webserver_port=25010
--webserver_private_key_file=/opt/cloudera/security/x509/ssl_wildcard.key --webserver_private_key_password_cmd=/var/run/cloudera-scm-agent/process/2375-impala-STATESTORE/altscript.sh sec-0-webserver_private_key_password_cmd
--webserver_x_frame_options=DENY --flagfile=/var/run/cloudera-scm-agent/process/2375-impala-STATESTORE/impala-conf/state_store_flags --fromenv= --tryfromenv= --undefok=
--tab_completion_columns=80 --tab_completion_word= --help=false --helpfull=false --helpmatch= --helpon= --helppackage=false --helpshort=false --helpxml=false --version=false
--alsologtoemail= --alsologtostderr=false --drop_log_memory=true --log_backtrace_at= --log_dir=/var/log/statestore --log_link= --log_prefix=true --logbuflevel=0 --logbufsecs=30
--logemaillevel=999 --logmailer=/bin/mail --logtostderr=false --max_log_size=200 --minloglevel=0 --stderrthreshold=4 --stop_logging_if_full_disk=false --symbolize_stacktrace=true --v=1 --vmodule=
Sep 11, 7:01:17.845 PM INFO init.cc:212 Physical Memory: 125.93 GB
Sep 11, 7:01:17.845 PM INFO webserver.cc:247 Document root: /opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/lib/impala
Sep 11, 7:01:18.128 PM INFO webserver.cc:331 Webserver started
Sep 11, 7:01:18.134 PM INFO statestored-main.cc:85 Enabling SSL for Statestore
Sep 11, 7:01:18.399 PM INFO thrift-server.cc:391 Command '/var/run/cloudera-scm-agent/process/2375-impala-STATESTORE/altscript.sh sec-0-ssl_private_key_password_cmd' executed successfully, .PEM password retrieved
Sep 11, 7:01:18.400 PM INFO thrift-server.cc:449 ThriftServer 'StatestoreService' started on port: 24000s
Sep 11, 7:01:24.332 PM INFO thrift-util.cc:111 SSL_shutdown: error code: 0
Sep 11, 7:01:24.332 PM INFO thrift-util.cc:111 TAcceptQueueServer: Caught TException: invalid sasl status
```

Please help.

Thank you, Amit
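For what it's worth, a sketch of how to reproduce the "unable to get local issuer certificate" failure outside Impala, using the CA bundle path visible in the flags above (assumes openssl is available on the statestore host):

```
# Check whether the certificate chain served on port 24000 verifies
# against the configured CA bundle; look for "Verify return code: 0 (ok)".
# Return code 20 corresponds to "unable to get local issuer certificate".
openssl s_client -connect host:24000 \
    -CAfile /opt/cloudera/security/x509/ssl_wildcard_full.pem < /dev/null
```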
- Tags:
- Impala encryption
08-31-2017
04:24 AM
I have the same issue: on a Kerberos-enabled cluster, impala-shell works fine on one machine but gives the same TTransportException error on another machine in the cluster. Can anybody please help with this?
08-24-2017
01:03 PM
Hi,

I am getting the error below in the Host Monitor log:

```
Monitor-HostMonitor throttling_logger ERROR ntpdc: ntpdc -np: not synchronized to any server
```

However, running ntpstat reports the host as synchronized:

```
ntpstat
synchronised to unspecified at stratum 11
   time correct to within 61 ms
   polling server every 64 s
```

Can anybody please help me with this?

Thanks, Amit
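One hedged observation: "synchronised to unspecified at stratum 11" is what ntpd typically reports when it is disciplined only by its own local clock rather than a real upstream server, which the Host Monitor check treats as not synchronized. A quick sketch to see what the monitor's probe sees:

```
# List ntpd's peers; a leading "*" marks the peer the daemon is actually
# synchronized to. No "*" (or only the 127.127.x.x local clock) matches
# the Host Monitor's "not synchronized to any server" error.
ntpdc -np
ntpq -pn   # equivalent view on builds where ntpdc is deprecated
```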
- Tags:
- HostMonitor
08-21-2017
06:19 AM
Hi, can anybody please help me with this? Thanks, Amit
08-21-2017
04:57 AM
I was able to resolve this issue by setting verify_cert_dir in /etc/cloudera-scm-agent/config.ini. I was missing the root certificate file, which I downloaded from the CA and added to the verify_cert_dir. I also ran the command below to verify the chain:

```
openssl verify -verbose -CAfile <(cat cert_intermediate_ca.pem thawte_root_ca.pem) hostname.pem
```

It gave me the message: hostname.pem: OK

Thanks, Amit
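A note for anyone hitting the same thing (hedged, based on how OpenSSL-style certificate directories generally work): verify_cert_dir expects the CA files to be named, or symlinked, by their OpenSSL subject hash. A sketch, reusing the file names from the post above:

```
cd /path/to/verify_cert_dir   # hypothetical location of verify_cert_dir
ln -s thawte_root_ca.pem "$(openssl x509 -hash -noout -in thawte_root_ca.pem).0"
# or hash-link every certificate in the directory in one go:
c_rehash .
```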
08-21-2017
04:53 AM
Hi,

I am trying to configure TLS Level 3 encryption using method B from the link below:

https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_sg_config_tls_agent_auth.html

I already have a wildcard certificate from a public CA.

Question:

1] Do I need to generate a CSR on each host and send each CSR to the CA (i.e., with 10 datanodes, 10 CSRs, one per node)? Or, since I have the wildcard public CA certificate, am I not required to do that, and can I jump directly to "Step 5: Export the private key from the Java keystore and convert it with OpenSSL for reuse by Agent" in the documentation?

Can you please confirm? I would be really thankful.

Thank you, Amit
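For reference, a hedged sketch of the conversion that "Step 5" performs, assuming the wildcard key pair already sits in a JKS keystore (all file names here are hypothetical):

```
# Convert the JKS keystore to PKCS12, then extract the private key in
# PEM form for the agent. -nodes leaves the key unencrypted, so protect
# the output file's permissions.
keytool -importkeystore \
    -srckeystore wildcard.jks -srcstoretype JKS \
    -destkeystore wildcard.p12 -deststoretype PKCS12
openssl pkcs12 -in wildcard.p12 -nocerts -nodes -out agent_key.pem
```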
- Tags:
- TLS Encryption
08-17-2017
01:21 AM
Hi All,

I implemented Level 1 TLS encryption, and it works. But after implementing Level 2 TLS encryption as per the steps in the link below, I started getting the errors shown.

https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_sg_config_tls_auth.html#topic_3

1. In the cloudera-scm-agent log:

```
[17/Aug/2017 07:24:50 +0000] 31094 MainThread agent ERROR Heartbeating to c018-srv1.e8sec.com:7182 failed.
Traceback (most recent call last):
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.9.1-py2.6.egg/cmf/agent.py", line 1346, in _send_heartbeat
    self.max_cert_depth)
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.9.1-py2.6.egg/cmf/https.py", line 132, in __init__
    self.conn.connect()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/M2Crypto-0.21.1-py2.6-linux-x86_64.egg/M2Crypto/httpslib.py", line 50, in connect
    self.sock.connect((self.host, self.port))
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/M2Crypto-0.21.1-py2.6-linux-x86_64.egg/M2Crypto/SSL/Connection.py", line 185, in connect
    ret = self.connect_ssl()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/M2Crypto-0.21.1-py2.6-linux-x86_64.egg/M2Crypto/SSL/Connection.py", line 178, in connect_ssl
    return m2.ssl_connect(self.ssl)
SSLError: certificate verify failed
```

2. In the cloudera-scm-server log:

```
2017-08-17 07:51:04,118 WARN 118674289@agentServer-169:org.mortbay.log: javax.net.ssl.SSLException: Received fatal alert: unknown_ca
```

I have tried using verify_cert_file as well as verify_cert_dir. Can anybody please tell me whether I am missing something, or what else needs to be done to fix this issue? I would be really thankful for any help.

Thank you, Amit
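A sketch for narrowing this down (not a confirmed fix): reproduce the agent's verification by hand against the CM server port, using the same CA file the agent's verify_cert_file points at (the CA path below is hypothetical).

```
# "Verify return code: 0 (ok)" means the chain is trusted by this bundle;
# anything else lines up with the agent's "certificate verify failed" and
# the server's "unknown_ca" alert.
openssl s_client -connect c018-srv1.e8sec.com:7182 \
    -CAfile /path/to/verify_cert_file.pem < /dev/null
```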
- Tags:
- TLS Encryption
08-17-2017
01:13 AM
This issue was resolved for me after I rebooted my CM machine. Thanks, Amit
08-16-2017
11:09 AM
Thanks, mbigelow. Yes, I have added the public CA certificate to the keystore, and I have given the cloudera-scm user full permission on the keystore files (cacerts, jssecacerts), the pki folder, the x509 folder, and the jks folder. I have validated the certificate using the commands:

```
openssl s_client -showcerts -connect hostname:443
keytool -list -v -keystore cacerts -alias
```

I have also validated that in the Cloudera agent process file /var/run/cloudera-scm-agent/process/1653-cloudera-mgmt-SERVICEMONITOR/cmon.conf I can see SSL entries such as:

```
<property>
  <name>scm.server.url</name>
  <value>https://hostname:7183</value>
</property>
<property>
  <name>com.cloudera.enterprise.ssl.client.truststore.location</name>
  <value>/usr/java/jdk1.7.0_67-cloudera/jre/lib/security/cacerts</value>
</property>
<property>
  <name>com.cloudera.enterprise.ssl.client.truststore.password</name>
  <value>changeit</value>
</property>
```

Regarding your point about the "correct hostname in certificate": do I need to verify anything else, apart from what I mentioned above? I would also be really thankful if you could suggest what else I can do to fix these errors.

Thanks, Amit
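On the hostname point, a small hedged sketch that checks what name(s) the served certificate actually carries, which is what clients compare against the host they dialed:

```
# Print the subject and any Subject Alternative Names of the certificate
# served on the TLS web port.
echo | openssl s_client -connect hostname:7183 2>/dev/null \
    | openssl x509 -noout -text \
    | grep -E -A1 "Subject:|Subject Alternative Name"
```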
08-16-2017
08:38 AM
Hi,

I have enabled TLS Level 1 encryption, and since then I have been getting a few errors in my logs, as below.

1] In cloudera-scm-server.log:

```
2017-08-16 14:56:56,261 INFO MainThread:com.cloudera.server.cmf.WebServerImpl: Cipher suite TLS_EMPTY_RENEGOTIATION_INFO_SCSV found. Allowing SSL/TLS renegotiations.
2017-08-16 14:56:56,288 INFO MainThread:com.cloudera.server.cmf.WebServerImpl: TLS web connections will use port: 7183
2017-08-16 14:56:56,292 INFO MainThread:com.cloudera.server.cmf.WebServerImpl: Plaintext web connections will use port: 7180
2017-08-16 14:56:56,337 INFO MainThread:com.cloudera.cmf.service.ServiceHandlerRegistry: Executing command GenerateCredentials BasicCmdArgs{args=[]}.
2017-08-16 14:56:56,337 INFO MainThread:com.cloudera.server.cmf.Main: Generating credentials (command 4481) at startup
2017-08-16 14:56:56,393 INFO WebServerImpl:com.cloudera.enterprise.JavaMelodyFacade: No JavaMelody class net.bull.javamelody.SessionListener: net.bull.javamelody.SessionListener
2017-08-16 14:56:56,479 ERROR ParcelUpdateService:com.cloudera.parcel.components.ParcelDownloaderImpl: Unable to retrieve remote parcel repository manifest
java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused to http://serverip:8000/manifest.json
    at com.ning.http.client.providers.netty.NettyResponseFuture.abort(NettyResponseFuture.java:297)
    at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:104)
    at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:399)
    at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:390)
    at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:352)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:409)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:366)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:282)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused to http://serverip:8000/manifest.json
    at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:100)
    ... 11 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:404)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:366)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:282)
    ... 3 more
2017-08-16 15:31:35,624 INFO 1922557741@scm-web-39:com.cloudera.server.web.cmf.AuthenticationFailureEventListener: Authentication failure for user: '__cloudera_internal_user__mgmt-EVENTSERVER-bdec96eb8ea18d0be431197fa05f0a3b' from CMhost
```

2] In cloudera-scm-agent.log:

```
ERROR Heartbeating to CMhostname:7182 failed. Connection refused
Traceback (most recent call last):
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.9.1-py2.6.egg/cmf/agent.py", line 1346, in _send_heartbeat
    self.max_cert_depth)
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/cmf-5.9.1-py2.6.egg/cmf/https.py", line 132, in __init__
    self.conn.connect()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/M2Crypto-0.21.1-py2.6-linux-x86_64.egg/M2Crypto/httpslib.py", line 50, in connect
    self.sock.connect((self.host, self.port))
  File "/usr/lib64/cmf/agent/build/env/lib/python2.6/site-packages/M2Crypto-0.21.1-py2.6-linux-x86_64.egg/M2Crypto/SSL/Connection.py", line 181, in connect
    self.socket.connect(addr)
  File "<string>", line 1, in connect
error: [Errno 111] Connection refused
ERROR [1646-cloudera-mgmt-HOSTMONITOR] Failed to update
```

3] In the Event Server log:

```
2017-08-16 13:28:30,475 ERROR com.cloudera.cmf.eventcatcher.server.EventCatcherService: Error starting EventServer
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/cloudera-scm-eventserver/v3/write.lock
```

Can anybody please help me with these? I have not been able to find a proper solution. Thank you in advance.

Thanks, Amit
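A hedged triage sketch: "Connection refused" (rather than a TLS handshake error) usually means nothing is listening on the target port at all, so it may be worth confirming the server ports before touching certificates (host name reused from the log above):

```
# Is the CM server actually listening on the agent and web TLS ports?
sudo netstat -tlnp | grep -E ':(7182|7183)'
# Exercise a TLS handshake against the web port without verifying certs.
curl -vk https://CMhostname:7183/ -o /dev/null
```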
08-13-2017
12:32 PM
Hi,

Below is my scenario.

1] We already have a trusted public CA certificate.

2] We have a new cluster where we want to implement Kerberos, and as a prerequisite we are enabling TLS encryption.

3] Since the cluster is new, I created the cacerts on the CM host under /usr/java/jdk1.7.0_67-cloudera/jre/lib/security/ with:

```
keytool -keystore cacerts -importcert -alias aliasname -file /certificatepath/cert_wildcard_full.pem
```

4] I used the steps mentioned in the link below:

https://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_create_deploy_certs.html

From that link I followed step[1] and step[2]. Since we already have the trusted public CA certificate, I did not follow step[3], step[4], or step[5] [please correct me if I am wrong], and then followed step[6], i.e., "Import the Certificate into the Keystore".

5] After doing this, I followed the steps at:

https://www.cloudera.com/documentation/enterprise/latest/topics/sg_cms_ssl.html

6] I also copied $JAVA_HOME/jre/lib/security/jssecacerts to all nodes of the cluster and created the cacerts on all nodes using the same command given in #3 above.

However, when I start the Cloudera server I consistently get the error below:

```
WARN 1671856335@scm-web-4:org.mortbay.log: javax.net.ssl.SSLHandshakeException: no cipher suites in common
```

Can anybody please help me with this? I have tried various things but have not been able to fix it, and because of it CM is not working. If I am missing anything or doing anything wrong in the steps above, can you please confirm the right steps to enable TLS with a trusted public CA?

Thank you very much in advance. Really looking forward to any assistance.

Thanks, Amit
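One hedged pointer on "no cipher suites in common": a classic cause is a server keystore that contains only a trusted certificate entry and no PrivateKeyEntry, leaving the server with no certificate-plus-key it can actually serve. A sketch for checking, reusing the paths and alias from step 3] above (the default store password is "changeit"):

```
cd /usr/java/jdk1.7.0_67-cloudera/jre/lib/security
# The keystore CM serves TLS from must show "Entry type: PrivateKeyEntry";
# importing only cert_wildcard_full.pem produces a trustedCertEntry.
keytool -list -v -keystore cacerts -alias aliasname | grep "Entry type"
```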
04-02-2017
10:54 PM
Hi,

It has been some time since Kudu 1.3.0 was released, which we have been eagerly waiting for, as it has the security features that will let us run Kudu in production with security implemented. I understand it may be difficult to give an exact date, but any timeline will certainly help us. Can you please suggest when we can expect the Kudu 1.3.0 parcel?

Thank you, Amit
03-30-2017
02:50 AM
Thanks, Lars, for your help. On Impala DELETE: is there any specific reason DELETE is not supported on Impala tables? Maybe I am missing something, but I do have the Kudu service installed on my cluster. In the Cloudera Impala configuration, what is the difference between setting the Kudu service and selecting none, given that Kudu queries work fine in both cases?

Thank you for your help. Thanks, Amit
03-29-2017
06:00 AM
Hi,
I had a cluster [CDH 5.8.2] in which I was using Impala and Kudu.
The Impala parcel was downloaded from http://archive.cloudera.com/beta/impala-kudu/parcels/latest/
I have upgraded this cluster to CDH 5.10 with Cloudera Manager 5.10.
Now, running a SELECT version() query on this upgraded cluster in Impala gives me:
impalad version 2.7.0-cdh5.10.0 RELEASE
However, the CDH 5.10 upgrade notes mention support for Impala 2.8, and I cannot find a parcel for Impala 2.8.
Also, running a DELETE command on an Impala table gives me the error below:
"ERROR: AnalysisException: Impala does not support modifying a non-Kudu table: default.impala_testtable"
Questions:
1. Can anybody suggest how I can upgrade to Impala 2.8? Is there a parcel for it, or is the one I am currently using the latest?
2. Since the DELETE command fails on a regular Impala table, what is the alternative for deleting data from an existing Impala table? (DELETE works fine on Kudu tables; see the sketch after this list.)
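For question 2, a hedged sketch of the usual workaround for HDFS-backed (non-Kudu) tables, which do not support row-level DELETE: rewrite the table keeping only the rows you want, then swap. Table and column names below are hypothetical.

```
# Keep everything except the rows to "delete", then swap the tables.
impala-shell -q "CREATE TABLE default.impala_testtable_keep AS
                 SELECT * FROM default.impala_testtable WHERE id != 42"
impala-shell -q "DROP TABLE default.impala_testtable"
impala-shell -q "ALTER TABLE default.impala_testtable_keep RENAME TO default.impala_testtable"
```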
Can anybody please help me with this?
Thanks,
Amit
01-09-2017
03:26 AM
Thanks a lot, mbigelow. LDAP auth worked for me. I was using an OpenLDAP ldap_baseDN like cn=test,ou=xyz,dc=impalabigdata,dc=net; when I changed it to ou=xyz,dc=impalabigdata,dc=net and used the JDBC connection as jdbc:hive2://server:21050/default;","test","ldappwd", it worked. Thanks a lot for providing the complete details, as these things are currently not clarified in the existing documentation.

Thanks, Amit
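For anyone else landing here, a small sketch of a quick credential check once LDAP auth is working (flags as in impala-shell's help: -l enables LDAP, -u sets the user; the password is prompted for):

```
impala-shell -l -u test -i server:21000
```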
01-07-2017
09:19 AM
Hi,

I need to implement a JDBC connection to Impala with a username and password [I assume the username is the host user the Impala daemon runs as, and the password is that user's password] for our client's cluster. As per my understanding, we do not need to make any specific change in the Impala server configs or on the server for this.

What I tried:

1. The cluster is without Kerberos.
2. I tried two types of drivers.

a] Driver type 1, jdbc:hive2 -
When I use the connection string DriverManager.getConnection("jdbc:hive2://server:21050/default;","hostuser","password"), it does not work at all and ends in a connection timeout.
When I use DriverManager.getConnection("jdbc:hive2://server:21050/default;auth=noSasl;"), it works and connects to Impala.

b] Driver type 2, jdbc:impala -
When I use DriverManager.getConnection("jdbc:impala://Server:21050/default;AuthMech=3;UID=hostuser;PWD=password"), it does not work at all and ends in a connection timeout.
When I use DriverManager.getConnection("jdbc:impala://Server:21050/default;AuthMech=0;"), it works and connects to Impala.

Questions:
1. To implement a username/password-based Impala JDBC connection, is it mandatory to have Kerberos, LDAP, or Sentry implemented?
2. If not, does using JDBC with Impala local user credentials need any additional parameters, any change at the server level, or any change to the local user's permissions?
3. Can you please provide an example of how to implement it, i.e., any server-level or client-level changes, in case my implementation is wrong?

Thank you in advance. Amit
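A hedged note on question 1: with AuthMech=3 the driver submits a username/password, which the Impala daemon only accepts when LDAP authentication has been enabled on the server side; without Kerberos or LDAP, only the unauthenticated noSasl/AuthMech=0 style works. A sketch of the relevant impalad startup flags (the flag names exist in impalad; the values here are hypothetical):

```
--enable_ldap_auth=true
--ldap_uri=ldap://ldap.example.com
--ldap_passwords_in_clear_ok=true   # testing only; prefer --ldap_tls=true
```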
09-13-2016
04:34 AM
Hi,

Whenever I try to install Cloudera using Path B, I always face an issue where the Cloudera Manager cluster setup does not create any of the directories such as the NameNode data directories, NodeManager local directories, DataNode data directories, Hive warehouse directory, etc., and because of this the setup fails. As a workaround, I create these directories manually on the cluster with the specified directory user, and then the setup works fine when I re-run it. Can anybody please suggest what could be the reason that the Cloudera Manager cluster setup is not creating these directories automatically?

Thanks, Amit
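For reference, a sketch of the manual workaround described above, using the stock CM default paths; the directory locations and owners here are assumptions and must match your role configuration:

```
# Hypothetical defaults: NameNode/DataNode data dirs and NodeManager
# local dirs, owned by the service users CM expects (adjust as needed).
mkdir -p /dfs/nn /dfs/dn /yarn/nm
chown -R hdfs:hadoop /dfs/nn /dfs/dn
chown -R yarn:hadoop /yarn/nm
```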
08-11-2016
05:24 AM
Hi Team Cloudera,

In the above case, issue #2 is resolved. However, for issue #1 we have observed that although we selected a custom database and provided the custom database details [PostgreSQL] on the installation page, Cloudera used the default embedded database for Oozie, Hue, Hive, and Reports Manager. We were able to fix issue #1 when we provided the credentials from db.mgmt.properties, which held the embedded DB credentials, and we found that the Oozie, Hue, Hive, and Reports Manager databases had been created in the embedded DB. This is not expected, since we provided the custom database details; it seems Cloudera Manager is not using them. Can you please help us with this? It is not only an installation issue; it also raises the puzzling question of what we can do for Cloudera Manager backups.

Thank you, Amit
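For context, a sketch of where those generated credentials live on the CM host (this is the standard location; adjust if your install differs):

```
# Show which DB host/name/user each management role (e.g. REPORTSMANAGER)
# was actually configured against.
grep -E "(host|name|user)" /etc/cloudera-scm-server/db.mgmt.properties
```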
08-03-2016
01:01 PM
Hi,

We are facing two issues, as below.

1] We set up Cloudera Manager (5.8.1) and CDH (5.8.0) using the Installation Path A method, but with a custom Postgres [9.4] database instead of the embedded database. During setup we were required to provide a username, database, and password for Oozie, Hue, Hive, and Reports Manager. Except for Reports Manager, all connections succeeded. For Reports Manager, with username/database 'rman', it gave the error "Logon denied for user/password. Able to find the database server and database, but logon request was rejected.", although we are able to connect to the same custom DB from a terminal with the same credentials. When we changed the username/database to something other than rman, e.g. rman2 [we created this DB], the connection test succeeded. But now, in the Cloudera Management Service, we are getting a process status error for Reports Manager, with this in the log:

```
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@71c0aadc -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (5). Last acquisition attempt exception: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "rman2"
```

2] When we downloaded, distributed, and activated the Kudu parcels (latest, or Kudu 0.9.0) from Cloudera Manager and tried to add Kudu from the 'Add a Service' option, the Kudu service option was not shown.

Can you please help us with these? We would really appreciate any help.

Thanks, Amit
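On issue 1], a hedged triage sketch: CM connects to Postgres over TCP, while a terminal psql often uses the local socket, and pg_hba.conf can treat the two differently. Reproducing the login the way CM makes it may show the difference (host/port below are hypothetical):

```
# Force a TCP connection like CM does; if this fails while a plain local
# "psql -U rman2" works, add/adjust an md5 "host" rule in pg_hba.conf for
# the CM hosts and reload Postgres.
psql -h cm-db-host -p 5432 -U rman2 -d rman2
```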
07-28-2016
02:05 AM
Hi,

We will be creating a Hadoop cluster with an edge node using Cloudera Manager 5.8. We will set up a Spark gateway, Impala daemon, HDFS gateway, Hive gateway, and Hue server on the edge node to communicate with the master nodes. We have a question about Kafka: we have set up Kafka brokers on 3 servers but not on the edge node, as that would be wrong in principle. How can we set up the Kafka client libraries on the edge node, given that we will be running consumers from the edge node as well as ad-hoc Kafka commands like [/usr/bin/kafka-topics --list --zookeeper IP:2181]? Do we need to simply copy the Kafka files [e.g. kafka-topics, kafka-console-consumer] under /usr/bin/ to the same location on the edge node?

Can you please help us with this?

Thanks, Amit
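A hedged sketch of one way to avoid copying files by hand: parcels are distributed to every host in the cluster on activation, so if the edge node is a cluster member the Kafka client scripts should already be present under the parcel directory and can be run from there (the path below assumes the standard parcel root):

```
/opt/cloudera/parcels/KAFKA/bin/kafka-topics --list --zookeeper IP:2181
/opt/cloudera/parcels/KAFKA/bin/kafka-console-consumer --zookeeper IP:2181 --topic mytopic
```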
07-13-2016
02:28 AM
Thank you, Vinithra. However, any idea how soon Kudu support will be available in Cloudera Director?
07-11-2016
10:48 AM
Hi,

We are using Cloudera Director 2.1 and trying to install CM 5.7.1. Below are the sections where we have added the Kudu parcel reference:

```
products {
    CDH: 5.7.1
    KAFKA: 2.0.0
    KUDU: 0.9.1
}

parcelRepositories: ["http://archive.cloudera.com/cdh5/parcels/5.7.1/",
                     "http://archive-primary.cloudera.com/kafka/parcels/2.0.0/",
                     "http://archive.cloudera.com/beta/kudu/parcels/0.9.1/"]

services: [HDFS, YARN, ZOOKEEPER, HIVE, HUE, OOZIE, SENTRY, SPARK_ON_YARN, KAFKA, KUDU]
```

along with the KUDU_TSERVER and KUDU_MASTER roles.

There are two main errors we get after running the command below:

```
sudo cloudera-director bootstrap-remote Bigdatacluster.conf --lp.remote.username=admin --lp.remote.password=admin --lp.remote.hostAndPort=localhost:7189
```

1] ClouderaManagerException{message="API call to Cloudera Manager failed. Method=RoleConfigGroupsResource.updateConfig", causeClass=class javax.ws.rs.BadRequestException, causeMessage="HTTP 400 Bad Request"} ----- We are not finding any reference for this error and need help with it.

2] Invalid role type(s) specified. Ignored during role creation: KUDU: KUDU_TSERVER,KUDU_MASTER ------ How can we set up Kudu as part of this config?

Can anybody please help us with this?

Thanks in advance, Amit