Member since
01-19-2017
3598
Posts
593
Kudos Received
359
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 119 | 10-26-2022 12:35 PM
 | 273 | 09-27-2022 12:49 PM
 | 341 | 05-27-2022 12:02 AM
 | 276 | 05-26-2022 12:07 AM
 | 467 | 01-16-2022 09:53 AM
01-17-2023
01:11 PM
@admin007 How are you trying to connect? Can you share the error? When you are connecting to an impalad running on the same machine, the prompt will reflect the current hostname:

$ impala-shell

If you are connecting to an impalad running on a remote machine, or an impalad listening on a non-default port (the default is 21000):

$ impala-shell -i some.other.hostname:port_number

Hope that starts the conversation.
01-12-2023
12:09 AM
I also want to say that a NodeManager restart, or a full restart of the YARN service, fixed the problem. But as you know, that isn't the right solution if it has to be done every time one of the NodeManagers dies.
11-17-2022
09:20 AM
I followed all the suggestions and tried all the steps, but when I run the command:

./kafka-console-producer.sh --broker-list host.kafka:6667 --topic cleanCsv --producer-property security.protocol=SASL_PLAINTEXT < /tmp/clean_csv_full.csv

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2022-11-17 17:06:35,693] WARN [Principal=null]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)

There are no issues with creating topics or anything else; I only get that error when trying to push the CSV in. Without Kerberos, everything uploads smoothly. Any help is really appreciated; thank you in advance, and I look forward to your reply.
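For reference, a console producer using SASL with Kerberos typically also needs a JAAS configuration passed to the JVM. A minimal sketch, assuming a keytab-based login (the keytab path and principal below are placeholders for illustration, not taken from the post):

```properties
# kafka_client_jaas.conf -- hypothetical keytab path and principal
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/kafka-client.keytab"
  principal="user@EXAMPLE.COM";
};
```

It is usually supplied via `export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf"` before launching the producer.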
11-14-2022
04:04 AM
Thanks for the response. I made the changes but am still getting the error.
11-12-2022
01:09 PM
@hassan-ki5 This looks like a typical CM database connection issue. Can you check and compare the entries in /etc/cloudera-scm-server/db.properties:

com.cloudera.cmf.db.type=[oracle/mysql/postgresql]
com.cloudera.cmf.db.host=localhost
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.setupType=EXTERNAL
com.cloudera.cmf.db.password=scm

Ensure the db.password, db.name, and db.user are correct. Since you seem to be running MySQL, can you check this page: CM using MySQL.
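As a quick sanity check, the file can be parsed and validated mechanically. A minimal sketch in Python (the required-key list mirrors the entries shown above; this is an illustration, not a Cloudera tool):

```python
# Sketch: parse a db.properties-style file and report which of the
# CM database settings are missing or empty.
REQUIRED_KEYS = [
    "com.cloudera.cmf.db.type",
    "com.cloudera.cmf.db.host",
    "com.cloudera.cmf.db.name",
    "com.cloudera.cmf.db.user",
    "com.cloudera.cmf.db.password",
]

def parse_properties(text):
    """Parse simple key=value lines, skipping comments and blanks."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            props[key.strip()] = value.strip()
    return props

def missing_db_settings(text):
    """Return the required CM DB keys that are absent or empty."""
    props = parse_properties(text)
    return [k for k in REQUIRED_KEYS if not props.get(k)]
```

Running `missing_db_settings(open("/etc/cloudera-scm-server/db.properties").read())` should return an empty list on a correctly populated file.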
11-09-2022
12:13 PM
Try changing the Hortonworks network settings: update the Ambari host port (i.e. 8080) to some other port.
11-06-2022
05:01 AM
Hi @varun_rathinam. Were you able to solve the above error by any chance? This may happen for any of the following reasons: (1) authentication failed due to invalid credentials with brokers older than 1.0.0, (2) a firewall is blocking Kafka TLS traffic (e.g. it may only allow HTTPS traffic), (3) a transient network issue.
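When TLS is in the picture, it can help to confirm the client is actually configured for it. A minimal sketch of a Kafka client properties file (the truststore path and password are hypothetical values for illustration):

```properties
# client-ssl.properties -- hypothetical truststore path and password
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
```

The broker's TLS port can also be probed directly, e.g. with `openssl s_client -connect broker-host:9093`, to see whether a handshake completes or the connection is dropped by a firewall.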
11-04-2022
03:36 PM
@lysConsulting Are you using the embedded DB? If not, can you log in to the HUE database from the CLI?
11-03-2022
12:26 AM
Hi Shelton, We have installed Apache Atlas 2.2.0 in a VM, and also installed the required components (ZooKeeper, Solr, HBase, and Kafka) in the same VM. We have an ADLS storage account and would like to integrate ADLS with Atlas. Can you please help me with this? Thanks, Venkat
10-26-2022
12:35 PM
@drewski7 Ranger plugins that use Ranger as an authorization module cache policies locally and use them for authorization.

The plugins also cache tags and periodically poll the tag store for changes. When a change is detected, the plugins update the cache. In addition, the plugins store the tag details in a local cache file, just as policies are stored in a local cache file. When the component restarts, the plugins will use the tag data from the local cache file if the tag store is not reachable.

At periodic intervals, a plugin polls the Ranger Admin to retrieve the updated version of the policies. The policies are cached locally by the plugin and used for access control. Policy evaluation and enforcement happen within the service process; the heart of this processing is the "Policy Engine", which uses a memory-resident cached set of policies.

Ranger takes 30 seconds to refresh policies (check the "Plugins" option in the Ranger UI), but you can change the refresh time: in Ambari UI -> HDFS -> Configs -> "Advanced ranger-hdfs-security" you can change the poll interval (refresh time). Geoffrey
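As a sketch, the HDFS plugin's poll interval is controlled by a property along these lines in the ranger-hdfs-security configuration (the 30000 ms value mirrors the 30-second default mentioned above; verify the exact property name in your Ranger version):

```properties
# Advanced ranger-hdfs-security -- policy poll interval in milliseconds
ranger.plugin.hdfs.policy.pollIntervalMs=30000
```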
10-11-2022
07:44 PM
@Profred As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in giving you a more accurate answer to your question. You can link this thread as a reference in your new post.
10-07-2022
01:58 AM
@imule, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
09-30-2022
02:42 AM
Thanks for your reply! I solved the problem. The /etc/security path had only read permission and no write permission. After I gave it write permission, the problem was solved and Ambari was able to create the keytab files there as desired. Although I had set full 777 permissions on /etc/security/keytabs, nothing happened until the parent path was writable.
09-21-2022
05:08 AM
@dmharshit The two SQL inserts are not identical: the latter only updates the password, while the former updates both the user and the password. Prior to your update, did you back up the table? Geoffrey
09-13-2022
10:56 PM
Hello @kunal_agarwal If you are using Knox Gateway, it may be the bug presented here. To fix it, you could apply changes to the rewrite rules of the yarnui service in the file ${KNOX_GATEWAY_HOME}/data/services/yarnui/2.7.0/rewrite.xml.
09-08-2022
07:04 PM
OK, thanks! I have already resolved this question. The fix was to check some .py files that contained Python 3 headers I had added before. Thank you very much.
09-05-2022
02:12 AM
Dear @araujo, Many thanks, yes. This configuration is already enabled and configured. Any recommendation for troubleshooting and investigating this Kudu Kerberos configuration issue on Cloudera Runtime 7.1.7 is highly appreciated. Thanks in advance.
09-01-2022
07:30 AM
What worked for me is:

sqoop import --connect jdbc:mysql://localhost:3306/classicmodels --username root --password hadoop --split-by id --m 1 --table customers --hive-import --driver com.mysql.jdbc.Driver

I just added --m 1.
08-11-2022
07:50 AM
Thanks Shelton for pointing this out; now I see where I was making the mistake. This resolved my issue.
08-09-2022
07:17 PM
@ligzligz May I ask whether you have found the cause by now? I have encountered the same problem and can't solve it. If you know the answer, I hope you can help me. Thank you.
06-09-2022
06:10 AM
@vkotsas First, you need to establish communication between the two hosts [server01 and server02]: ensure DNS can resolve, SSH is okay, and Kerberos cross-realm trust is working if the clusters are kerberized.

Next, copy hdfs-site.xml and core-site.xml from the Hadoop server01 to the NiFi server02. Note the paths, as you will need that information during the setup; from the NiFi perspective, core-site.xml and hdfs-site.xml contain all the necessary Hadoop connection information.

Use FetchHDFS and not GetHDFS, as the latter deletes the source. This skeleton procedure should help you fine-tune your HDFS copy from server01 to server02. Please let me know if you need more help. Geoffrey
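In NiFi, the copied files are then referenced from the HDFS processors' "Hadoop Configuration Resources" property, roughly like this (the paths below are placeholders for wherever you copied the files on the NiFi host):

```properties
# FetchHDFS / ListHDFS processor configuration -- example paths
Hadoop Configuration Resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
```

A common pattern is ListHDFS feeding FetchHDFS, so listing and fetching are separated and the source files are left in place.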
06-08-2022
04:08 AM
1 Kudo
Hi Andrea, Great to see that it has been found now and thanks for marking the post as answered. All the best, Miklos
06-06-2022
11:07 AM
@FLYs Can you have a look at this solution; it could be a classpath issue. Sqoop error
05-31-2022
08:56 AM
@RZ0 Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
05-30-2022
01:30 PM
What are the answers to this question as of 2022? From what I'm seeing, links to the repos for HDP 3.1.4 and earlier are behind a paywall as well. Is there no free/open-source version of HDP/CDP anymore? And will there never be one?
05-27-2022
06:23 AM
Please make sure the SSL certificates are created with the following settings: https://docs.cloudera.com/cfm/2.1.4/cfm-security/topics/cfm-security-tls-certificate-requirements-recommendations.html
05-27-2022
12:07 AM
Thank you!!!
05-26-2022
12:56 AM
Thanks @Shelton for your response. I have come across this link, but it is about EMR's integration. I am specifically seeking clarification on the Ranger APIs that handle authorization of the data in S3. I am not able to find a clear picture of this in order to begin the integration with Ranger.
05-26-2022
12:07 AM
@George-Megre First, the master nodes are not meant for launching tasks or for interactive use. Edge or gateway nodes are used to run client applications and cluster administration tools. Setup of an edge/gateway node is similar to any Hadoop node, except that no Hadoop cluster services run on the gateway/edge nodes; they are mere entry points and connection gateways to the master components like HDFS (NameNode), HBase, etc., provided you have installed the client libraries. In your case, I am sure you have the HBase client/gateway roles on the 3 nodes and not on the master nodes. The HBase client role gives you connectivity to HBase, but again, I don't see why you would want to initiate the HBase shell from the master node. Geoffrey