Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 546 | 06-04-2025 11:36 PM |
| | 1083 | 03-23-2025 05:23 AM |
| | 558 | 03-17-2025 10:18 AM |
| | 2098 | 03-05-2025 01:34 PM |
| | 1310 | 03-03-2025 01:09 PM |
11-05-2019
12:38 PM
@sow I am not using hive-import or trying to create a Hive table, but below is your code and I can see --target-dir in it. Thanks
11-05-2019
12:09 PM
@iamabug I think it's a misconfiguration. Can you see the difference between these two lines? The first one is your current entry; remove the ( = ) after listener.security.protocol.map and replace it with a colon ( : ) followed by a space:

listener.security.protocol.map=SASL_PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT [old]
listener.security.protocol.map: SASL_PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT [new]

Restart the brokers and let me know if you still encounter the problem.
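For reference, a minimal sketch of how you might confirm the corrected mapping after the restart; the config path, broker host, port and client properties file are assumptions, not values from this thread:

```bash
# Check the corrected entry in the broker config (path is an assumption,
# adjust for your Ambari/HDP layout):
grep 'listener.security.protocol.map' /etc/kafka/conf/server.properties

# After restarting the brokers from Ambari, confirm the SASL listener answers
# (broker host/port and the SASL client config file are placeholders):
/usr/hdp/current/kafka-broker/bin/kafka-broker-api-versions.sh \
  --bootstrap-server broker1.example.com:6667 \
  --command-config /tmp/client-sasl.properties
```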
11-05-2019
10:20 AM
@m4x1m1li4n I am wondering why your ZooKeeper is running on port 4181, as shown in the log; the default ZooKeeper port is 2181. Please check that, and after sorting it out restart the ZooKeeper ensemble.

2019-11-05 16:05:48,449 WARN org.apache.zookeeper.server.quorum.QuorumCnxManager: Cannot open channel to 2 at election address slave3.sysdatadigital.it/13.53.62.160:4181 java.net.ConnectException: Connection refused (Connection refused)

I have attached a screenshot of my ZooKeeper (see attached). Even with an ensemble I would still normally have FQDN:2181 entries: slave3.sysdatadigital.it:2181,slave1.sysdatadigital.it:2181,slave2.sysdatadigital.it:2181

Can you ensure your ZooKeeper ensemble is up and running? Further down in the log it also looks like you don't have an odd number of ZooKeepers; you need at least 3 to avoid the split-brain problem. Happy hadooping
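A minimal sketch of what a healthy three-node ensemble looks like and how to check it; the hostnames are taken from the log above, while the ports and paths are the usual defaults and may differ in your setup:

```bash
# zoo.cfg on every ZooKeeper node typically carries the default client port
# and one server.N line per ensemble member (shown here as comments):
#   clientPort=2181
#   server.1=slave1.sysdatadigital.it:2888:3888
#   server.2=slave2.sysdatadigital.it:2888:3888
#   server.3=slave3.sysdatadigital.it:2888:3888

# Four-letter-word health check -- "imok" means the server is serving:
echo ruok | nc slave3.sysdatadigital.it 2181

# Leader/follower status per node (HDP path; adjust if yours differs):
/usr/hdp/current/zookeeper-server/bin/zkServer.sh status
```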
11-05-2019
09:00 AM
1 Kudo
@Harpreet_Singh No, you will be fine with CentOS 7 or RHEL 7; there are no constraints. Just follow the official environment preparation instructions and don't overlook any step (a quick reminder sketch is below), and I am sure you will be okay.
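A minimal sketch of the usual pre-install checks on CentOS 7/RHEL 7, mirroring the official preparation steps; treat it as a reminder list under those assumptions, not a replacement for the guide for your exact HDP version:

```bash
# Time synchronisation
systemctl enable chronyd && systemctl start chronyd

# SELinux disabled (or permissive) during installation
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Firewall off while installing (re-enable with the required ports later)
systemctl stop firewalld && systemctl disable firewalld

# Raise open-file limits and disable transparent huge pages
ulimit -n 10000
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```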
11-05-2019
08:36 AM
1 Kudo
@neha_t_jain As your company has stringent regulations about the Java version, and unfortunately even HDP 3.1 hasn't been certified against Java 11, my best bet would be to take this golden opportunity to propose deploying the HDP cluster in the cloud (AWS or Azure) and to sell the advantages of the cloud offerings. This way the environment is isolated: the network admins can create VDIs [virtual desktops] in a specific VLAN, or VMs in the cloud, so users access it through RDP (remote desktop), and only once HDP/CDP is certified against Java 11 would access be given to users on the corporate network. I am assuming only select users are going to use the HDP cluster. HTH
11-04-2019
02:45 PM
@ There are so many moving parts in your config. To help investigate, could you share the files below? You should redact site-specific info. Apart from the info already given, can you also share your architecture: HDP version, cluster size, and the ZooKeeper and Kafka logs?

zookeeper_jaas.conf
kafka_server_jaas.conf
zookeeper.properties
server.properties

Are all other Kerberized components functioning normally? Please revert
11-04-2019
02:28 PM
1 Kudo
@Harpreet_Singh You are talking of hosts, yet you are installing a single-node cluster? If you have more than one node it's all the same, no issues; to avoid the tricky setup of passwordless SSH, run the following on all hosts including the Ambari server.

My assumptions:
You have successfully installed Ambari and it's running on port 8080
OS = CentOS 7 / RHEL 7 [same steps and procedure]
Database = MySQL [already installed and running]
root password = welcome1 [replace with your current MySQL root password]
Ambari_host = tokyo.com [please replace with the output of $ hostname -f ]

Execute the below as root on all the hosts. First ensure that your /etc/hosts entries are identical on all hosts, in the format IP FQDN ALIAS; if you have 4 hosts, all the /etc/hosts files should look like this:

192.168.0.10 host1.com host1
192.168.0.13 host2.com host2
192.168.0.14 host3.com host3
192.168.0.16 host4.com host4

Install the Ambari agent on all the hosts:

# yum repolist
# yum install ambari-agent

Edit /etc/ambari-agent/conf/ambari-agent.ini on all hosts: in the [server] section replace the hostname with the FQDN of the Ambari server, then save and exit.

[server]
hostname=tokyo.com [your Ambari host]
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30

Start the agent:

# ambari-agent start

Validate the Ambari agent status:

# ambari-agent status

It should be running; do the above on all hosts.

The components that are failing to install all need a database to hold their objects, such as the Hive Metastore. It's very simple, and here we go. I am assuming MySQL is already installed, since you have Ambari running. Make sure MySQL is enabled at boot and started:

# systemctl enable mysqld
# systemctl start mysqld

The script below can be used to create the hive, oozie, ranger and rangerkms databases plus users. You can simply use a text editor to replace, for example, all occurrences of hive with ranger and then with oozie, etc. (a small loop that automates this substitution is sketched at the end of this post). You will notice that for simplicity I have used the same name for the database, user and password; just copy and paste each time. As this is your first installation, you can harden the security once you become familiar with the process.

##################################
# As root user ... hive, oozie, ranger and rangerkms
##################################
mysql -u root -pwelcome1
CREATE DATABASE hive;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'tokyo.com' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'tokyo.com';
FLUSH PRIVILEGES;
quit;

############################################################
# Create the databases for the Druid and Superset metastores
############################################################
mysql -u root -pwelcome1
CREATE DATABASE druid DEFAULT CHARACTER SET utf8;
CREATE DATABASE superset DEFAULT CHARACTER SET utf8;
CREATE USER 'druid'@'%' IDENTIFIED BY 'druid';
CREATE USER 'superset'@'%' IDENTIFIED BY 'superset';
GRANT ALL PRIVILEGES ON *.* TO 'druid'@'%' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'superset'@'%' WITH GRANT OPTION;
commit;
quit;

After completing the above you should have the hive, oozie, ranger, rangerkms, druid and superset databases ready. From the Ambari UI the correct values should be picked up; you can test the connections and they should succeed. Note that the password field will be blanked out, which is normal (see the attached screenshot). Use existing databases, not NEW!

Once you have validated all of the above, you can start all the components from Ambari and they should come up successfully. Please revert. HTH
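As mentioned above, the per-service SQL only differs by name, so a small loop can do the substitution for you. This is a sketch that reuses the root password (welcome1) and Ambari host (tokyo.com) from the example; swap in your own values and tighten the grants once you are comfortable:

```bash
# Create one database and matching user per service; the service name is
# reused as user name and password, exactly as in the manual steps above.
for SVC in hive oozie ranger rangerkms; do
  mysql -u root -pwelcome1 <<EOF
CREATE DATABASE ${SVC};
CREATE USER '${SVC}'@'localhost' IDENTIFIED BY '${SVC}';
CREATE USER '${SVC}'@'%' IDENTIFIED BY '${SVC}';
CREATE USER '${SVC}'@'tokyo.com' IDENTIFIED BY '${SVC}';
GRANT ALL PRIVILEGES ON *.* TO '${SVC}'@'localhost';
GRANT ALL PRIVILEGES ON *.* TO '${SVC}'@'%';
GRANT ALL PRIVILEGES ON *.* TO '${SVC}'@'tokyo.com';
FLUSH PRIVILEGES;
EOF
done
```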
11-04-2019
10:09 AM
1 Kudo
@Harpreet_Singh Of course, since you have bought support, Cloudera engineers are ready to help you; Cloudera Support is at your service. There is a dedicated team of Cloudera professionals who can help you with everything from architecting to deploying your environment while implementing best practices, and they can also help you manage it and eventually do knowledge transfer if you plan to manage your clusters internally. Having said that, the Cloudera community is there to help you acquire technical knowledge and sort out technical problems in your dev environments or environments without critical data. Can you share the issues you encountered during your single-node installation? It should be easy and straightforward. Happy hadooping
11-04-2019
07:27 AM
1 Kudo
@sow Have you tried changing your --target-dir to /user/database/test/ with --m 1?

$ sqoop import -D yarn.app.mapreduce.am.staging-dir=/user/test/ --driver "com.microsoft.sqlserver.jdbc.SQLServerDriver" --connect "jdbc:sqlserver://ip:port;database=database;" --connection-manager "org.apache.sqoop.manager.SQLServerManager" --username <username> --password <password> --table 'tablename' --as-parquetfile --delete-target-dir --target-dir /user/test/ --m 1

When running a Hive import, the --target-dir argument controls where the data is stored temporarily before being loaded into the Hive table; it does not create the Hive table in that location. If you want to import into a specific directory, use --target-dir without the --hive-import argument and create a Hive table on top of the HDFS directory (see the sketch below). HTH
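A sketch of the two-step approach described above: import into an explicit HDFS directory without --hive-import, then define an external Hive table over that directory. The connection details, directory, table name and column list are placeholders, not values confirmed in this thread:

```bash
# Step 1: plain HDFS import -- no --hive-import, so --target-dir is the final location
sqoop import \
  --driver "com.microsoft.sqlserver.jdbc.SQLServerDriver" \
  --connect "jdbc:sqlserver://<ip>:<port>;database=<database>" \
  --connection-manager "org.apache.sqoop.manager.SQLServerManager" \
  --username <username> --password <password> \
  --table 'tablename' \
  --as-parquetfile \
  --delete-target-dir \
  --target-dir /user/database/test/ \
  -m 1

# Step 2: expose the imported Parquet files to Hive as an external table
# (match the column list to your source table)
beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "
  CREATE EXTERNAL TABLE IF NOT EXISTS test_tbl (id INT, name STRING)
  STORED AS PARQUET
  LOCATION '/user/database/test/';"
```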
11-03-2019
11:15 PM
@Ani73 I didn't see you locking the firewall rule to your IP as suggested, and you need to open the ports for the NameNode, Ranger, etc., so the best option is a range of ports, e.g. 80-9000, or just * (all ports), which is the better option if you lock the rule to your own IP. Please do that and revert; a sketch is below.
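In case it helps, a host-level sketch of what "lock it to your IP" can look like with firewalld on CentOS/RHEL; if your nodes sit behind a cloud security group/NSG instead, the equivalent rule would be set there. The source address and port range are placeholders:

```bash
# Allow TCP 80-9000 only from your own public IP, then reload the rules
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10/32" port port="80-9000" protocol="tcp" accept'
firewall-cmd --reload
```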