Member since 02-18-2016
121 Posts
18 Kudos Received
15 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 32 | 12-02-2019 06:47 AM
 | 89 | 11-28-2019 02:06 AM
 | 53 | 11-28-2019 12:47 AM
 | 44 | 11-25-2019 07:54 PM
 | 47 | 11-22-2019 02:06 AM
12-04-2019
01:58 AM
Hi @Ba As mentioned earlier, if you are copying and remounting the filesystem, then during this activity the NameNode must be in safe mode and not serving any operations. If you add a new mount to "dfs.name.data.dir", there is no need to perform any extra steps; only a restart of HDFS is required. No action is needed on the datanodes.
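A minimal sketch of the safe-mode sequence referenced above, assuming an HDFS client is available on the node (adjust paths and users to your environment):
$ hdfs dfsadmin -safemode enter      # stop the NameNode from accepting changes
$ hdfs dfsadmin -saveNamespace       # persist the current namespace to disk
# ... perform the copy/remount ...
$ hdfs dfsadmin -safemode leave      # resume normal operation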
12-04-2019
01:37 AM
Hi @Manoj690 You can clean the database and Ambari will take care of recreating the schema; you do not need to load it manually.
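A minimal sketch of cleaning the Ambari database, assuming MySQL and a database named "ambari" (names and credentials are examples only, use your own):
$ mysql -u root -p
mysql> DROP DATABASE ambari;
mysql> CREATE DATABASE ambari;
Then re-run ambari-server setup so Ambari can populate a fresh schema.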
12-04-2019
01:34 AM
Hi @BaluD If this is the NameNode mount point, then make sure the data is backed up properly before you move/delete the filesystem. There will be no impact as long as the mount point name stays the same. Just make sure the NameNode is not serving clients while you do this activity; it should be stopped. There is another option: add the new mount point in HDFS configs -> dfs.name.data.dir [as a comma-separated value]. Once you change the config and restart, the NameNode will start writing data to the new mount. Once you see that all data is written to the new mount (compare its size with the existing mount point), you can remove the old mount from dfs.name.data.dir and restart HDFS. Make sure you do the below first: 1. hdfs dfsadmin -safemode enter 2. hdfs dfsadmin -saveNamespace
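A hedged illustration of the comma-separated value mentioned above; the second path is only an example, use your actual new mount point:
dfs.name.data.dir = /hadoop/hdfs/namenode,/data01/hdfs/namenode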
12-04-2019
01:20 AM
1 Kudo
Hi @BaluD Below are the steps (a command sketch follows after the list):
1. Let's assume your existing data resides in /test [which is wrongly placed on "/"].
2. Create a new mount point, e.g. /data01 [for /dev/vdd1].
3. Mount the disk [which is wrongly mounted as of now] on the mount point created in step 2, i.e. /data01.
4. Copy the data from the existing location /test to /data01.
5. Once copied, verify the data exists and then remove /test.
6. Unmount /data01.
7. Create the mount point /test.
8. Mount the disk on the new mount point: mount /dev/vdd1 on /test.
Hope the steps are clear now.
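A minimal command sketch of those steps, assuming the device is /dev/vdd1 and the paths match the list above; verify everything against your actual layout before removing any data:
# umount /dev/vdd1                 # only if the disk is currently mounted elsewhere
# mkdir /data01
# mount /dev/vdd1 /data01
# cp -a /test/. /data01/           # copy preserving ownership and permissions
# diff -r /test /data01            # sanity-check the copy before deleting the source
# rm -rf /test
# umount /data01
# mkdir /test
# mount /dev/vdd1 /test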
12-04-2019
01:15 AM
Hi @Manoj690 Your database does not seem to be clean; it already has tables and a schema. You probably already ran the below script manually: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql Please confirm. If yes, then you need to clean the database and start/install via Ambari again.
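A quick, hedged way to confirm whether the schema already exists, assuming MySQL and a database named "ambari":
$ mysql -u ambari -p
mysql> USE ambari;
mysql> SHOW TABLES;
If tables are listed, the database is not clean.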
12-04-2019
01:09 AM
Hi @Ba The only way is to copy/move the data to the new mount point and then rename it; there is no shortcut to speed up the operation. Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
12-03-2019
11:37 PM
Hi @Manoj690 Is this resolved? If so, please accept the best answer. If you are still facing the issue, please share the error and we can check it out.
12-03-2019
11:35 PM
1 Kudo
Hi @BaluD This is more of a Unix question than a Hadoop one 🙂 Please try the below command, which ignores all mounts and only gives size details of the filesystems/dirs that reside under "/": for a in /*; do mountpoint -q -- "$a" || du -s -h -x "$a"; done Let me know if that helps.
12-03-2019
06:29 AM
Hi @Peruvian81 There is no such option in the Ambari UI. You can check the NameNode UI -> Datanodes tab and see if the block counts are increasing.
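Alternatively, a hedged command-line check, assuming an HDFS client and suitable privileges (e.g. the hdfs user):
$ hdfs dfsadmin -report
The per-datanode section shows DFS used and block counts, which should keep growing on the new node while replication is in progress.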
12-02-2019
06:47 AM
Hi @Peruvian81 Once you add a new datanode to the cluster and replication starts, you will see messages like the below in the datanode logs, which signify that the new node is finalizing blocks it has written as well as receiving blocks from source nodes as part of replication.
DataNode.clienttrace (BlockReceiver.java:finalizeBlock(1490)) - src: /<IPADDRESS>:45858, dest: /<IPADDRESS>:1019, bytes: 7526, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-646394656_1, offset: 0, srvID: 973c1ebc-7c88-4163-aea3-8c2e0f4f4975, blockid: BP-826310834-<IPADDRESS>-1480602348927:blk_1237811292_164146312, duration: 9130002
datanode.DataNode (DataXceiver.java:writeBlock(669)) - Receiving BP-826310834-<IPADDRESS>-1480602348927:blk_1237811295_164146315 src: /<IPADDRESS>:36930 dest: /<IPADDRESS>:1019
11-29-2019
01:38 AM
Hi @laplacesdemon Thank you for the response and the appreciation. I will be happy to contribute and share my experiences going forward. Thank you for accepting the answer.
11-28-2019
10:51 PM
@Manoj690 Can you remove the password from your previous comment, just to avoid exposing credentials? Also, can you share the commands you executed previously?
11-28-2019
08:11 PM
Hi @mike_bronson7 There is no generic or Hadoop-specific tool to monitor HDFS/Kafka disks. Most clients use the tooling that comes with their OS vendor or opt for third-party tools. There are multiple external tools through which you can achieve this; Nagios, OpsView and HP OMi are popular options I've seen. In our environment Kafka is used extensively and we have HP OMi and Prometheus installed for monitoring.
11-28-2019
07:57 PM
Hi @Koffi You can test this out; I tried it in the past and it worked for me. https://community.cloudera.com/t5/Support-Questions/How-to-Remove-all-External-Users-from-the-Ranger-Ranger/td-p/94987 Also do check https://issues.apache.org/jira/browse/RANGER-205 Hope that helps.
11-28-2019
03:28 AM
@Manoj690 Log in to MySQL and follow step 2 from the below link: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari/content/amb_using_ambari_with_mysql_or_mariadb.html Make sure you also grant permissions for your Ambari hostname (FQDN), as sketched below.
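A hedged sketch of the grants in question, assuming the Ambari DB user is "ambari"; the linked document has the authoritative statements, and the password and FQDN below are placeholders:
mysql> CREATE USER 'ambari'@'<AMBARI_HOST_FQDN>' IDENTIFIED BY '<password>';
mysql> GRANT ALL PRIVILEGES ON ambari.* TO 'ambari'@'<AMBARI_HOST_FQDN>';
mysql> FLUSH PRIVILEGES;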
11-28-2019
02:43 AM
From the Ambari node can you try: mysql -u <ambari_DB_username> <Ambari_DB_name> -h <DB_hostname> -P <Port> and check if you are able to successfully log in to the DB from the Ambari node.
11-28-2019
02:19 AM
Hi @Manoj690 Now the error is different:
"jdbc:mysql://xxxxx:3306/ambari" "ambari" "Password" com.mysql.jdbc.Driver
ERROR: Unable to connect to the DB. Please check DB connection properties.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Check the connection to your DB server from the Ambari node. Log in to the Ambari server and telnet: $ telnet <DB_HOST> 3306 Make sure iptables/SELinux are disabled. If you think the initial issue is resolved, please close this topic by accepting the correct reply on the thread. I will be happy to help you with the above issue. Please keep us posted.
11-28-2019
02:06 AM
Hi @Manoj690 Can you copy "/var/lib/ambari-server/resources/mysql-connector-java.jar" to "/usr/share/java/" and retry? Make sure you use the correct Java path for the Java version you are pointing to.
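A minimal sketch of the copy, plus re-registering the driver with Ambari (the setup flags assume a standard ambari-server installation):
$ cp /var/lib/ambari-server/resources/mysql-connector-java.jar /usr/share/java/
$ ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar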
11-28-2019
12:50 AM
Hi @Manoj690 Can you share the output of the below commands?
$ ls -ltr /usr/share/java/mysql.jar
$ find / -name mysql.jar
$ find / -name 'mysql-connector-java*.jar'
11-28-2019
12:47 AM
1 Kudo
Hi @laplacesdemon I agree with you; applications/third-party tools/components should definitely be installed outside the cluster, or on a separate new node, to avoid major performance impacts. How to manage those components when the Hadoop version changes is really more of a DevOps question, I feel. You always need to keep an inventory of the applications running alongside your ecosystem components, together with their dependencies. You can also use Nexus as a centralized repository to fetch the new versions that need to be deployed on the application side [i.e. Oracle Data Integrator and Jupyter Hub] with the help of Jenkins or another deployment tool. In my experience, resource-related problems appear when applications are installed on edge nodes, so I would suggest that is not a good idea. Do revert if you have further points to highlight.
11-27-2019
06:48 PM
Hi @hdpmx My suggestion is to always place master components on master nodes, not alongside worker components [like datanodes, nodemanagers, Kafka brokers, etc.]. I would also suggest referring to the basic requirements for the Oozie service, which will help you plan the master workload accordingly. Please refer to this link: https://docs.cloudera.com/documentation/enterprise/release-notes/topics/hardware_requirements_guide.html#concept_ukj_yn1_jbb
11-27-2019
06:36 PM
Hi @baggysaggyDHS Yes, you can definitely view this info in your login portal. Please log in to https://sso.cloudera.com/ Your company must already have these credentials; if not, you can register on that link with the official address registered with the account, and you will receive a password reset link. Navigate to https://my.cloudera.com/account/applications where you can see the list of all active applications you have support for. For details regarding supported servers/clusters, please open a support case from the portal. If anyone from your team has already registered the environment details in the past, then you can see the details of the supported cluster in the "Assets" section while you create a new case, as shown below. Hope that helps.
11-27-2019
06:24 PM
Hi @Argin We will be happy to address your concern and guide you to resolve your issue, but may we ask if there is a specific reason you want to unsubscribe? Apart from that, you can follow the below steps to unsubscribe your account. You are probably receiving emails at the email address you registered for your account. If you open any one of those emails, at the bottom you will see the following: Click on "Manage your subscriptions" and it will take you to the next screen below. Click on "Check all" and "Delete Selected Subscriptions" if you do not want to receive any emails. Also click on the "Notifications" tab and take the appropriate actions as below. Finally, click on "PERSONAL" and check the box as below.
11-26-2019
10:50 PM
Hi @Manoj690 By the way, is the schema missing? Why are you creating it manually? Anyhow, you can run this via the MySQL CLI as below (a sketch follows): log in to MySQL using the Ambari user and password, create/switch to the DB, and run the script. Please check the below link for details: https://community.cloudera.com/t5/Support-Questions/Ambari-Mysql-setup/td-p/116774
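A minimal sketch of those steps, assuming MySQL, an "ambari" database user, and the stock Ambari DDL script path (adjust names to your setup):
$ mysql -u ambari -p
mysql> CREATE DATABASE ambari;
mysql> USE ambari;
mysql> SOURCE /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;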
11-26-2019
08:18 PM
Hi @China Cloudera CDH 6.x is production ready. But before you proceed to deploy it in production, do check the version-specific release notes, in particular the "Known Issues and Limitations" sections, to make sure the release meets your requirements. The relevant links are:
https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_requirements_supported_versions.html
CDH 6.0.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_610_known_issues.html
CDH 6.1.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_61_release_notes.html#cdh61x_release_notes
CDH 6.2.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_62_release_notes.html#cdh62x_release_notes
CDH 6.3.x - https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_63_release_notes.html#cdh63x_release_notes
11-26-2019
07:25 PM
Hi @rvillanueva As you highlighted, the two screenshots/settings shown for AD/LDAP within Ranger differ. Please check below.
Ranger Authentication for the Web UI: The above screenshot describes how to configure the authentication method that determines who is allowed to log in to the Ranger web interface. So if you integrate Ranger with either LDAP or AD, users from LDAP or AD can log in to the Ranger Web UI with their respective credentials. These settings are configured via Ambari as below: Ambari Login -> Services -> Ranger -> Configs -> Advanced -> "Ranger Settings"
Ranger Authentication for UNIX: The above setting configures Ranger to use Unix for user authentication. This means users integrated from AD/LDAP can be configured within new/existing policies [within existing repositories, e.g. HDFS, YARN] and access policies can be defined for those users, as shown in the screenshot below. If AD/LDAP is not integrated for Ranger user sync, the users will not be fetched/displayed in the above "Select User" field. These settings are configured as: Ambari Login -> Services -> Ranger -> Configs -> "Ranger User Info"
Let me know if that clears up the difference.
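As a hedged pointer, the Web UI authentication method discussed above usually corresponds to a single Ranger property surfaced in those Ambari settings (the exact name and allowed values can vary by HDP version):
ranger.authentication.method = UNIX | LDAP | ACTIVE_DIRECTORY | NONE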
11-26-2019
06:41 PM
Hi @Manoj690 Please ignore the previous command. Can you confirm whether the znode for HiveServer2 is created in ZooKeeper? Please run the below command from the Ambari server node:
/var/lib/ambari-server/ambari-sudo.sh su hive -l -s /bin/bash -c 'hive --config /usr/hdp/current/hive-server2/conf/ --service metatool -listFSRoot'
Check the Ambari server and HiveServer2 logs for any errors and paste the latest error here. Make sure the permissions on the below directory are correct:
# ls -ld /var/run/hive/
drwxr-xr-x 2 hive hadoop 60 Nov 20 07:18 /var/run/hive/
11-26-2019
07:37 AM
@ManuelCalvo We already checked with the network team and they reported no issues. How should we debug this further? We have currently enabled debug for the worker logs and relaunched the topologies. Any more suggestions?
11-26-2019
02:39 AM
Hi @Manoj690 Can you give it a retry and check if it works? If not, log in to ZooKeeper from the CLI and check whether the znode for HiveServer2 has been created, e.g.:
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <ZK-server>
> ls /
If the znode is not created, then please try running the below command once:
/usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/lib/ambari-agent/DBConnectionVerification.jar:/usr/hdp/current/hive-server2/lib/mysql-connector-java.jar org.apache.ambari.server.DBConnectionVerification 'jdbc:mysql://gaian-lap386.com/ambari1' ambari1 [PROTECTED] com.mysql.jdbc.Driver
You will need to modify the above command, replacing "[PROTECTED]" with the actual password.
11-25-2019
10:45 PM
Hi @ManuelCalvo Below is the output. [Note: for security reasons I have changed the topic name below to test_c1] Describe output:
Topic:test_c1_prv_input_client_instruction PartitionCount:8 ReplicationFactor:4 Configs:retention.ms=604800000,retention.bytes=1073741824
Topic: test_c1_prv_input_client_instruction Partition: 0 Leader: 1001 Replicas: 1001,1003,1006,1004 Isr: 1003,1006,1004,1001
Topic: test_c1_prv_input_client_instruction Partition: 1 Leader: 1005 Replicas: 1005,1006,1004,1001 Isr: 1005,1006,1001,1004
Topic: test_c1_prv_input_client_instruction Partition: 2 Leader: 1007 Replicas: 1007,1004,1001,1005 Isr: 1007,1004,1001,1005
Topic: test_c1_prv_input_client_instruction Partition: 3 Leader: 1008 Replicas: 1008,1001,1005,1007 Isr: 1008,1007,1001,1005
Topic: test_c1_prv_input_client_instruction Partition: 4 Leader: 1002 Replicas: 1002,1005,1007,1008 Isr: 1007,1002,1008,1005
Topic: test_c1_prv_input_client_instruction Partition: 5 Leader: 1003 Replicas: 1003,1007,1008,1002 Isr: 1003,1007,1008,1002
Topic: test_c1_prv_input_client_instruction Partition: 6 Leader: 1006 Replicas: 1006,1008,1002,1003 Isr: 1003,1006,1008,1002
Topic: test_c1_prv_input_client_instruction Partition: 7 Leader: 1004 Replicas: 1004,1002,1003,1006 Isr: 1003,1006,1004,1002
3. I tried with the console consumer and I am able to fetch data. I see the issue occurred only at that point in time.