Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 616 | 06-04-2025 11:36 PM |
| | 1182 | 03-23-2025 05:23 AM |
| | 585 | 03-17-2025 10:18 AM |
| | 2192 | 03-05-2025 01:34 PM |
| | 1376 | 03-03-2025 01:09 PM |
12-08-2017
08:35 AM
@Kumar Tg I have the same setup on a single-node cluster. I think your atlas.graph.index.search.solr.zookeeper-url should be changed to point at infra-solr. I logged in to my ZooKeeper to check the parameters; the vertex_index is accessible under /infra-solr:

$ bin/zkCli.sh
.....
[zk: localhost:2181(CONNECTED) 0] ls /
[hive, registry, cluster, controller, brokers, zookeeper, infra-solr, hbase-unsecure, kafka-acl, kafka-acl-changes, admin, isr_change_notification, templeton-hadoop, hiveserver2, controller_epoch, druid, rmstore, hbase-secure, ambari-metrics-cluster, consumers, config]
[zk: localhost:2181(CONNECTED) 2] ls /infra-solr/collections
[vertex_index, edge_index, fulltext_index]

Parameter to change:

atlas.graph.index.search.solr.zookeeper-url=ZOOKEEPER_HOSTNAME:2181/infra-solr

Then try restarting Atlas. Please let me know.
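To double-check the value Atlas actually picked up after the change, you can grep its runtime config (a minimal sketch; /etc/atlas/conf is the usual HDP location but may differ on your install, and the hostname value shown is a placeholder):

# grep zookeeper-url /etc/atlas/conf/atlas-application.properties
atlas.graph.index.search.solr.zookeeper-url=ZOOKEEPER_HOSTNAME:2181/infra-solr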
12-06-2017
04:46 PM
@Mark Nguyen The 3.0.0 GA is expected on 2017-12-23, but more realistically it will land sometime in 2018: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
12-06-2017
01:22 AM
1 Kudo
@Mark Nguyen Unfortunately, if your cluster is managed by Ambari you can't independently upgrade the components, and doing so would make future upgrades a nightmare. HDP stacks are rigorously tested and certified together as a bundle, and Ambari currently only supports upgrading the entire HDP stack together. However, patch upgrades are planned for 3.0, which would allow upgrading a single component of the stack. Hope that answers your question.
12-06-2017
01:14 AM
@sbx_hadoop A lab or dev environment should NEVER be in archive log mode!!! What version is your Oracle database? Maybe some old hacks could still work. Do you have command-line access to the Oracle server? If so, can you locate the initxxx.ora or spfile, usually in $ORACLE_HOME/dbs, and check your DB name to make sure you log on to the correct database? Linux boxes usually host many databases.

Check whether you can switch to the user running the database process. This should give you the Oracle user:

$ ps -ef | grep oracle

or

$ ps -ef | grep pmon

Run the below to set the correct variables from the values you got above:

export ORACLE_HOME=/path/to/oracle_home
export ORACLE_SID=xxxx

Then, as root, switch to this user so that you have all the Oracle variables in your path. You MUST log on as sysdba; that's the only internal user allowed to access the database in such situations. E.g., if the owner/user is oradba:

# su - oradba

Now you can invoke sqlplus:

$ sqlplus /nolog
SQL> conn / as sysdba
SQL> archive log list;

Check the archive destination and delete all the logs, then:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;
SQL> archive log list;

You should see "Automatic archival Disabled". Now you can proceed with restarting your Ambari server.
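If RMAN is available, a cleaner way to purge the archived logs than deleting the files by hand (a sketch, run as the same Oracle OS user; crosscheck first so RMAN's catalog matches what is actually on disk):

$ rman target /
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all;
RMAN> exit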
12-05-2017
11:03 AM
1 Kudo
@Sedat Kestepe Can you delete the entry in ZooKeeper and restart?

# locate zkCli.sh
/usr/hdp/2.x.x.x/zookeeper/bin/zkCli.sh
# /usr/hdp/2.x.x.x/zookeeper/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 8] ls /hadoop-ha/

You should see something like:

[zk: localhost:2181(CONNECTED) 8] ls /hadoop-ha/xxxxx

Delete the HDFS HA config entry:

[zk: localhost:2181(CONNECTED) 1] rmr /hadoop-ha

Validate that there is no hadoop-ha entry:

[zk: localhost:2181(CONNECTED) 2] ls /

Then restart all the HDFS service components. This will create a new ZNode with the correct lock (of the failover controller). Please let me know if that helped.
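After the restart, a quick sanity check that the failover controllers recreated the lock (a sketch; xxxxx stands for your nameservice name, as above):

[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha
[xxxxx]
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha/xxxxx
[ActiveBreadCrumb, ActiveStandbyElectorLock]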
12-05-2017
12:47 AM
@Michael Bronson Can you instead do:

# grep namenodes /etc/hadoop/conf/hdfs-site.xml

Then get the values of the parameter dfs.ha.namenodes.xxxx. Please let me know.
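For illustration, on a cluster whose nameservice is hdfsha the grep would return something like the following (nn1 and nn2 are typical serviceIds, not necessarily yours; -A1 prints the value line below the match):

# grep -A1 dfs.ha.namenodes /etc/hadoop/conf/hdfs-site.xml
<name>dfs.ha.namenodes.hdfsha</name>
<value>nn1,nn2</value>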
12-04-2017
11:46 PM
@Michael Bronson Did you run your command as root instead of hdfs?

[root@master01 hdfs]# hdfs haadmin -getServiceState master02
Illegal argument: Unable to determine service address for namenode 'master02'

To validate master02, can you check the names of the namenodes in hdfs-site.xml?
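Another way to list the HA serviceIds without opening the XML (a sketch, assuming the nameservice is hdfsha as elsewhere in this thread):

$ hdfs getconf -confKey dfs.ha.namenodes.hdfsha
nn1,nn2
$ sudo -u hdfs hdfs haadmin -getServiceState nn1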
12-04-2017
11:40 PM
@Michael Bronson The commands failed because the namenodes were already dead. The ls /hadoop-ha/hdfsha is now responding correctly. An election issue means ZooKeeper can't put one namenode in Active and the other in Standby; they both remain in Standby. Can you restart the namenodes, then get their status quickly with one command:

$ hdfs haadmin -getAllServiceState

Check the health:

$ hdfs haadmin -checkHealth <serviceId>

There is a difference between your command and the one I posted; please use the correct serviceId:

$ hdfs haadmin -transitionToActive <serviceId> --forceactive

and yours:

$ hdfs haadmin -transitionToActive master01 --forceactiv
transitionToActive:
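Putting it together, the sequence would look like this (a sketch; the hostnames and the nn1/nn2 serviceIds are illustrative placeholders, not values from your cluster):

$ hdfs haadmin -getAllServiceState
master01.example.com:8020                          standby
master02.example.com:8020                          standby
$ hdfs haadmin -transitionToActive nn1 --forceactive
$ hdfs haadmin -getServiceState nn1
active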