Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 849 | 06-04-2025 11:36 PM |
|  | 1429 | 03-23-2025 05:23 AM |
|  | 716 | 03-17-2025 10:18 AM |
|  | 2571 | 03-05-2025 01:34 PM |
|  | 1680 | 03-03-2025 01:09 PM |
12-15-2017
06:49 PM
@Abhishek Reddy Chamakura Can you tell me the OS type and version? I want to quickly reproduce your steps with Ambari 2.5.0.3 and HDP-2.3.4.0-175. Could you also send the links you used for the downloads?
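If it helps, here are the commands I would use to capture that information on a node; this is a minimal sketch assuming a RHEL/CentOS box (the release file differs on other distributions):
$ cat /etc/redhat-release
$ uname -r
$ ambari-server --version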
12-14-2017
10:30 PM
@Abhishek Reddy Chamakura Is there a specific reason you are installing the old HDP 2.3 version when you have HDP 2.6? Could you run these 4 commands on the Ambari server:
# yum erase ambari-server
# yum clean all
# yum repolist
# yum install -y ambari-server
Please revert.
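Before reinstalling, it may also be worth confirming which Ambari repository the node points at; a hedged check (the file name /etc/yum.repos.d/ambari.repo is the usual default but may differ in your setup):
# cat /etc/yum.repos.d/ambari.repo
# yum info ambari-server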
12-12-2017
01:31 PM
@Mudassar Hussain Assuming no Kerberos, but you want your user to access the HDP cluster: usually the local users live on the edge node, and to apply HDFS ACLs the local user should have a home directory in HDFS.
Create a local user on the edge node; here my user toto doesn't belong to any group, for demo purposes.
# useradd toto
Before you can implement HDFS ACLs you MUST add the property below in hdfs-site.xml (or custom hdfs-site) for the NameNode; the default value is false. Then restart all components with stale configs.
dfs.namenode.acls.enabled=true
As the hdfs user, create a directory acldemo in the toto user's home in HDFS:
$ hdfs dfs -mkdir /user/toto/acldemo
As the hdfs user, change the ownership:
$ hdfs dfs -chown toto:hdfs /user/toto/acldemo
I created 3 dummy files and copied them to HDFS:
$ hdfs dfs -put test2.txt test3.json test.txt /user/toto/acldemo
Validate the copy process:
$ hdfs dfs -ls /user/toto/acldemo
-rw-r--r-- 3 hdfs hdfs 0 2017-12-12 13:38 /user/toto/acldemo/test.txt
-rw-r--r-- 3 hdfs hdfs 0 2017-12-12 13:38 /user/toto/acldemo/test2.txt
-rw-r--r-- 3 hdfs hdfs 0 2017-12-12 13:38 /user/toto/acldemo/test3.json
Set ACLs on the directory acldemo for different users, namely toto, hive and kafka (to see all the subcommands, type hdfs dfs and hit ENTER).
User toto gets no permissions:
$ hdfs dfs -setfacl -m user:toto:--- /user/toto/acldemo
User hive gets read, write and execute:
$ hdfs dfs -setfacl -m user:hive:rwx /user/toto/acldemo
User kafka gets read and execute only (no write):
$ hdfs dfs -setfacl -m user:kafka:r-x /user/toto/acldemo
To check the current ACLs:
$ hdfs dfs -getfacl /user/toto/acldemo
# file: /user/toto/acldemo
# owner: toto
# group: hdfs
user::rwx
user:hive:rwx
user:kafka:r-x
user:toto:---
group::r-x
mask::rwx
other::r-x
Now to check whether the permissions work. User kafka can read but NOT copy any files into the directory:
[kafka@host]$ hdfs dfs -put kafak.txt /user/toto/acldemo
put: Permission denied: user=kafka, access=WRITE, inode="/user/toto/acldemo/kafak.txt._COPYING_":toto:hdfs:drwxrwxr-x
[kafka@host ~]$ hdfs dfs -cat /user/toto/acldemo/test.txt
If you can read me then you have the correct permissions
User toto has no permissions at all:
[toto@host]$ hdfs dfs -cat /user/toto/acldemo/test.txt
cat: Permission denied: user=toto, access=EXECUTE, inode="/user/toto/acldemo/test.txt":toto:hdfs:drwxrwxr-x
For user hive the exit code is 0 ("success") because it can read the contents of the test.txt file in HDFS:
[hive@host]$ hdfs dfs -cat /user/toto/acldemo/test.txt
If you can read me then you have the correct permissions
To know whether a directory has ACLs, notice the + sign on the last permission bit:
$ hdfs dfs -ls /user/toto/
Found 1 items
drwxrwxr-x+ - hdfs hdfs 0 2017-12-12 14:15 /user/toto/acldemo
Hope that helps.
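As a side note (not part of the demo above), if you later want to undo the ACLs, a minimal sketch reusing the same demo directory would be:
$ hdfs dfs -setfacl -x user:kafka /user/toto/acldemo
$ hdfs dfs -setfacl -b /user/toto/acldemo
$ hdfs dfs -getfacl /user/toto/acldemo
The -x flag removes the named entry only, while -b strips all extended ACL entries and keeps the base owner/group/other permissions.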
12-08-2017
08:35 AM
@Kumar Tg I have the same setup on a single-node cluster. I think your atlas.graph.index.search.solr.zookeeper-url should be changed to point at infra-solr. I logged in to my ZooKeeper to check the parameters; the vertex_index is accessible under infra-solr.
$ bin/zkCli.sh
.....
[zk: localhost:2181(CONNECTED) 0] ls /
[hive, registry, cluster, controller, brokers, zookeeper, infra-solr, hbase-unsecure, kafka-acl, kafka-acl-changes, admin, isr_change_notification, templeton-hadoop, hiveserver2, controller_epoch, druid, rmstore, hbase-secure, ambari-metrics-cluster, consumers, config]
[zk: localhost:2181(CONNECTED) 2] ls /infra-solr/collections
[vertex_index, edge_index, fulltext_index]
Parameter to change:
atlas.graph.index.search.solr.zookeeper-url = ZOOKEEPER_HOSTNAME:2181/infra-solr
Then try restarting Atlas. Please let me know.
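Once the value is changed, one hedged way to confirm Atlas picked it up is to check the rendered config on the Atlas host; /etc/atlas/conf/atlas-application.properties is the usual HDP location but may differ in your environment:
# grep 'solr.zookeeper-url' /etc/atlas/conf/atlas-application.properties
You should see atlas.graph.index.search.solr.zookeeper-url ending with the /infra-solr chroot.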
12-06-2017
04:46 PM
@Mark Nguyen The Hadoop 3.0.0 GA is expected on 2017-12-23, but more realistically sometime in 2018: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3.0.0+release
12-06-2017
01:22 AM
1 Kudo
@Mark Nguyen Unfortunately, if your cluster is managed by Ambari you can't independently upgrade the components, and it will be a nightmare if you plan to upgrade in the future. HDP stacks are rigorously tested and certified together as a bundle, and Ambari currently only supports upgrading the entire HDP stack together. However, patch upgrades are planned for 3.0, which would allow upgrading a single component of the stack. Hope that answers your question.
12-06-2017
01:14 AM
@sbx_hadoop A lab or dev environment should NEVER be in archive log mode !!! What version is your Oracle database? Maybe some old hacks could still work. Do you have command-line access to the Oracle server? If so, can you locate the initxxx.ora or spfile (usually in $ORACLE_HOME/dbs) and check your DB name to ensure you log on to the correct database? Linux boxes usually host many databases.
Check whether you can switch to the user running the database process. This should give you the Oracle user:
$ ps -ef | grep oracle or ps -ef | grep pmon
Run the below to set the correct variables from the values you get above:
export ORACLE_HOME=/path/to/oracle_home
export ORACLE_SID=xxxx
Then, as root, switch to this user so you have all the Oracle variables in your path. You MUST log on as sysdba; that's the only internal user allowed to access the database in such situations. E.g. if the owner/user is oradba:
# su - oradba
Now you can invoke sqlplus:
$ sqlplus /nolog
SQL>conn / as sysdba
SQL> archive log list;
Check the archive destination and delete all the logs, then:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;
SQL> archive log list;
You should see "Automatic archival Disabled". Now you can proceed with restarting your Ambari server.
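For the step of deleting the archived logs, rather than removing files by hand from the archive destination, a hedged sketch is to let RMAN do the cleanup (assuming RMAN is available and you are still logged on as the Oracle OS user):
$ rman target /
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all;
RMAN> exit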
12-05-2017
11:03 AM
1 Kudo
@Sedat Kestepe Can you delete the entry in ZooKeeper and restart?
# locate zkCli.sh
/usr/hdp/2.x.x.x/zookeeper/bin/zkCli.sh
# /usr/hdp/2.x.x.x/zookeeper/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 8] ls /hadoop-ha/
You should see something like:
[zk: localhost:2181(CONNECTED) 8] ls /hadoop-ha/xxxxx
Delete the HDFS HA config entry:
[zk: localhost:2181(CONNECTED) 1] rmr /hadoop-ha
Validate that there is no hadoop-ha entry:
[zk: localhost:2181(CONNECTED) 2] ls /
Then restart all components of the HDFS service. This will create a new ZNode with the correct lock (of the failover controller). Please let me know if that helped.
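If the znode does not come back cleanly after the restart, a hedged alternative using standard HDFS tooling is to reinitialize the HA state in ZooKeeper from one NameNode host (typically with the ZKFC daemons stopped):
# su - hdfs
$ hdfs zkfc -formatZK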
12-05-2017
12:47 AM
@Michael Bronson Can you instead do:
# grep namenodes /etc/hadoop/conf/hdfs-site.xml
Then get the values of the parameter dfs.ha.namenodes.xxxx. Please let me know.
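A hedged alternative that avoids reading the XML directly is the hdfs getconf tool, which resolves the same values from the client configuration (xxxx below stands for your nameservice id, as in the grep above):
$ hdfs getconf -namenodes
$ hdfs getconf -confKey dfs.nameservices
$ hdfs getconf -confKey dfs.ha.namenodes.xxxx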