Member since: 01-19-2017
Posts: 3662
Kudos Received: 625
Solutions: 367

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 90 | 03-03-2025 01:09 PM
 | 54 | 03-02-2025 07:19 AM
 | 493 | 12-22-2024 07:33 AM
 | 307 | 12-18-2024 12:21 PM
 | 332 | 12-18-2024 08:50 AM
02-03-2016
07:48 PM
1 Kudo
@rbalam I have attached a document on how to install your Ambari server successfully; you will not find a better document. Please enjoy and revert: set-up-mysql-for-ambari.pdf. I forgot to include the syntax for connecting remotely to the MySQL server, but it should be fine. The Ambari Server host needs the mysql-connector-java driver installed to be able to communicate with the database, so you will need to install it on both the local and the remote server.
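For reference, a minimal sketch of the connector install and the remote-access grant, assuming MySQL; the 'ambari'/'bigdata' credentials are the illustrative defaults from the HDP docs, so adjust names and passwords to your environment:

```
# On the Ambari Server host: install the JDBC driver and point Ambari at it.
yum install -y mysql-connector-java
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# On the MySQL server: create the Ambari user and allow remote logins.
# ('ambari'/'bigdata' are illustrative; pick your own credentials.)
mysql -u root -p <<'SQL'
CREATE USER 'ambari'@'%' IDENTIFIED BY 'bigdata';
GRANT ALL PRIVILEGES ON ambari.* TO 'ambari'@'%';
FLUSH PRIVILEGES;
SQL
```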
02-03-2016
10:49 AM
@Rainer Geissendoerfer To fix this, SSH into your HDP VM, edit /etc/hadoop/conf/core-site.xml, and add "localhost" to the following property. Save, then restart the relevant services or simply reboot your HDP VM:

<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>sandbox.hortonworks.com,127.0.0.1,localhost</value>
</property>

(core-site.xml attached)
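As a quick sanity check after the restart (assuming the Hadoop client tools are on the PATH), you can confirm the value was picked up:

```
hdfs getconf -confKey hadoop.proxyuser.hive.hosts
```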
02-03-2016
07:43 AM
3 Kudos
A single-server production setup is not recommended at all; the minimum configuration for a production environment is 3 or 5 nodes (odd numbers) because of the default replication factor of 3. Dev and test clusters don't need to be as big as prod. A DR cluster is required too, and it can double as a reporting cluster. Please check the minimum requirements for a production setup.
02-03-2016
07:28 AM
1 Kudo
@Majid Ali Syed Amjad Ali Sayed A 2-node cluster shouldn't pose a lot of problems, and yes, the agent and the Ambari server can run on the same host. Make sure you complete all the steps below successfully as the root user (a condensed sketch of steps 2-5 follows this post):

1. Validate that your network setup is correct (IP, hostname, FQDN, etc.).
2. Run ssh-keygen, copy the generated id_rsa key to all the hosts, and configure passwordless SSH between the 2 servers for the root user.
3. Disable the firewall/iptables on all the cluster hosts.
4. Disable SELinux.
5. Disable Transparent Huge Pages (THP).
6. Configure the database for Ambari. Assuming you are using MySQL, don't forget to yum install mysql-connector-java, set the appropriate permissions (644), and pre-load the Ambari database schema into your MySQL database.
7. yum install ambari-server
8. ambari-server setup (the output should end with "Ambari Server 'setup' completed successfully.")
9. Start the server: ambari-server start

I recently did a 4-node cluster install without tweaking the proxy in ambari-env.sh. Could you paste the screenshot and logs here?
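Here is that condensed, illustrative sketch of steps 2-5 (run as root on RHEL/CentOS; the hostnames are placeholders):

```
# Step 2: passwordless SSH for root to every host (including this one).
ssh-keygen -t rsa                      # accept the defaults
ssh-copy-id root@node1.example.com
ssh-copy-id root@node2.example.com

# Step 3: disable the firewall.
systemctl disable --now firewalld      # on older releases: service iptables stop; chkconfig iptables off

# Step 4: disable SELinux now and across reboots.
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Step 5: disable Transparent Huge Pages for the current boot.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```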
02-02-2016
09:38 PM
@Vinayak Agrawal Are you running the Sqoop job as the yarn user? If not, the user running the Sqoop job needs the appropriate permissions (e.g. 777) on /tmp/hive/yarn/_tez_session_dir/xxxxx
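One blunt way to open that up, assuming the directory lives on HDFS (tighten the mode again once things work):

```
hdfs dfs -chmod -R 777 /tmp/hive/yarn
```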
02-02-2016
04:21 PM
You need to grant bigotes an ADMIN role, but try the method below first. Here are the points:

1. You should change all settings with Ambari.
2. Don't change the hive.metastore.uris setting.
3. Manually change the hive.security.authorization.manager property in hiveserver2-site.xml:

<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.security.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory</value>
</property>
<property>
  <name>hive.security.authenticator.manager</name>
  <value>org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>''</value>
</property>
<property>
  <name>hive.conf.restricted.list</name>
  <value>hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager</value>
</property>

4. Copy hiveserver2-site.xml to /etc/hive/conf.server/
5. Restart HiveServer2.
6. Use only Beeline for SQL permissions.
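Once that is in place, a minimal sketch of the ADMIN grant from Beeline; the user name comes from the post above, while the JDBC URL and connecting user are placeholders:

```
beeline -u "jdbc:hive2://localhost:10000" -n hive -e "
SET ROLE ADMIN;
GRANT ROLE admin TO USER bigotes;
"
```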
02-02-2016
04:03 PM
What's the output of these 2 statements? SHOW CURRENT ROLES; SHOW ROLES;
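For example, from Beeline (the JDBC URL and user are placeholders):

```
beeline -u "jdbc:hive2://localhost:10000" -n hive -e "SHOW CURRENT ROLES; SHOW ROLES;"
```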
02-02-2016
03:27 PM
Open-source Sqoop or Syncsort.
02-02-2016
03:05 PM
1 Kudo
Try increasing the number of mappers by appending --m 5 at the end instead of --m 1.
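For illustration, a Sqoop import with 5 parallel mappers; the connection details, table, and split column are placeholders, not from the original question:

```
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username sqoop_user -P \
  --table orders \
  --split-by order_id \
  -m 5
```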
02-02-2016
02:54 PM
The Oozie coordinator supports a very flexible data dependency–based triggering framework. It is important to note that the concept of data availability–based scheduling is a little more involved than time-based triggering.
Use an Oozie bundle, which is a collection of Oozie coordinator applications with a directive on when to kick off those coordinators. Bundles can be started, stopped, suspended, and managed as a single entity instead of managing each individual coordinator they're composed of. This is a very useful level of abstraction in many large enterprises. These data pipelines can get rather large and complicated, and the ability to manage them as a single entity instead of meddling with the individual parts brings a lot of operational benefits.
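To make the idea concrete, a minimal, illustrative bundle definition and the CLI calls that manage the whole pipeline as one entity; all names, paths, hosts, and the job id are placeholders:

```
# Write a bundle that groups two coordinator applications.
cat > bundle.xml <<'XML'
<bundle-app name="daily-pipeline" xmlns="uri:oozie:bundle:0.2">
  <coordinator name="ingest-coord">
    <app-path>hdfs://namenode/apps/ingest/coordinator.xml</app-path>
  </coordinator>
  <coordinator name="report-coord">
    <app-path>hdfs://namenode/apps/report/coordinator.xml</app-path>
  </coordinator>
</bundle-app>
XML

# Start the whole pipeline, then suspend or resume it with a single command:
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run
oozie job -oozie http://oozie-host:11000/oozie -suspend <bundle-job-id>
```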