Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 483 | 06-04-2025 11:36 PM |
| | 1013 | 03-23-2025 05:23 AM |
| | 536 | 03-17-2025 10:18 AM |
| | 2007 | 03-05-2025 01:34 PM |
| | 1257 | 03-03-2025 01:09 PM |
10-31-2016
05:26 PM
@Gary Cameron What's the output of the command below?
# hostname -f
From the MySQL prompt:
mysql> show grants;
Then, connected as root, run these grant statements:
mysql> use rangerdb;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'rangerdb'@'%';
mysql> FLUSH PRIVILEGES;
mysql> quit;
Then retry; it looks like a privilege issue.
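If the grants look right but Ranger setup still fails, a quick end-to-end check from the Ranger Admin host usually narrows it down; a minimal sketch (db.example.com is a placeholder for your database host):
# hostname -f
# mysql -u rangerdb -p -h db.example.com rangerdb -e "SELECT 1;"
# mysql -u rangerdb -p -h db.example.com -e "SHOW GRANTS FOR CURRENT_USER();"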
10-27-2016
07:23 PM
@Roger Young That now looks a bit tricky with Raspberry Pis in the picture.
1. Your Sandbox should be configured to access the public repos, unless you have already downloaded Ambari 2.x, HDP 2.x, HDP-UTILS 1.x, etc.
2. Run the same OS version on the Raspberry Pis as on the Sandbox.
3. Do the basic preparatory configuration for an HDP installation (see the sketch after this list).
4. Most important is the network setup between the participating nodes; otherwise you won't succeed in your installation.
This would be the first time someone has set up HDP on a Raspberry Pi. Reference
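A minimal sketch of step 3 on each node, assuming a RHEL/CentOS-style image like the Sandbox (the IP and hostname are placeholders; adjust the service commands for Raspbian where needed):
# echo "192.168.1.10  node1.example.com  node1" >> /etc/hosts
# hostname -f
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# systemctl enable ntpd && systemctl start ntpd
# ssh-keygen -t rsa
# ssh-copy-id root@node1.example.com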
10-27-2016
06:39 PM
@Roger Young The tutorial will definitely work, but you first need to successfully install your standalone node or HDF 2.0 cluster; you don't need to add nodes to your cluster if it is a single node. Question: are you deploying a single node or a cluster? Remember that the HDF 2.0 installation follows the same preparatory steps as HDP 2.x up to the Ambari database install, and thereafter you run the management pack process, which ensures that when you start Ambari you ONLY have the HDF 2.0 repository available!
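For reference, the management pack step itself is a single Ambari CLI call; a minimal sketch (the tarball location is a placeholder, use the HDF mpack URL from the release notes for your exact version):
# ambari-server stop
# ambari-server install-mpack --mpack=<hdf-ambari-mpack tarball URL or local path> --verbose
# ambari-server start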
10-27-2016
05:50 PM
1 Kudo
@Roger Young HDF 2.0 cannot be installed on an existing HDP cluster! It is not supported on an Ambari instance that already manages a deployed HDP cluster. HDF 2.0 has its own Ambari and can be used to create an HDF cluster. See the link below.
10-26-2016
07:49 PM
@ANSARI FAHEEM AHMED
# su - hdfs
$ hdfs dfsadmin -report -dead
10-26-2016
07:34 PM
@ANSARI FAHEEM AHMED I think the commands below should sort you out if you are not in a kerberized environment:
# su - hdfs
$ hdfs dfsadmin -report -dead
$ hdfs dfsadmin -report -live
The dead and live DataNodes will be listed.
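On a kerberized cluster the same report works once you hold a valid ticket for the hdfs principal; a minimal sketch, assuming the default HDP keytab path and EXAMPLE.COM as a placeholder realm:
$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-<cluster_name>@EXAMPLE.COM
$ klist
$ hdfs dfsadmin -report -dead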
10-25-2016
07:50 PM
@Sami Ahmad
Look at my configuration below; I did exactly what you wanted to do and it works. Just copy it and substitute the values to correspond with your environment. This is how you launch it (note that the -n argument must match the agent name used in the configuration):
/usr/bin/flume-ng agent -c /etc/flume-ng/conf -f /etc/flume-ng/conf/flume.conf -n TwitterAgent
#######################################################
# This is a test configuration created on 31/07/2016
# by Geoffrey Shelton Okot
#######################################################
# Twitter Agent
########################################################
# Twitter agent for collecting Twitter data to HDFS.
########################################################
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS
########################################################
# Describing and configuring the sources
########################################################
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = xxxxxxxx
TwitterAgent.sources.Twitter.consumerSecret = xxxxxxxx
TwitterAgent.sources.Twitter.accessToken = xxxxxxxx
TwitterAgent.sources.Twitter.accessTokenSecret = xxxxxxxx
TwitterAgent.sources.Twitter.keywords = hadoop,Data Scientist,BigData,Trump,computing,flume,Nifi
#######################################################
# Twitter configuring HDFS sink
########################################################
TwitterAgent.sinks.HDFS.hdfs.useLocalTimeStamp = true
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://namenode.com:8020/user/flume
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000
#######################################################
# Twitter Channel
########################################################
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 20000
#TwitterAgent.channels.MemChannel.DataDirs =
TwitterAgent.channels.MemChannel.transactionCapacity = 1000
#######################################################
# Binding the Source and the Sink to the Channel
########################################################
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.HDFS.channel = MemChannel
########################################################
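Once the agent is up, a quick way to confirm events are landing is to list the sink path from the config above (FlumeData is the HDFS sink's default file prefix):
# su - hdfs
$ hdfs dfs -ls /user/flume
$ hdfs dfs -cat /user/flume/FlumeData.* | head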
10-25-2016
05:08 PM
@Gary Cameron Yes, that was an error, but I am happy all is okay for you now.
10-24-2016
07:42 PM
2 Kudos
mysql -u root -p
CREATE USER '<HIVEUSER>' IDENTIFIED BY '<HIVEPASSWORD>';
FLUSH PRIVILEGES;
mysql -u hive -p
create database hive;
FLUSH PRIVILEGES;
# mysql -u ambari -p
mysql> CREATE DATABASE <ambaridb>;
mysql> USE <ambaridb>;
mysql> SOURCE /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;
mysql> quit;
# yum install mysql-connector-java
# chmod 644 /usr/share/java/mysql-connector-java.jar
# ambari-server setup
Checking JDK...
Enter advanced database configuration [y/n] (n)? y
Configuring database...
Choose option 3 (MySQL)
.....
....
...ambari-admin-2.1.0.1470.jar ...
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
Now continue with the Hive setup; all should run successfully.
mysql -u root -p
CREATE USER '<HIVEUSER>' IDENTIFIED BY '<HIVEPASSWORD>';
FLUSH PRIVILEGES;
mysql -u hive -p
create database hive;
FLUSH PRIVILEGES;
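One thing the snippet above leaves out is granting the new user rights on the hive database; without that the Hive Metastore cannot create its schema. A minimal sketch, run as root (the '%' host entry is an assumption; tighten it for your environment):
mysql -u root -p
mysql> GRANT ALL PRIVILEGES ON hive.* TO '<HIVEUSER>'@'%';
mysql> FLUSH PRIVILEGES;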
10-18-2016
05:08 PM
1 Kudo
@suresh krish When your Hadoop cluster is accessed by thousands of users, it is best to use SSO, hence AD/LDAP, for easy management of user credentials and corporate security settings.
When you log on to a node in a non-kerberized Hadoop cluster, you basically get access to all the resources: say you logged on as TOM; even if someone had stolen your credentials, the cluster will believe you are indeed TOM, and so will YARN and the other components, which in modern IT infrastructure is very dangerous with all the hacking, DoS attacks, etc.
In a kerberized environment Hadoop won't simply believe you are TOM. It will ask you for a ticket, the analogy being a passport at an airport, and, just as immigration checks that a passport is not forged, it will check your ticket against its database (the KDC) to ascertain it was not stolen. ONLY after validating that you really are TOM will it allow you to run queries or jobs on that cluster. That's quite reassuring, isn't it?
For documentation, there should be some in this forum; if not, I will need to mask some data before I can provide you my production integration documentation. Happy Hadooping
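To make the ticket analogy concrete, a typical user session on a kerberized cluster looks roughly like this (tom and EXAMPLE.COM are placeholder principal and realm names):
$ hdfs dfs -ls /user/tom      (fails while no valid ticket is cached)
$ kinit tom@EXAMPLE.COM
$ klist
$ hdfs dfs -ls /user/tom      (succeeds now that a ticket is cached)
$ kdestroy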