Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 923 | 06-04-2025 11:36 PM |
|  | 1525 | 03-23-2025 05:23 AM |
|  | 756 | 03-17-2025 10:18 AM |
|  | 2703 | 03-05-2025 01:34 PM |
|  | 1801 | 03-03-2025 01:09 PM |
08-26-2018
07:34 PM
@Indrek Mäestu Can you do the following:

```
# yum clean all
# yum repolist
```

Then:

```
# yum install -y slider_3_0_1_1_5
```

And update this thread.
08-24-2018
12:01 PM
@M Ax It seems to me you are connected to the default Hive database and trying to create a view which already exists in that database. Can you change the SQL to:

```
create view if not exists test20 as select 1;
```

Or just run the below to validate, creating a test database first:

```
create database max;
use max;
create view if not exists test20 as select 1;
```

If the above runs fine, it will confirm your initial error was due to the existing object (view test) in the default database. HTH
08-24-2018
09:03 AM
@Mathi Murugan I see that currently ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role; does not give the desired fine-grained option for only a specific table in a schema/database. So I think the best option would be to use Ranger and grant a SELECT privilege on the particular database or underlying table to the new user, who can then issue a Create Table As Select (CTAS); the resulting table will automatically be owned by the issuer of the CTAS. HTH
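As a rough sketch of that flow, assuming a Ranger policy already grants the new user SELECT on the source table (the host, user, database, and table names below are all placeholders, not anything from the original thread):

```
# Hypothetical sketch: run as the new user, who holds a Ranger SELECT
# policy on sales_db.orders. The CTAS creates a copy of the table that
# is owned by the issuing user; every name here is a placeholder.
beeline -u "jdbc:hive2://hs2-host:10000/default" -n analyst -e \
  "CREATE TABLE sales_db.orders_analyst STORED AS ORC AS SELECT * FROM sales_db.orders;"
```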
08-18-2018
07:02 PM
@Pankaj Singh A data lake is simply a storage repository that holds a vast amount of raw data in its native format until it is needed. While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data. Enterprise data lakes run into petabytes (PB) or exabytes (EB). Just think of a data lake as a massive storage area where your ingestion jobs and data pipelines land all the data. You can create directory-like structures such as the below:

```
/landing_Zone/Raw_data/refined
/landing_Zone/Raw_data/Trusted
/landing_Zone/Raw_data/sandbox
```

Typically you apply Ranger policies to manage access, data encryption, etc. There are also tools like Alation, to mention but a few, for managing the catalog by data stewards. The data lake can also be used to feed upstream systems such as real-time monitoring, or long-term storage like HDFS or Hive for analytics. HTH
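As a minimal sketch of laying out those zones on HDFS (the 'datalake' group name and the permissions below are assumptions, not part of the original layout):

```
# Minimal sketch: create the zone directories as the hdfs superuser.
sudo -u hdfs hdfs dfs -mkdir -p /landing_Zone/Raw_data/refined \
                                /landing_Zone/Raw_data/Trusted \
                                /landing_Zone/Raw_data/sandbox
# Restrict the tree to an assumed 'datalake' group; finer-grained
# access control is then layered on top with Ranger policies.
sudo -u hdfs hdfs dfs -chown -R hdfs:datalake /landing_Zone
sudo -u hdfs hdfs dfs -chmod -R 770 /landing_Zone
```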
08-16-2018
04:40 PM
@Nanda Kumar Personally, I have not come across an installation with 2 versions on the same host. Yes, files will be overwritten by the newer version. I advise you to run the different versions in Oracle VM; that gives you physical separation that is economical and yet multi-tenant. HTH
08-16-2018
11:02 AM
@Sudharsan Ganeshkumar You are not seeing anything because you are running the command as the root user! You will have to switch to the hive user and use hive or beeline:

```
# su - hive
$ hive
```

Then at the prompt run the create statement:

```
hive> CREATE TABLE IF NOT EXISTS emp (
        eid int,
        name String,
        salary String,
        destination String)
      COMMENT 'Employee details'
      ROW FORMAT DELIMITED
      FIELDS TERMINATED BY '\t'
      LINES TERMINATED BY '\n'
      STORED AS TEXTFILE;
```

And then verify the table exists:

```
hive> SHOW TABLES LIKE 'emp';
```

HTH
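If you prefer beeline over the hive CLI, the equivalent connection would look something like the below (the HiveServer2 hostname and port are placeholders for your environment):

```
# Sketch: connect via beeline instead of the hive shell; the same
# CREATE TABLE statement can then be run at the beeline prompt.
$ beeline -u "jdbc:hive2://hs2-host:10000/default" -n hive
```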
08-15-2018
11:02 PM
@Kumar Veerappan "Can't get Kerberos realm (state=08S01,code=0)" is the expected error stack, because your Mac OS doesn't know of the REALM. You will need to copy the file /etc/krb5.conf from your cluster; this file contains the connection information for your REALM. Please have a look at this Mac OS link, it should be of help; unfortunately, I am on Windows. HTH
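A minimal sketch of that copy step from the Mac side, assuming you have SSH access to a cluster node (the hostname, principal, and realm below are placeholders):

```
# Hypothetical sketch, run on the Mac: pull krb5.conf from a cluster
# node and place it where Kerberos looks for it by default.
scp root@cluster-node.example.com:/etc/krb5.conf /tmp/krb5.conf
sudo cp /tmp/krb5.conf /etc/krb5.conf
# Confirm a ticket can now be obtained for your principal.
kinit your_principal@YOUR.REALM
klist
```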
08-13-2018
12:17 PM
@rinu shrivastav If you want a fixed number of reducers at runtime, you can set it when submitting the MapReduce job on the command line: passing "-D mapred.reduce.tasks" with the desired number will spawn that many reducers at runtime. The number of mappers for a MapReduce job is driven by the number of input splits, and input splits depend on the block size. For example, if we have 500 MB of data and 128 MB is the HDFS block size, the data falls into 4 blocks, so the job gets approximately 4 mappers. When you are running a Hadoop job on the CLI you can use the -D switch to override the defaults for mappers and reducers, e.g. (5 mappers, 2 reducers):

```
-D mapred.map.tasks=5 -D mapred.reduce.tasks=2
```

Example (assuming the jar's driver parses generic options via ToolRunner):

```
bin/hadoop jar yourapp.jar -D mapred.map.tasks=5 -D mapred.reduce.tasks=2
```

HTH
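As a concrete, hedged illustration using the examples jar that ships with HDP (the jar path and the input/output directories are assumptions for your environment):

```
# Sketch: run the stock wordcount job with 2 reducers; the jar path
# is typical for HDP installs but may differ on your cluster, and the
# input/output paths are placeholders.
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
  wordcount -D mapreduce.job.reduces=2 /user/rinu/input /user/rinu/output
```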
08-01-2018
11:32 PM
@Harry Li Can you run the below as root against your MySQL database, assuming again that the user/password and db are hive:

```
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;
quit;
```

The retry I see now is a permission issue. HTH
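To verify the grant took effect, you could then test the login from the shell (credentials follow the hive/hive assumption above):

```
# Sketch: confirm the hive user can authenticate and see its schema;
# the user/password 'hive'/'hive' follow the assumption above.
mysql -u hive -phive -h localhost -e "SHOW DATABASES;"
```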
08-01-2018
10:53 PM
@Harry Li Can you share the output of the below (please adopt where necessary the proper user root, or use sudo):

```
# hostname -f
```

Set the hostname:

```
sudo hostnamectl set-hostname your-new-name
```

Now you will need to edit 2 files, /etc/hostname and /etc/hosts, and replace the hostname with your earlier choice above:

```
sudo -H gedit /etc/hostname
sudo -H gedit /etc/hosts
```

Without restarting your machine, just run the command below to restart the hostname service and apply the changes:

```
sudo systemctl restart systemd-logind.service
```

When editing /etc/hosts, don't tamper with the first line; it is always going to be for localhost with the loopback IP address. The second line is where you change the hostname:

```
127.0.0.1 localhost
127.0.0.1 your_new_hostname
```

If you want to reference the hostname with the server's public IP address and not the loopback, you can add a third line with the server public IP and hostname:

```
127.0.0.1 localhost
127.0.0.1 your_new_hostname
10.56.100.30 your_new_hostname # server IP and hostname
```

That should resolve your problem. Please revert.
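To confirm the change took effect without a reboot, a quick check would be:

```
# Both should now report the new hostname set above.
hostnamectl status
hostname -f
```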