Member since 04-08-2016
48 Posts
4 Kudos Received
1 Solution

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 6615 | 04-15-2016 11:18 PM |
01-22-2019
10:01 AM
Hi, in order to check that there is adequate space in a directory during installation or upgrade procedures (for example, while doing an HDP upgrade you should verify that adequate space is available on /usr/hdp for the target HDP version), use the following format:

df -h <Path_of_interest>

Example:

[alex@machine1]# df -h /usr/hdp/
Filesystem               Size  Used  Avail Use% Mounted on
/dev/mapper/system-root  528G   22G  506G   5% /
[alex@machine1]#

You can see all the parameters: size of the disk, used space, available space and percentage of usage.
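If you want to automate that check before starting an upgrade, below is a minimal shell sketch; the /usr/hdp path and the 2.5 GB threshold are only illustrative assumptions, so substitute the space requirement documented for your target HDP version.

TARGET_DIR="/usr/hdp"            # illustrative path, adjust as needed
REQUIRED_KB=$((2500 * 1024))     # illustrative 2.5 GB threshold, in kilobytes

# df -Pk prints POSIX-formatted output in 1K blocks; field 4 of the data row is "Available"
AVAIL_KB=$(df -Pk "$TARGET_DIR" | awk 'NR==2 {print $4}')

if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
  echo "Not enough space on $TARGET_DIR: ${AVAIL_KB} KB available, ${REQUIRED_KB} KB required" >&2
  exit 1
fi
echo "OK: ${AVAIL_KB} KB available on $TARGET_DIR"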
05-01-2016
12:32 AM
Thanks again. It worked. I have performed the changes and also fixed a permission issue around the /storm/ folder on HDFS. Things seem to be working very well now and I don't see any errors on the Storm UI. However, when I go to the Hive view and try a simple

SELECT * FROM tweet_counts LIMIT 10;

I get the exceptions below. Have you ever run into this? I have been investigating, but I am not sure where this is coming from... P.S. ambari is the name of the database where I created the tweet_counts table on my Hive instance.

{"trace":"org.apache.ambari.view.hive.client.HiveErrorStatusException: H170 Unable to fetch results. java.io.IOException: java.io.FileNotFoundException: Path is not a file: /apps/hive/warehouse/ambari.db/tweet_counts\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:75)\n\tat org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1828)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:652)\n\tat .......
04-22-2016
07:46 AM
It's a workaround, but in general I would avoid reserved words like 'user' in a schema, because otherwise you always have to set that property. Rename your data to 'usr'. If this solves your question, please accept the best answer to close this thread.
04-22-2016
02:10 AM
FROM salary employees
The syntax above aliases the salary table as employees. I guess your query needs to join to the employees table with a LEFT or INNER JOIN? Also, you probably do not want to overwrite the table you are selecting from.

INSERT OVERWRITE TABLE employees
SELECT employees.<all columns but salary_date>, salary.salary_date
FROM salary INNER JOIN employees ON salary.employee_number = employees.employee_number;
04-18-2016
04:59 AM
Yes, you have a permission problem on your input file:

Permission denied: user=admin, access=WRITE, inode="/employees/part-m-00000":hdfs:hdfs:drwxr-xr-x

As an immediate remedy you can change permissions, for example:

su - hdfs -c "hdfs dfs -chmod -R +w /employees/"

Long term, it's best to run all your commands using an end-user account (not hdfs, root, admin, etc.). In Sandbox 2.4 you can use a user called "maria_dev". So when you run Sqoop, do "su - maria_dev" first and then run your commands, and when you use Ambari views, log into Ambari as maria_dev as well. This way you can avoid permission issues.

Edit: Before doing "su - maria_dev", create the user "maria_dev" on the local OS by running this as root: "useradd maria_dev". This is a one-time prep operation.
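Putting it together, here is a minimal sketch of the suggested sequence; it assumes the Sandbox 2.4 defaults described above, and the /employees path is taken from your error message.

# One-time prep, run as root: create the local OS user
useradd maria_dev

# Immediate remedy: open up write permissions on the input directory
su - hdfs -c "hdfs dfs -chmod -R +w /employees/"

# Long term: switch to the end-user account before running Sqoop or other jobs
su - maria_dev
# ... run your sqoop commands here as maria_dev ...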
12-19-2017
12:28 PM
I think "jdbc:mysql//master.centos:3306/employees" is not a valid JDBC URL for MySQL. Try adding a ':' character after "jdbc:mysql", i.e. use "jdbc:mysql://master.centos:3306/employees".
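For reference, here is a sketch of how the corrected URL might look in a Sqoop import; the table name, credentials and target directory below are placeholders, not values from your setup.

# Placeholders only: substitute your own table, credentials and target directory
sqoop import \
  --connect "jdbc:mysql://master.centos:3306/employees" \
  --username <mysql_user> \
  --password <mysql_password> \
  --table <table_name> \
  --target-dir /user/<your_user>/<table_name>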
05-11-2017
03:49 PM
Thanks, I had the same issue after the HDP 2.6 upgrade. The install silently changed the settings.

1. Connect to Ambari.
2. HDFS service > Advanced config > Custom core-site, and change this:

hadoop.proxyuser.hive.groups = *
hadoop.proxyuser.hive.hosts = *
hadoop.proxyuser.hcat.groups = *
hadoop.proxyuser.hcat.hosts = *

This solved my issue as well.
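If you prefer to script the change instead of clicking through the UI, Ambari ships a configs.sh helper under /var/lib/ambari-server/resources/scripts. The sketch below assumes admin/admin credentials, Ambari on localhost and a cluster named MyCluster (all placeholders); the exact arguments can vary by Ambari version, so check the script's usage output first.

# Placeholders: credentials, Ambari host and cluster name must match your environment
cd /var/lib/ambari-server/resources/scripts
./configs.sh -u admin -p admin set localhost MyCluster core-site "hadoop.proxyuser.hive.hosts" "*"
./configs.sh -u admin -p admin set localhost MyCluster core-site "hadoop.proxyuser.hive.groups" "*"
./configs.sh -u admin -p admin set localhost MyCluster core-site "hadoop.proxyuser.hcat.hosts" "*"
./configs.sh -u admin -p admin set localhost MyCluster core-site "hadoop.proxyuser.hcat.groups" "*"
# Restart HDFS (and any other affected services) afterwards so the change takes effect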