Member since: 01-18-2016
Posts: 163
Kudos Received: 32
Solutions: 19
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1381 | 04-06-2018 09:24 PM |
| | 1406 | 05-02-2017 10:43 PM |
| | 3847 | 01-24-2017 08:21 PM |
| | 23577 | 12-05-2016 10:35 PM |
| | 6459 | 11-30-2016 10:33 PM |
11-22-2016
01:49 AM
@Venkat Rangan - I think I found the documentation you need to become the admin user: because the default "cloudbreak" user doesn't have certain permissions (for example, it has no write access to HDFS), you must use the "admin" user to perform certain actions. To switch from the default "cloudbreak" user to the "admin" user, run `sudo su - admin`. (http://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.8.0/bk_hdcloud-aws/content/using/index.html)
11-22-2016
01:45 AM
@Venkat Rangan I'm sorry, but at the moment I don't know how to execute commands as hdfs in Cloudbreak. However, the cloudbreak user may be able to sudo: `sudo -u hdfs <COMMAND>`. If you are not familiar with sudo, the command would look like this: `sudo -u hdfs hdfs dfs -mkdir /user/<username>`. Don't let the repeated "hdfs hdfs" confuse you: the first one is the username and the second one is the command. Give that a shot and let me know.
11-21-2016
09:10 PM
From the command line as the hdfs user:

```
# Create the directory
$ hdfs dfs -mkdir /user/<username>

# Set ownership and permissions
$ hdfs dfs -chown <username> /user/<username>
$ hdfs dfs -chmod 700 /user/<username>

# Optionally set a space quota
$ hdfs dfsadmin -setSpaceQuota <bytes_allocated> /user/<username>
```

Here `<bytes_allocated>` is the number of bytes allowed for this directory, counting replication. This allocates space for the directory, not per username, so if the user creates files in other HDFS directories, this quota does not control that.
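Because the space quota counts every replica, the value you pass to -setSpaceQuota should be the logical data size multiplied by the replication factor. A quick sketch of the arithmetic (assuming the default replication factor of 3, which may differ on your cluster):

```shell
# A user allowed 10 GB of logical data under 3x replication
# needs a quota of 10 * 3 * 1024^3 raw bytes.
LOGICAL_GB=10
REPLICATION=3
BYTES_ALLOCATED=$(( LOGICAL_GB * REPLICATION * 1024 * 1024 * 1024 ))
echo "$BYTES_ALLOCATED"   # 32212254720
```

You would then pass that number as `<bytes_allocated>` in the command above.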
11-17-2016
03:09 PM
Awesome. If it continues to happen you'll need to figure out why you're getting OOM errors (assuming that's what was happening); intermittent exceptions are often a symptom of them. Solr loves memory, but many factors can contribute to running out of it. Sometimes giving the JVM more memory is the solution, but not always. Good luck.
11-16-2016
10:48 PM
@Wing Lo - The solution depends on what is actually wrong. It may be that the node is just out of memory; if so, a restart may resolve the issue (though it could occur again). If the index data is actually corrupt, you can take the bad node offline and the errors will stop, but you will still need to fix or replace the bad data. I have not used this technique myself, but you might look at https://support.lucidworks.com/hc/en-us/articles/202091128-How-to-deal-with-Index-Corruption
11-16-2016
10:28 PM
@Daniel Scheiner To add to that point: I believe that in the future HDP and HDF will be able to share the same Ambari host, but as Constantin said, it is not possible currently. This blog post says, "Currently, nodes can not be shared between HDP and HDF. Completely separate clusters (each with its own Ambari and Ranger) are required at this point."
11-16-2016
02:12 PM
@Prem Kripalani I'm glad you got it worked out. Sorry to hear that it was such a pain.
11-16-2016
03:25 AM
Awesome. Glad you got it!
11-16-2016
03:23 AM
@Prem Kripalani That is weird. I'm looking at the code for NiFi 1.0.0 and line 82 doesn't have JAVAHOME anywhere near it; in fact, I don't see JAVAHOME anywhere. That variable typically has an underscore, as in JAVA_HOME. The error message "syntax error near unexpected token `newline'" is associated with an invalid redirect ">" followed by a newline, which can also happen when an empty variable comes after the ">". Can you find the file nifi.sh and look around line 82 to see whether there is anything about JAVAHOME, or a redirect with no filename after it? In any case, it may be easier to reinstall NiFi than to track it down.
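For what it's worth, you can reproduce that exact bash error with a redirect that has nothing after it, which is what an empty variable after ">" would produce at runtime (a minimal sketch, not taken from nifi.sh):

```shell
# A ">" followed immediately by a newline (for example, expanding
# an unset variable as the redirect target) triggers:
#   syntax error near unexpected token `newline'
bash -c 'echo hello >' 2>&1 || true
```

If you see that message from nifi.sh, look for a line where the word after ">" is a variable that might be empty.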