Member since: 09-29-2015
Posts: 286
Kudos Received: 601
Solutions: 60
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 11478 | 03-21-2017 07:34 PM
 | 2893 | 11-16-2016 04:18 AM
 | 1619 | 10-18-2016 03:57 PM
 | 4276 | 09-12-2016 03:36 PM
 | 6239 | 08-25-2016 09:01 PM
01-07-2016
05:00 PM
1 Kudo
Is this a Kerberos-enabled cluster?
01-06-2016
12:45 AM
@Ryan Tomczik Can you add the output of EXPLAIN at the extended level, i.e. set explain_level=extended;
explain SELECT * FROM latestposition WHERE regionid='1d6a0be1-6366-4692-9597-ebd5cd0f01d1' and id=1422792010 and deviceid='6c5d1a30-2331-448b-a726-a380d6b3a432';
01-05-2016
08:21 PM
3 Kudos
Tutorial Link
Sandbox Version: HDP 2.3.2
Ambari Version: 2.1.2
Hadoop stack version: Hadoop 2.7.1.2.3.2.0-2950

Issue 1: yum reports nothing available when executing the command yum groupinstall “Development tools“
Resolution: This occurs when you copy and paste the command: the pasted double quotes are typographic (“ ”) rather than ASCII ("). Run the command with straight double quotes:
yum groupinstall "Development tools"
The same occurs with pip install “ipython[notebook]“. Instead run:
pip install "ipython[notebook]"

Issue 2: No ~/.ipython/profile_pyspark found after executing the command ipython profile create pyspark
Resolution: IPython was updated to 4.0.0, which uses Jupyter. Run:
jupyter notebook --generate-config
Then edit the generated file:
nano /root/.jupyter/jupyter_notebook_config.py

Issue 3: --profile error when executing "~/start_ipython_notebook.sh"
Resolution: Use IPYTHON_OPTS="notebook" pyspark instead of IPYTHON_OPTS="notebook --profile pyspark" pyspark
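The typographic-quote problem above can be fixed mechanically before running a pasted command. A minimal sketch (the pasted string is illustrative; the sed substitutions simply swap curly double quotes for ASCII ones):

```shell
# Hypothetical helper: replace curly double quotes from a copy/paste
# with straight ASCII quotes so yum/pip parse the arguments correctly.
pasted='yum groupinstall “Development tools“'
fixed=$(printf '%s' "$pasted" | sed 's/“/"/g; s/”/"/g')
echo "$fixed"
```

The same one-liner works for the pip command, since only the quote characters are rewritten.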
01-05-2016
07:32 PM
I believe that if the specified Blueprint uses an external PostgreSQL database, Ambari cannot use hostname substitution.
01-05-2016
12:48 AM
1 Kudo
@terry Before Ambari 2.1.2, in order to have the Ranger admin UI do authenticated binds to find role information, we used to set these two properties as custom properties:
ranger.ldap.bind.dn
ranger.ldap.bind.password
Because of this, the value of ranger.ldap.bind.password was always displayed in cleartext. This was fixed in Ambari 2.1.2, where ranger.ldap.bind.password is specified as a password field so the value is obscured. The relevant JIRA is AMBARI-12896.
If you were referring to the Ranger Usersync LDAP bind password, that was only fixed in Ambari 2.2. Please double-check which version of Ambari you are using.
01-05-2016
12:26 AM
@Peter Coates
A good strategy, if you are able to, is to add a few nodes at a time, for example two or three, and wait for these nodes to be allocated new file data before adding the others. If you add all ten nodes at once, then yes, the cluster would be moderately to severely imbalanced, depending on what your node count and utilization were before. You can also selectively put one or two existing DataNodes into maintenance mode, shut them down, and wait for their blocks to replicate before bringing them back up again.
The rebalancer does not dynamically defer to normal processing. Raising the dfs.datanode.balance.bandwidthPerSec setting during off hours sounds reasonable to me.
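The off-hours bandwidth change takes a value in bytes per second. A minimal sketch, assuming the setting in question is the balancer bandwidth cap (dfs.datanode.balance.bandwidthPerSec), which can also be adjusted at runtime with hdfs dfsadmin -setBalancerBandwidth; the 100 MB/s target is an illustrative number:

```shell
# Convert a target balancer bandwidth in MB/s to the bytes/sec value
# the setting expects.
mb_per_sec=100
bytes_per_sec=$((mb_per_sec * 1024 * 1024))
echo "$bytes_per_sec"
# On the cluster (not runnable here), apply it at runtime without a restart:
# hdfs dfsadmin -setBalancerBandwidth "$bytes_per_sec"
```

A cron job that raises the value in the evening and lowers it again in the morning would implement the off-hours scheme described above.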
01-05-2016
12:05 AM
2 Kudos
Ranger is just for authorization. For central authentication, you can authenticate against LDAP or AD.
For local authentication, you can authenticate as a local Unix user. For truly secure authentication, you need Kerberos, with either an MIT KDC or AD as your KDC. Yes, without Kerberos it is possible to spoof a user. See also this HCC post: Kerberos, AD, Ranger
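To illustrate the spoofing point: with simple (non-Kerberos) authentication, a Hadoop client's identity is whatever the client claims it to be. A minimal sketch using the standard HADOOP_USER_NAME override; the hadoop command itself is shown commented out since it needs a live cluster:

```shell
# Under simple auth, the client-supplied identity is trusted as-is:
export HADOOP_USER_NAME=hdfs
echo "$HADOOP_USER_NAME"
# hadoop fs -ls /user   # would execute as the "hdfs" superuser
# With Kerberos enabled, this override is ignored and a valid ticket is required.
```

This is why Ranger policies alone do not secure a cluster: authorization decisions are only as trustworthy as the authenticated identity behind them.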
01-04-2016
04:55 PM
5 Kudos
Tutorial Link
Sandbox Version: HDP 2.3.2
Ambari Version: 2.1.2
Hadoop stack version: Hadoop 2.7.1.2.3.2.0-2950

Issue 1: Error initializing SparkContext when executing the spark-shell command
When you issue the command as root:
spark-shell --master yarn-client --driver-memory 512m --executor-memory 512m
you would receive the error:
ERROR SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root/.sparkStaging/application_1451921066894_0001":hdfs:hdfs:drwxr-xr-x
Resolution:
sudo su - hdfs
hdfs dfs -mkdir /user/root
hdfs dfs -chown root:hdfs /user/root
exit

Issue 2
01-04-2016
04:15 PM
Tutorial Link
Sandbox Version: HDP 2.3.2
Ambari Version: 2.1.2
Hadoop stack version: Hadoop 2.7.1.2.3.2.0-2950

Issue 1: Permission Denied
Copying the data over to HDFS on the Sandbox with the following command will result in a Permission denied error:
hadoop fs -put ~/Hortonworks /user/guest/Hortonworks
Resolution: See Hands-on Spark Tutorial: Permission Denied
12-31-2015
09:02 PM
2 Kudos
Specifically for /user/guest/Hortonworks do this:
sudo su - hdfs
hadoop fs -chmod -R 777 /user/guest
exit