Member since: 09-11-2015
Posts: 115
Kudos Received: 126
Solutions: 15
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3046 | 08-15-2016 05:48 PM
 | 2857 | 05-31-2016 06:19 PM
 | 2397 | 05-11-2016 03:10 PM
 | 1857 | 05-10-2016 07:06 PM
 | 4746 | 05-02-2016 06:25 PM
11-10-2021
12:54 AM
Hi, it should. But if you need to use certificates signed by your organisation, do the following.

Convert the .p12 to a .pfx (you will also need the PEM file):

openssl pkcs12 -export -out YOUROWNNAME.pfx -inkey YOUR_KEYS.pem -in YOUR_KEYS.pem -certfile YOUR_KEYS.pem

Once you have the .pfx file, import it into a JKS keystore:

keytool -importkeystore -srckeystore gateway.pfx -srcstoretype pkcs12 -srcalias [ALIAS_SRC] -destkeystore [MY_KEYSTORE.jks] -deststoretype jks -deststorepass [PASSWORD_JKS] -destalias gateway-identity

To find [ALIAS_SRC], read it from the .pfx file:

keytool -v -list -storetype pkcs12 -keystore YOUROWNNAME.pfx

At the end, move the keystore into place:

mv gateway.jks /var/lib/knox/data-2.6.4.0-91/security/keystores/
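As an optional sanity check before restarting Knox, you can confirm the gateway-identity alias is present in the new keystore (this assumes the keystore path and placeholder password from the commands above):

keytool -list -v -keystore /var/lib/knox/data-2.6.4.0-91/security/keystores/gateway.jks -storepass [PASSWORD_JKS] | grep "Alias name"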
06-30-2020
01:06 AM
I see that you use Active Directory. Did you use the below property?

<property>
  <name>hive.server2.authentication.ldap.Domain</name>
  <value>AD_Domain</value>
</property>
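If the property is in place, a quick connection test should show whether AD authentication succeeds; the hostname, port, and credentials below are only placeholders for your environment:

beeline -u "jdbc:hive2://your-hiveserver2-host:10000/default" -n ad_user -p 'ad_password'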
05-19-2016
03:59 AM
10 Kudos
SmartSense is an excellent tool for keeping your cluster running at optimal efficiency while maintaining operational best practices. We’ve combined knowledge from the greatest minds in the industry, and use it to analyze metadata about your cluster from the bundles you submit. Have you ever wondered exactly what data you’re sending to SmartSense? The SmartSense Admin Guide contains a high-level description (see What’s Included in a Bundle), but for the greatest understanding you should extract a bundle and explore it with your own eyes!

Obtain a Bundle

There are two types of bundles:

Analysis Bundle: configs and metrics for all services on all hosts
Troubleshooting Bundle: Analysis Bundle + logs for selected service(s)

To begin, let’s capture an Analysis Bundle and download an unencrypted copy to our local machine. The bundle is a gzipped tar file that contains a gzipped tar file from each host running the HST Agent. In the following examples, notice the bundle variable excludes the .tgz extension.

Linux or OS X users can extract everything with a bash for-loop:

bundle=a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35
tar zxf $bundle.tgz && cd $bundle && for i in * ; do tar zxf "$i" ; rm "$i" ; done

Windows users can use a similar process with a utility like 7-Zip. Assuming 7z.exe is in your path:

setlocal
set bundle=a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35
7z x %bundle%.tgz && 7z x %bundle%.tar && del %bundle%.tar && cd %bundle%
for %i in (*.tgz) do 7z x %i && del %i
for %i in (*.tar) do 7z x %i && del %i
endlocal

Exploring Bundle Contents

NOTE: Example console output was obtained from a SmartSense 1.2.1 bundle and may differ in future versions. The output is also truncated for brevity. You’re encouraged to follow along with a bundle from your own cluster.

For a convenient overview of the bundle contents, use the tree command, limited to a depth of 3:

MyLaptop:a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35 myuser$ tree -L 3
.
├── meta
│ └── metadata.json
├── mgmt.zoeocuz.com-a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35
│ ├── os
│ │ ├── logs
│ │ └── reports
│ └── services
│ ├── AMBARI
│ ├── AMS
│ ├── HDFS
│ ├── HST
│ ├── MR
│ ├── TEZ
│ ├── YARN
│ └── ZK
├── node1.zoeocuz.com-a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35
│ ├── os
│ │ ├── logs
│ │ └── reports
│ └── services
│ ├── AMBARI
...
41 directories, 4 files

At the root of the bundle, we see a ‘meta’ folder, and a folder per host. The meta folder contains some bundle metadata. Note that domain names are anonymized (my cluster uses example.com). Let’s take a look inside the two subfolders (os & services) per host...

Bundle Contents: OS

The os folder contains a couple of system logs and a variety of reports. Here’s a sample from my cluster:

MyLaptop:node1.zoeocuz.com-a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35 myuser$ tree -I "blockdevices" os/
os/
├── logs
│ └── messages.log
└── reports
├── chkconfig.txt
├── cpu_info.txt
├── dns_lookup.txt
├── dstat.txt
├── error_dmesg.txt
├── file_max.txt
...
5 directories, 49 files

Most of the filenames here are self-explanatory. Reports generally contain output from system commands or the /proc filesystem. These system characteristics serve as valuable inputs for determining your cluster’s optimal configuration.

Bundle Contents: Services

Within each host folder, the services subfolder contains configurations and reports for every HDP service on that host. Here’s an example from my node1:

MyLaptop:node1.zoeocuz.com-a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35 myuser$ tree -L 3 services
services
├── AMBARI
│ ├── conf
│ │ ├── ambari-agent.ini
│ │ ├── ambari-agent.pid
│ │ └── logging.conf.sample
│ └── reports
│ ├── ambari_rpm.txt
│ ├── postgres_rpm.txt
│ ├── postmaster.txt
│ └── process_info.txt
├── AMS
│ ├── conf
│ │ ├── ams-env.sh
│ │ ├── metric_groups.conf
│ │ └── metric_monitor.ini
│ ├── metrics
│ │ └── ams
│ └── reports
│ └── ams_rpm.txt
...
32 directories, 157 files

The conf folders are copied from their respective locations under /etc/ (or /var/run for the .pid files). Reports contain JMX metrics and output from CLI commands, such as the YARN application list. You can explore the contents using text processing commands like grep, sort, and uniq, which might be sufficient for your needs. Another option is to use a text editor with a file-tree view.

Text Editors

Here are three open source text editors that integrate a file-tree for easy navigation (see attachments at the bottom for full-size images)...

TextMate 2 (OS X)
Notepad++ (Windows)
Vim + NerdTree (Linux, OS X)

Anonymization Rules

The default set of anonymization rules will protect IP addresses, hostnames, and password fields in standard HDP configuration files. You can modify or add anonymization rules if desired. Watch for a future HCC article where we take a deep dive into anonymization. After making any changes to the anonymization ruleset, it is wise to verify everything is still functioning as intended. This can be accomplished by downloading an unencrypted bundle and examining its contents using the methods described above.

Until Next Time...

Keeping in mind that we only looked within a single host folder, and that my demo cluster has the minimum number of components for a functioning HDP stack, we can see that every bundle is packed with useful information. Knowing exactly what’s included in a SmartSense bundle provides peace of mind, and the trust that your confidential data remains secure and private.
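For example, a quick grep over an extracted bundle can help confirm that nothing sensitive slipped past the anonymization rules. The search terms below are only illustrative; substitute the hostnames, IPs, and secrets used in your own environment:

cd a-00000000-c-00000000_supportlab_0_2016-05-17_23-05-35
grep -ril "example.com" .
grep -ril "password" . | sort | uniq

Any file listed by these commands is worth opening in your editor to verify that the matching values are masked.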
05-18-2016
07:32 PM
Good find! Here's a copy of the workaround:

1. Replace /var/lib/knox/data/services/yarn-ui/2.7.1/rewrite.xml with the attached rewrite.xml (change ownership to knox:knox)
2. Restart Knox

Note that "data" might be version-specific (e.g. data-2.4.2.0-258), or you can use /usr/hdp/current/knox-server/data/ instead. The fixed rewrite.xml is attached.
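A rough sketch of those steps from the command line, assuming the /usr/hdp/current symlink layout mentioned above (adjust the data directory to match your version, and prefer restarting Knox through Ambari if it manages the service):

cp rewrite.xml /usr/hdp/current/knox-server/data/services/yarn-ui/2.7.1/rewrite.xml
chown knox:knox /usr/hdp/current/knox-server/data/services/yarn-ui/2.7.1/rewrite.xml
su -l knox -c "/usr/hdp/current/knox-server/bin/gateway.sh stop && /usr/hdp/current/knox-server/bin/gateway.sh start"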
06-02-2017
03:16 AM
2 Kudos
For SmartSense versions 1.3.0 and above, we can use the below CLI command to regenerate the SSL keys on agents:

# hst reset-agent
03-30-2016
06:16 PM
Narasimha, here are some great docs on Knox: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_Knox_Gateway_Admin_Guide/content/ch01.html. Also note the guides posted by others here to help you with the setup. Eric
03-21-2016
02:26 PM
1 Kudo
Database and table metadata is stored in the Hive Metastore, not in HDFS, so a different approach is needed to restrict this info from being sent to HiveServer2 clients. This feature was added in Hive 1.2.0 by HIVE-9350. You may need to use Ranger to achieve this functionality, which was added in RANGER-238. Both of these features are included in HDP 2.3.0+
02-27-2016
01:47 AM
@Prakash Punj Did you copy the file locally instead of to HDFS, as I mentioned in my reply?
02-04-2016
09:48 PM
4 Kudos
Knox is composed of a number of Gateway Services at its core. Among these are one for setting up the Jetty SSL listener (JettySSLService) and another for protecting various credentials (AliasService) in order to keep them out of clear-text config. These services are able to leverage each other for various things. The JettySSLService has the AliasService injected in order to get to the protected gateway-identity-passphrase. This is a password that is stored at the gateway level, as opposed to the topology or cluster level. While mixing the two concerns would be possible with the functionality of JCEKS, it would inappropriately couple the two services' implementations together. There have always been plans to look for a more central and secure credential store. The addition of the Credential Provider API in Hadoop opens up this possibility. Currently, the primary credential provider in Hadoop is still a JCEKS provider. We need a server that sits over secure storage and requires authentication before handing out the protected passwords. Once this is available, the current design in Knox will allow us to transition to a central credential server without the JettySSLService even being aware.
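As an illustration of the AliasService at work, the Knox CLI can create and list gateway-level aliases so the values never appear in clear-text config; the passphrase value below is just a placeholder, and the path assumes a default HDP install:

cd /usr/hdp/current/knox-server
bin/knoxcli.sh create-alias gateway-identity-passphrase --value MySecretPassphrase
bin/knoxcli.sh list-alias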
11-19-2015
11:01 AM
@Alex Miller Let's file a bug