Member since: 03-01-2016
Posts: 45
Kudos Received: 78
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2989 | 01-19-2018 10:46 PM
 | 7941 | 01-18-2017 08:09 PM
 | 7749 | 11-21-2016 10:15 PM
 | 4837 | 11-07-2016 02:09 AM
 | 5514 | 11-04-2016 09:31 PM
11-07-2016 02:17 AM
@srini one thing you could try is, instead of using that one attribute to bucket on, create another attribute that denormalizes all of the attributes into one string, and use that column to bucket on (still leaving the other attributes in place). When you have duplicate columns this would lead to those dupes being bucketed under that one column in the subsequent operation. Then the rest of the operations would pick the unique one and remove the denormalized column. It's a bit of a dance but I think it could work. Does that make sense? One thing I think would be good to get on the radar is upgrading Jolt in NiFi, perhaps once the modify feature upgrades from beta. I think that will help to simplify some of the hoops needed to do this type of work.
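The denormalize-and-bucket idea can be sketched outside of Jolt. This is an illustrative Python sketch (not NiFi or Jolt code; the records are made up) showing how joining all of a record's attributes into one string key makes exact duplicates collapse into a single bucket:

```python
# Illustrative sketch (not Jolt): denormalize each record's attributes into
# one string key, bucket on that key, then keep one record per bucket.
records = [
    {"id": "1", "color": "red"},
    {"id": "1", "color": "red"},   # exact duplicate of the first record
    {"id": "2", "color": "blue"},
]

buckets = {}
for rec in records:
    # the denormalized "extra attribute": all values joined into one string
    key = "|".join(f"{k}={rec[k]}" for k in sorted(rec))
    buckets[key] = rec             # duplicates land in the same bucket

deduped = list(buckets.values())   # one record per unique attribute set
print(deduped)
```

The same effect is what the extra denormalized column achieves in the Jolt bucketing step: identical rows share one bucket, so a later operation can keep just one.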
11-07-2016 02:09 AM
1 Kudo
Hi
@srini,
NiFi does support the addition of custom transformations which can be referenced with a "drop in" jar. You could create your own transformation class, which extends the Jolt library, that provides the functionality you need and make it available to NiFi on the file system. Given that this is a new function in a later version of Jolt, I would be careful using that as a custom jar; it may cause a conflict, but I haven't tested this myself to be sure. Below are some use cases on how to apply custom functions in NiFi.
Case 1 – Custom Transform Selected
In this case, if the Custom option of Transform is selected, then 1) a Custom Transform Class Name should be entered and 2) one or more module paths should be provided. The Custom Transform Class Name should be a fully qualified classname (e.g. eu.zacheusz.jolt.date.Dater). The Module Path can take a comma-delimited list of directory locations or one or more jar files. Once these fields are populated, the Advanced view will support validation and saving of the specification. A user can switch between transformation types in the UI, but not between custom class names and module paths.
Case 2 – Chain Transformation Selected with Custom Transformations embedded
In this case, if you want to use one or more transforms (that include custom transformations) in your specification, then the Chainr spec option can be used with one or more module paths provided. In this case the Custom Transform Class Name property is not required and would be ignored (since one or more custom transformations could be invoked in the spec). As in Case 1, the Advanced view will support validation and saving of the specification if the required fields are populated for this case. I hope this helps! Please let me know if you have any questions.
11-04-2016 09:31 PM
3 Kudos
Hi @srini, Try the following with the Chain specification type selected in the processor:

[
  { "operation": "shift", "spec": { "*": { "id": "[&1].id" } } },
  { "operation": "shift", "spec": { "*": "@id[]" } },
  { "operation": "cardinality", "spec": { "*": { "@": "ONE" } } },
  { "operation": "shift", "spec": { "*": { "id": "[].id" } } }
]

What I did was add a bucket shift operation which sorts identical JSON entries into "buckets", used cardinality to select just one entry, and then another shift to produce the output you need. Please let me know if this works for you.
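As a sanity check on what the chain produces, here is a rough Python equivalent (illustrative only, not Jolt, and the sample entries are made up): entries are bucketed by id, one entry is kept per id, and the result is reshaped back into a list of objects:

```python
# Rough Python equivalent (illustrative) of the Jolt chain's net effect:
# bucket entries by id, keep one per id, emit the original list shape.
entries = [{"id": "a"}, {"id": "a"}, {"id": "b"}]

seen = {}
for e in entries:
    seen.setdefault(e["id"], e)   # cardinality ONE: first entry per id wins

result = [{"id": i} for i in seen]
print(result)   # one {"id": ...} object per unique id
```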
10-11-2016 12:44 PM
15 Kudos
Credits to @mgilman for contributing to this article:
Since the introduction of NiFi 1.0.0, administrators have a greater ability to manage policy through the addition of Ranger integration and a more granular authorization model. This article provides a guide for those looking to define and manage NiFi policies in Ranger. To learn more about configuring NiFi to use Ranger via Ambari please review the parent article HDF 2.0 - Integrating Secured NiFi with Secured Ranger for Authorization Management.
Behind the scenes NiFi uses a REST based API for all user interaction; therefore resource based policies are used in Ranger to define users' level of permissions when executing calls against these REST endpoints via NiFi's UI. This allows administrators to define policies by selecting a NiFi resource/endpoint, choosing whether users have Read or Write (Modify) permissions to that resource, and selecting users who will be granted the configured permission. For example, the image below shows a policy in Ranger where a user is granted the ability to View Flows in NiFi’s interface. This was configured by selecting /flow as the authorized resource and granting the selected user the Read permission to that resource.
Example of Global Level Policy Definition with Kerberos User Principal
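To make the resource/permission/user model concrete, here is a minimal, hypothetical sketch of how such a policy check could be evaluated (this is not Ranger's actual implementation, and the user DN is invented):

```python
# Hypothetical sketch of resource-based policy evaluation (not Ranger code).
policies = [
    # (resource, permission, users) -- mirrors the Ranger policy fields
    ("/flow", "READ", {"CN=alice, OU=NIFI"}),
]

def is_authorized(user, resource, permission):
    """True if any policy grants `permission` on `resource` to `user`."""
    return any(
        res == resource and perm == permission and user in users
        for res, perm, users in policies
    )

print(is_authorized("CN=alice, OU=NIFI", "/flow", "READ"))   # True
print(is_authorized("CN=alice, OU=NIFI", "/flow", "WRITE"))  # False
```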
Policies can be created that will apply authorizations to features at a global level or on a specific component level in NiFi. The following describes the policies that can be defined in Ranger using a combination of the indicated NiFi Resource and Permission (Read or Write).
Global Policies:
Policy | Privilege | NiFi Resource | Permission(s)
---|---|---|---
View the user interface | Allows users to view the user interface | /flow | Read
Access the controller | Allows users to view/modify the controller, including Reporting Tasks, Controller Services, and clustering endpoints. Explicit access to reporting tasks and controller services can be overridden | /controller | Read (for View), Write (for Modify)
Query Provenance | Allows users to submit a Provenance Search and request Event Lineage. Access to actual provenance events or lineage will be based off the data policies of the components that generated the events. This simply allows the user to submit the query. | /provenance | Read
Access Users/User Groups | Allows users to view/modify users and user groups | /tenants | Read (View Users/Groups), Write (Modify Users/Groups)
Retrieve Site-To-Site Details | This policy should be granted to other NiFi systems (or Site-To-Site clients) in order to retrieve the listing of available ports (and peers when using HTTP as the transport protocol). Explicit access to individual ports is still required to see and initiate Site-To-Site data transfer. | /site-to-site | Read (Allow retrieval of data)
View System Diagnostics | This policy should be granted in order to retrieve system diagnostic details including JVM memory usage, garbage collection, system load, and disk usage. | /system | Read
Proxy User Requests | This policy should be granted to any proxy sitting in front of NiFi or any node in the cluster that will be issuing requests on a user's behalf. | /proxy | Write (granted on node users defined in Ranger)
Access Counters | This policy should be granted to users to retrieve and reset counters. This policy is separate from each individual component, as counters can also be rolled up according to type. | /counters | Read (Read counter information), Write (Reset counters)
Note: Setting authorizations on the /policy resource is not applicable when using Ranger, since NiFi’s policy UI is disabled when Ranger authorization is enabled.
Component Policies
Component Level policies can be set in Ranger on individual components of the flow within a process group, or on the entire process group (with the root process group being the top level for all flows). Most component types (except for connections) can have a policy applied directly to them. For example, the image below demonstrates a policy defined for a specific processor instance (noted by the unique identifier included in the resource URL) which grants Read and Modify permissions to the selected user.
Example of Component Level Policy for Kerberos User Principal

If no policy is available on the specific component then it will look to the parent process group for policy information. Below are the available resources for components where a specific policy can be applied to an instance of that component. Detailed information on component descriptions can be found in the NiFi Documentation.
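The fallback behavior, where a component without its own policy inherits from its parent process group, can be sketched as follows (an illustrative model only, not NiFi's internals; the identifiers and user names are invented):

```python
# Illustrative sketch of component policy inheritance (not NiFi internals):
# walk up the process-group hierarchy until a policy is found.
policies = {
    "/process-groups/root": {"alice": {"READ", "WRITE"}},
    "/processors/1234": {"bob": {"READ"}},
}
parents = {
    "/processors/1234": "/process-groups/root",
    "/processors/5678": "/process-groups/root",
    "/process-groups/root": None,
}

def effective_policy(resource):
    """Return the nearest policy, falling back to parent process groups."""
    while resource is not None:
        if resource in policies:
            return policies[resource]
        resource = parents.get(resource)
    return {}

print(effective_policy("/processors/5678"))  # inherits the root group policy
print(effective_policy("/processors/1234"))  # has its own explicit policy
```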
Component Type | Resource (REST API) | Description (from NiFi Documentation)
---|---|---
Controller Service | /controller-services | Extension point that provides centralized access to data/configuration information for other components in a flow
Funnel | /funnels | Combines data from several connections into one connection
Input Port | /input-ports | Used to receive data from other data flow components
Label | /labels | Documentation for a flow
Output Port | /output-ports | Used to send data to other data flow components
Processor | /processors | NiFi component that pulls data from or publishes data to external sources, or routes, transforms, or extracts information from flow files
Process Group | /process-groups | An abstract context used to group multiple components (processors) to create a sub flow. Paired with input and output ports, process groups can be used to simplify complex flows into logical flows
Reporting Task | /reporting-tasks | Runs in the background and provides reporting data on the NiFi instance
Template | /templates | Represents a predefined dataflow available for reuse within NiFi. Can be imported and exported
The following table describes the types of policies that can be applied to the previously mentioned components. Note: UUID is the unique identifier of an individual component within the flow.
Policy | Description | REST API
---|---|---
Read or Update Component | This policy should be granted to users for retrieving component configuration details and modifying the component. | Read/Write on: /{resource}/{uuid} (e.g. /processors/{uuid})
View Component Data or Allow Emptying of Queues and Replaying | This policy should be granted to users for retrieving or modifying data from a component. Retrieving data encompasses listing of downstream queues and provenance events. Modifying data encompasses emptying of downstream queues and replay of provenance events. Additionally, data-specific endpoints require that every link in the request chain is authorized with this policy; since requests traverse each link, each proxy must be authorized to have the data. | Read/Write on: /data/{resource}/{uuid}
Write Receive Data, Write Send Data | These policies should be granted to other NiFi instances and Site-To-Site clients that will be sending/receiving data from the specified port. Once a client has been added to a port-specific Site-To-Site policy, that client will be able to retrieve details about this port and initiate a data transfer. Additionally, data-specific endpoints require that every link in the request chain is authorized with this policy; since requests traverse each link, each proxy must be authorized to have the data. | Write on: /data-transfer/input-ports/{uuid}, /data-transfer/output-ports/{uuid}
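The requirement that every link in the request chain be authorized can be sketched as a simple all-hops check (hypothetical illustration, not NiFi's implementation; identities are invented): the end user and each proxy node must all hold the data policy for the request to pass.

```python
# Hypothetical sketch: data requests succeed only if every identity in the
# chain (the end user plus each proxy node) holds the data policy.
data_policy_users = {"CN=alice", "CN=node1, OU=NIFI", "CN=node2, OU=NIFI"}

def chain_authorized(request_chain):
    """request_chain: end-user identity followed by each proxy identity."""
    return all(identity in data_policy_users for identity in request_chain)

print(chain_authorized(["CN=alice", "CN=node1, OU=NIFI"]))   # True
print(chain_authorized(["CN=alice", "CN=unknown-proxy"]))    # False
```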
For more information on Authorization configuration with Ranger and NiFi please review
http://bryanbende.com/development/2016/08/22/apache-nifi-1.0.0-using-the-apache-ranger-authorizer
https://community.hortonworks.com/articles/57980/hdf-20-apache-nifi-integration-with-apache-ambarir.html
10-05-2016 02:52 PM
15 Kudos
UPDATE: This article has been vetted against HDF 2.0 - HDF 3.2. Minor updates have been made for additional clarity on use of NiFi CA for establishing trust with Ranger.
Prerequisites
- NiFi has been installed and is running with SSL enabled (with certificates manually installed or using the NiFi Certificate Authority). You will need the keystore/truststore names, locations, aliases, identity (DN), and passwords used when enabling SSL for NiFi. Ensure that all nodes have the same keystore/truststore passwords applied and like-named locations in order to apply Ranger NiFi plugin configurations consistently via Ambari.
- Ranger has been installed and configured with security enabled. For instructions on setting up SSL for Ranger please review Configure Ambari Ranger SSL. Note the name, location, aliases, identity (DN), and passwords used when creating keystores and truststores for Ranger.
- If Kerberos will be used, it is recommended that it be enabled for the HDF cluster before proceeding.
Part 1 - Establishing trust between NiFi nodes and Ranger
In order for NiFi nodes to communicate over SSL with Ranger, and Ranger to communicate with secured NiFi, certificates should be imported from the Ranger host to NiFi nodes and vice versa. In these instructions we will use the same keystore/truststore used to secure Ranger in order to communicate with NiFi; however it is possible to also generate additional keystore/truststores that are dedicated solely to NiFi communication.
1. Create certificate files from Ranger’s keystore – Use the following command to generate a certificate file:
{java.home}/bin/keytool -export -keystore {ranger.keystore.file} -alias {keystore-alias} -file {cert.filename}
Example:
/usr/jdk64/jdk1.8.0_77/bin/keytool -export -keystore /etc/security/certs/ranger/ranger-admin-keystore.jks -alias rangeradmin -file /etc/security/certs/ranger/ranger-admin-trust.cer
2. Import the generated Ranger certificate file into the trust stores for all nifi nodes in the cluster:
{java.home}/bin/keytool -import -file {ranger.cert.filename} -alias {ranger.keystore.alias} -keystore {nifi.node.ssl.truststore} -storepass {nifi.node.ssl.truststore.password}
Example:
/usr/jdk64/jdk1.8.0_77/bin/keytool -import -file /etc/security/certs/ranger/ranger-admin-trust.cer -alias rangeradmin -keystore /usr/hdf/current/nifi/conf/truststore.jks -storepass {nifi.truststore.password}
3. Create certificate files for import into Ranger’s trust store. There are two ways to approach this:
a) If NiFi Certificate Authority is in use, a certificate from the CA can be generated and imported into Ranger's trust store using the following steps:
i) Create a certificate file from NiFi-CA using command below:
{java.home}/bin/keytool -export -keystore {nifi-ca.keystore.file} -alias {nifi-ca.keystore-alias} -file {nifi-ca-cert.filename}
ii) Import the NiFi CA certificate into Ranger's truststore* using the below command:
{java.home}/bin/keytool -import -file {nifi-ca.cert.filename} -alias {nifi-ca.keystore.alias} -keystore {ranger.ssl.truststore} -storepass {ranger.ssl.truststore.password}
b) If an external CA or self signed certificates are used and manual keystores and truststores were provided for each NiFi node then perform the following:
i) Create certificate files from each nifi node's keystore using the command below:
{java.home}/bin/keytool -export -keystore {nifi.keystore.file} -alias {nifi.keystore-alias} -file {cert.filename}
ii) Import the nifi certificate files into Ranger's truststore* (repeat for each cert generated; remember that any duplicate aliases may need to be changed using the -changealias command before importing new ones):
{java.home}/bin/keytool -import -file {nifi.cert.filename} -alias {nifi.keystore.alias} -keystore {ranger.ssl.truststore} -storepass {ranger.ssl.truststore.password}
*NOTE: the truststore used by Ranger may be the default cacerts truststore located under {java.home}/jre/lib/security/cacerts*
Part 2 – Enabling Ranger NiFi Plugin via Ambari
Enabling the Ranger-NiFi Plugin should lead to Ambari creating a Service Repository entry in Ranger which will store information for Ranger to communicate with NiFi and store the authorized identity of the NiFi node[s] that will communicate with Ranger.
From the Ambari UI perform the following steps:
1. Under the Ranger configuration section go to the “Ranger Plugin” tab and switch the NiFi Ranger Plugin toggle to “ON”. When prompted save the configuration.
2. If Ranger Auditing will be used, under the Ranger configuration section go to the “Ranger Audit” tab and, if not already enabled, switch the “Audit to Solr” toggle to “On”. This will produce options to enter connection properties for a Solr instance. To use with Ambari Infra (Internal SolrCloud) enable the “SolrCloud” toggles to “ON” as well. Ambari will pre-populate the zookeeper connection string values and credentials. If an External Solr is used the connection values will need to be provided. When prompted save the configuration.
3. Under the NiFi configuration screen go to the ranger-nifi-plugin-properties section. This section stores all the information needed to support Ranger communication with NiFi (to retrieve NiFi REST endpoint data).
Complete the following in the ranger-nifi-plugin-properties section:
a) Confirm that “Ranger repository config password” and “Ranger repository config user” are pre-populated. These values are set by default by Ambari and refer to Ranger’s admin user and password
b) Authentication - Enter “SSL” if not already detected and pre-populated by Ambari. This will indicate to Ranger that NiFi is running with SSL
c) Keystore for Ranger Service Accessing NiFi - Provide the keystore filename with location path that Ranger will use for SSL communications with NiFi. This should correspond to the keystore used to generate a certificate in Part 1, Step 1.
d) Keystore password - Enter the password for the above keystore
e) Keystore Type – Enter the keystore type for the provided keystore (e.g. JKS)
f) Truststore for Ranger Service Accessing NiFi – Enter the filename with location path of the truststore for the Ranger service
g) Truststore password – Enter the password for the above truststore
h) Truststore type – Enter the truststore type for the provided truststore (e.g. JKS)
i) Owner for Certificate – Enter the identity (Distinguished Name or DN) of the certificate used by Ranger
j) Policy user for NiFi – This should be set by default to the value “nifi”
k) Enable Ranger for NiFi – This should be checked (enabled to true)
4. Next go to the ranger-nifi-policymgr-ssl section. This section stores the information NiFi will use to communicate with the secured Ranger service.
Complete the following in the ranger-nifi-policymgr-ssl section:
a) owner.for.certificate – Enter the identity (Distinguished Name or DN) of the NiFi node(s) that will communicate with Ranger. To refer to multiple node identities, this value can use a regex by adding a regex prefix along with the expression (e.g. CN=regex:ydavis-kb-ranger-nifi-demo-[1-9]\.openstacklocal, OU=NIFI to match the DNs of nodes 1 through 9). This value is not required if Kerberos is enabled on HDF. Update: This regular expression feature will be available in HDF 2.0.1.
b) xasecure.policymgr.clientssl.keystore – Enter the keystore location and filename that NiFi will use to communicate with Ranger. This keystore reference should be the same file used to create and import a certificate into Ranger. (Ensure that for multi-node cluster this keystore location is consistent across all nifi node hosts)
c) xasecure.policymgr.clientssl.keystore.credential.file – This value is populated by default and is used by the plugin to generate a file to store credential information. No change to this value is required.
d) xasecure.policymgr.clientssl.truststore – Enter the truststore location and filename that NiFi will use to communicate with Ranger.
e) xasecure.policymgr.clientssl.truststore.credential.file - This value is populated by default and is used by the plugin to generate a file to store credential information. No change to this value is required.
f) xasecure.policymgr.clientssl.truststore.password – Enter the password for the provided truststore file.
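The regex DN pattern from step (a) can be sanity-checked with ordinary regular-expression matching. This sketch only illustrates the pattern itself against two made-up DNs; it does not reproduce how Ranger parses the regex: prefix:

```python
import re

# The DN pattern from step (a): matches node hosts numbered 1 through 9.
pattern = r"CN=ydavis-kb-ranger-nifi-demo-[1-9]\.openstacklocal, OU=NIFI"

dn_ok = "CN=ydavis-kb-ranger-nifi-demo-3.openstacklocal, OU=NIFI"
dn_bad = "CN=ydavis-kb-ranger-nifi-demo-10.openstacklocal, OU=NIFI"

print(bool(re.fullmatch(pattern, dn_ok)))   # True  -- node 3 matches
print(bool(re.fullmatch(pattern, dn_bad)))  # False -- node 10 is out of range
```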
5. The other two sections for the Ranger NiFi plugin (ranger-nifi-security and ranger-nifi-audit) do not require additional configuration; however, they can be reviewed for the following:
Confirm the following in ranger-nifi-audit section:
a) Audit to SOLR is enabled (if Ranger Audit was enabled in Part 2, Step 2)
b) xasecure.audit.destination.solr.urls is completed (if an external Solr instance was referenced in Step 2)
c) xasecure.audit.destination.solr.zookeepers is completed and matches the connection string (if Ambari Infra or external SolrCloud was enabled in Step 2)
d) xasecure.audit.is.enabled is set to true
Confirm the following in the ranger-nifi-security section:
a) ranger.plugin.nifi.policy.rest.ss.config.file is set to ranger-policymgr-ssl.xml
b) ranger.plugin.nifi.policy.rest.url refers to the ambari variable for Ranger service {{policy_mgr_url}} (any replacement here means that a Ranger service external to the HDF installation is the target)
6. Save all NiFi configuration changes
7. Restart all required services and ensure that Ambari indicates services have restarted successfully
Part 3 – Confirm Ranger Configuration and Setting up Policies
1. Go to the Ranger Admin UI and using the Access Manager menu select “Resource Based Policies”. Confirm that an entry for NiFi exists in the NiFi Service Manager. The entry name is dynamically created based on the Ambari cluster name (see example below).
2. Select the edit button next to the service repository entry and confirm that the properties from the ranger-nifi-plugin-properties are accurately populated. Also confirm the NiFi URL provided (usually 1 node is used)
3. Confirm that the commonNameForCertificate value is the CN value from the Owner for Certificate property from ranger-nifi-plugin-properties.
4. Using the Ranger Menu go to the “Audit” screen and select the plugin tab. You should see one or more entries from each node in the cluster showing NiFi syncing with Ranger policies.
5. If not using Usersync in Ranger, manually create new users in Ranger which correspond to the authentication method used to secure NiFi. For example when using Kerberos Authentication in NiFi ensure that the users created match with the Kerberos principal.
To create a user perform the following tasks:
a) In the Ranger Admin go to Settings menu and select “User/Groups”
b) Click the “Add New User” button
c) Complete the User Detail screen, providing the User Name as the identity for the appropriate NiFi authentication method (e.g. Client DN, LDAP DN or Kerberos principal). Password and First Name are required by Ranger but are not used by NiFi. The Role selected should be User (groups are not used by the plugin at this time)
d) Save the new user and repeat for any other users who need access to NiFi
6. User entries must also be created for each node in the NiFi cluster. Repeat the “Add User” step; however, for User Name provide the Distinguished Name for each node as shown below:
7. In order for NiFi nodes to be authorized to communicate within the cluster a Proxy policy should be created. In the Ranger Access Manager menu select “Resource Based Policies” and select the NiFi service repository entry link. On the policy screen select the “Add New Policy” button
8. Provide the following for Policy Details:
a) Policy Name – provide a name for the policy (e.g. proxy)
b) NiFi Resource Identifier – Enter “/proxy” for NiFi’s proxy endpoint
c) Audit Logging – should be set to yes if logging was previously enabled
d) Allow conditions section – In the “Select User” field choose each NiFi node user that was previously created. For the Permissions field select “Write”.
e) Add the new policy
9. Once these authorizations have been created, it is now possible to confirm that Ranger can communicate with NiFi (attempting to do so before nodes were authorized would result in a communication error). Go to the Access Manager menu, select “Resource Based Policies” and select the edit button next to the NiFi service repository entry link.
10. Click on the Test Connection button located just below the Config Properties entries. Ranger should be able to Connect to NiFi successfully.
11. Now configure other user policies for accessing NiFi. To configure a NiFi admin/super user (or admin rights), a user can be added to the all – nifi-resource policy that was created by default. In the Ranger Access Manager menu select “Resource Based Policies” and select the NiFi service repository entry link. Then select the edit button next to the “all – nifi-resource” entry.
12. In the Allow Conditions section select the user(s) which will be applied to this policy. Also add both Read and Write permissions.
13. Save the policy with the new settings and confirm that the configured user can access NiFi with given rights by logging into NiFi on a node. Repeat login on each node in cluster to confirm policy is applied throughout.
14. Confirm that login access as well as proxy communication of nodes were audited in Ranger using the Audit screen and navigating to the “Access” tab
At this point Ranger can be used to administer policy for NiFi.
Troubleshooting
If there are problems with NiFi communication with Ranger, review the xa_secure.log (on Ranger's installation) as well as NiFi's nifi-app.log to determine the source of the issue. Many times this is due to certificates not being imported into Ranger's truststore or, if Kerberos was not enabled, the commonNameForCertificate value (in the NiFi service repository entry in Ranger) being inaccurate.
If there are problems with Ranger communication with NiFi, this also could be due to certificates not being imported into the NiFi nodes or the Ranger certificate not being appropriately identified. In addition to the previously mentioned logs, the nifi-user.log will be useful to review as well.
09-29-2016 05:06 PM
If you are using NiFi 1.0.0 with HDF 2.0 there is a plan for another release (e.g. v. 2.0.1) that should include this fix. In the meantime if you are using NiFi outside of HDF the current course of action is to download the actual NiFi source and build it separately (unfortunately there isn't a specific patch file to apply to an existing installation). Here is the quick start guide that has information on building from source. I hope this helps and thank you for highlighting this issue.
09-29-2016 04:37 PM
5 Kudos
Hi @Adda Fuentes, you are correct; this issue was documented and a fix was merged into NiFi's master repo recently. https://issues.apache.org/jira/browse/NIFI-2824
09-19-2016 11:15 PM
10 Kudos
NiFi has previously supported the ability to refer to flow file attributes, system properties, and environment properties within expression language (EL); however, the community requested an enhancement to also support custom properties. This would give users even more flexibility either in processing, handling flow content, or even in flow configuration (e.g. referring to a custom property in EL for connection, server, or service properties).

In NiFi versions 0.7 & 1.0.0 an enhancement was added to allow administrators to define custom property files on nodes within their cluster and configure NiFi with their location so those properties could be loaded and available within EL. A new field in the nifi.properties file (nifi.variable.registry.properties) is available for an administrator to set the paths of one or more custom properties files for use by NiFi.

Figure 1 - Custom Properties reference in nifi.properties

Once the nifi.properties file is updated, custom attributes can be used as needed. NOTE: custom properties should contain distinct property values in order to ensure they won’t be overridden by other property files or by existing environment, system, or flow file attributes.
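The override caveat above can be pictured as a lookup order. This is an illustrative sketch only; it assumes a resolution order of flow file attributes first, then custom properties, then system properties, and the property names and values are made up:

```python
from collections import ChainMap

# Illustrative EL resolution sketch: earlier maps shadow later ones, so a
# custom property with a non-distinct name is hidden by a flow file attribute.
flowfile_attrs = {"filename": "data.csv"}
custom_props = {"filename": "custom.txt", "output.dir": "/tmp/out"}
system_props = {"user.home": "/home/nifi"}

resolver = ChainMap(flowfile_attrs, custom_props, system_props)

print(resolver["filename"])    # "data.csv" -- the custom value is shadowed
print(resolver["output.dir"])  # "/tmp/out" -- a distinct name resolves fine
```

This is why the NOTE above recommends distinct names for custom properties: a clash with an existing attribute silently hides the custom value.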
To demonstrate, I have a flow that uses custom properties in EL with the UpdateAttribute processor and the PutFile processor.

Figure 2 - Test Flow Writing Custom Attribute Data

Figure 3 - UpdateAttribute Advanced Configuration

Figure 4 - PutFile Config Screen with Directory using Custom Property in Expression

The output of this flow saves attributes created from custom property values to a folder location that is also defined by a custom property.
This custom properties enhancement sets the stage for developing a richer Variable Registry that will provide even more flexibility in custom property management, providing UI-driven administration, variable scope management, and more.

For testing the flow in the above example, a template and referenced properties are available here: https://gist.github.com/YolandaMDavis/364307c1ab5fe89b2edcef5647180873
08-31-2016 03:04 PM
1 Kudo
Hi @Sami Ahmad, It looks as if the download link is to the actual NiFi template you'll need for the tutorial (called ServerLogGenerator.xml). NiFi templates are XML-based. You can actually save that file locally on your machine (just do a "save as" through the browser) and follow the instructions to upload the template you saved locally into NiFi. I hope this helps! P.S. Here's another tutorial where you can learn NiFi essentials: http://hortonworks.com/hadoop-tutorial/learning-ropes-apache-nifi/#section_4
07-27-2016 01:33 PM
2 Kudos
Hi @Utkarsh Garg I was able to extract a list of hashtags using the EvaluateJsonPath processor with the JsonPath you posted: $.entities.hashtags[*].text
If your goal is to only further process the hashtag data, you can configure the EvaluateJsonPath processor to place the matched hashtags from the tweet into flowfile-content. If you want more than just hashtags, you can configure the processor to populate the results in flow file attributes and create an attribute for all the data you want to extract from the tweet. That can be later paired with the AttributesToJSON processor, where you can pull all the attributes you matched in EvaluateJsonPath into a JSON object which can then become flowfile content.
Another approach could be using the JoltTransformJSON processor, which will allow you to convert your incoming tweet to another structure. In your case, if you wish to simply extract the hashtags, you can use that processor's "Shift" transformation operation with the following specification (which simply defines how you want the output to look):

{
  "entities": {
    "hashtags": {
      "*": {
        "text": "hashtext.[]"
      }
    }
  }
}
The above spec would extract the hashtags and create a single JSON object with an array of hashtags called hashtext. I hope this helps! Also I can try to post a snapshot of my configuration (the system is preventing me from uploading at the moment).
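For reference, the JsonPath $.entities.hashtags[*].text selects the same values as this plain Python traversal over a tweet-shaped object (the sample tweet below is made up):

```python
# Plain-Python equivalent of $.entities.hashtags[*].text on a sample tweet.
tweet = {
    "entities": {
        "hashtags": [
            {"text": "nifi", "indices": [0, 5]},
            {"text": "hadoop", "indices": [6, 13]},
        ]
    }
}

hashtext = [tag["text"] for tag in tweet["entities"]["hashtags"]]
print(hashtext)   # each hashtag's text field, in order
```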